Uniformed Services University Introduces Web App for Ethical AI Use in Medical Research

Uniformed Services University is pioneering a new interactive web app to train medical and graduate students in the responsible use of artificial intelligence throughout the scientific research process.

Uniformed Services University faculty recently introduced a novel web app designed to seamlessly integrate ethical AI into military medical research and education. (Photo credit: Tom Balfour, USU)

April 1, 2026 by USU External Affairs

At the Uniformed Services University (USU), an interactive workshop recently introduced a novel web app designed to seamlessly integrate ethical artificial intelligence (AI) into military medicine research and education. Led by Lt. Col. (Dr.) Joshua Duncan, alongside co-developer Lt. Col. (Dr.) Justin G. Peacock, the session guided students through a series of activities that mirror the lifecycle of a research project. This practical simulation spanned from initial literature reviews to data analysis, culminating in manuscript preparation.

The module focuses heavily on leveraging AI for research, supporting medical students who are returning from clerkships and preparing residency applications. Peacock "vibecoded" the web app — using generative AI tools to write the software itself — demonstrating how non-programmers can successfully develop robust applications.

"Both Dr. Duncan and I believe strongly that, done right, we can leverage these tools for ethical and effective purposes in a lot of different spaces, including research," Peacock said. "The key is to teach students how to use them in a way that protects their integrity and also their original thought and ideas."

The workshop aligns directly with the medical school's evolving curriculum, emphasizing the foundational principle that AI must serve as a tool for "enhancement, not replacement." The training spans the entire research workflow, teaching students how to move from initial idea generation and identifying strategic gaps in the literature to developing specific aims and methodologies, and finally to polishing projects through copy editing. Crucially, the curriculum requires students to remain actively engaged with their original ideas so they do not relinquish intellectual control to the AI tool.

Pedagogy and Preventing Burnout

The self-contained educational unit employs peer-based learning, faculty-led demonstrations, and comparative examples of appropriate tool usage. Students favored the module’s requirement for short, immediate reflections over the lengthy, delayed reflections often found in standard medical education modules.

Beyond preserving academic integrity, this streamlined learning approach offers a direct operational benefit: preventing burnout. By allowing AI to handle non-essential administrative tasks such as outlining and copy editing, students and faculty can adapt to increasing productivity expectations across the medical field without sacrificing wellness. Leaders anticipate scaling this modular framework to other medical disciplines, including biochemistry, cardiac physiology, and microbiology, creating a new wave of medical education that maintains a strict human-in-the-loop standard.

"This showcases nicely how we can keep the learner in the loop in that process," Peacock said. "We can leverage the tools to accelerate learning in a way that is ethical and effective."

Navigating the "Rules of the Road"

To ensure integrity, the web app expands on specific "Rules of the Road" for responsible AI use. These guidelines are critical for maintaining the high standards of ethics within military medicine:

  • HIPAA Compliance: Never input patient data into an AI tool unless it is explicitly approved for Protected Health Information (PHI). Researchers must remain mindful of HIPAA regulations.
  • Authorship: AI should be used for copyediting and outlining rather than ghostwriting.
  • Disclosure and Transparency: Disclose the use of AI in your work to avoid concerns about integrity.
  • "The Hallucination Rule": Trust but verify all AI output. AI can generate content that sounds credible but is entirely fabricated.

Combating Hallucinations and Bias

Workshop participants explicitly noted the distinct danger of AI hallucinations in a clinical context, where fabricated facts could heavily influence critical medical decisions. One student illustrated this hallucination effect by citing an experience where an AI tool provided an entirely made-up study and falsely attributed it to a famous figure in the field.

To mitigate these operational risks, students learned to provide highly specific parameters that direct the AI toward accurate answers, a technique sometimes referred to as "meta-prompting." Duncan emphasized that researchers must operate under the assumption that all AI-generated citations are hallucinated until they are independently verified against the primary literature. The workshop also rigorously addressed how AI can accelerate unethical research practices such as "p-hacking," and highlighted how systematic bias is often baked into tools trained on general internet data.

By mastering these digital tools at the microscopic research level, students are better prepared to navigate the macroscopic complexities of the modern healthcare landscape. The ultimate goal of this workshop and tool is to ensure USU graduates remain at the "tip of the spear," leading the necessary change in how AI is harnessed for the future of medicine.