Review of VHA’s Use of Generative Artificial Intelligence
Report Information
Summary
The VA Office of Inspector General (OIG) identified a potential patient safety risk related to the Veterans Health Administration’s (VHA’s) use of generative artificial intelligence (AI) chat tools for clinical care and documentation. Generative AI creates new, original content by learning patterns from existing data. During a national review initiated on October 16, 2025, the OIG found that VHA lacks a formal process to report, track, and respond to safety issues associated with generative AI use. Without such a process, VHA has neither a feedback loop nor a means of detecting patterns that could improve AI tools used in clinical settings.
VHA authorizes two general-purpose AI chat tools, VA GPT and Microsoft 365 Copilot Chat, for use with patient health information. These tools generate responses to clinical prompts, and their output can be used to support medical decision-making and copied into the electronic health record. However, generative AI can produce inaccurate outputs, which may affect diagnosis and treatment decisions.
VHA Directive 1050.01(1) requires the Office of Quality Management and the National Center for Patient Safety (NCPS) to oversee VHA quality and patient safety programs. Interviews with leaders from VHA’s NCPS and National AI Institute and from the Office of Information Technology’s Chief AI Officer team revealed that the generative AI chat tools were deployed without coordination with NCPS. The OIG is concerned about VHA’s ability to promote and safeguard patient safety.
The OIG continues to monitor this issue and will include further analysis in its final report.