Healthcare AI Governance: Frequently Asked Questions
Last updated: April 2026
Answers to the most common questions from health plans, hospitals, health IT vendors, and specialty pharmacies navigating FDA, ONC HTI-1, CMS, and state AI compliance requirements.
What Is Healthcare AI Governance
- What is AI governance in healthcare and why does it matter now?
AI governance in healthcare is the set of documented policies, processes, and controls that govern how an organization selects, approves, deploys, monitors, and retires AI tools in clinical, coverage, and administrative workflows.
It matters now because multiple regulatory deadlines converged in 2025–2026:
- ONC HTI-1 transparency enforcement began in early 2026
- Texas SB 1188 took effect September 2025, requiring licensed practitioner review of AI clinical content
- California AB 489 and AB 316 took effect January 2026
- Colorado's AI Act takes full effect June 2026
- EU AI Act mandatory high-risk AI compliance: August 2, 2026
Under California AB 316, organizations can no longer use "AI acted autonomously" as a liability defense. The organization bears full liability. A documented governance program is now a legal prerequisite, not a best practice.
- What is the difference between AI governance and AI compliance in healthcare?
AI governance is the internal organizational framework — committees, policies, approval workflows, and monitoring processes — that controls how AI is used. AI compliance is the external-facing demonstration that your governance program satisfies specific regulatory requirements (FDA, ONC, CMS, state law).
You cannot achieve compliance without governance, but governance alone is not sufficient: you need documented evidence that your governance program maps to each applicable regulatory standard. A committee that meets monthly but produces no documentation is not compliant governance — it is theater. IHS builds both the internal governance infrastructure and the compliance documentation that satisfies external audit requirements.
- What is the NIST AI Risk Management Framework and how does it apply to healthcare?
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary federal framework providing a structured approach to managing AI risks across four core functions: Govern, Map, Measure, and Manage. In healthcare, it is increasingly referenced as the baseline governance standard by health systems, CMS, and emerging accreditation standards such as the Healthcare AI Governance Standard (HAIGS).
The gap between committee formation and actual framework implementation is striking: 84% of healthcare organizations have established AI governance committees, but only 12% have implemented a formal AI governance framework such as NIST AI RMF (Censinet/CHIME Foundation, 2025). The NIST AI RMF is free to use; implementation labor and consulting are the primary costs. Colorado's AI Act SB24-205 references NIST-aligned risk management practices for its algorithmic impact assessment requirement.
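To illustrate what "implementing" the framework can mean in practice, here is a minimal sketch of one entry in an internal AI governance register organized by the four RMF functions. The schema, field names, and system are assumptions for illustration; NIST prescribes the functions, not a data format.

```python
# Illustrative sketch of one entry in an internal AI governance register,
# organized by the four NIST AI RMF core functions. The schema and field
# names are assumptions -- the RMF prescribes functions, not a data format.
from dataclasses import dataclass, field

@dataclass
class AIRMFRecord:
    system_name: str
    govern: dict = field(default_factory=dict)   # policies, roles, approval status
    map: dict = field(default_factory=dict)      # context, intended use, stakeholders
    measure: dict = field(default_factory=dict)  # validity, bias, and drift metrics
    manage: dict = field(default_factory=dict)   # mitigations and incident response

record = AIRMFRecord(
    system_name="sepsis-risk-model-v2",           # hypothetical system
    govern={"approved_by": "AI Governance Committee", "approved": "2026-01-15"},
    map={"intended_use": "inpatient sepsis risk stratification"},
    measure={"auroc": 0.87, "subgroup_audit_complete": True},
    manage={"drift_review_cadence_days": 90},
)
```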
Regulatory Requirements by Agency
- Which federal agencies regulate healthcare AI — and what has each done?
Five federal agencies have active healthcare AI regulatory programs:
- FDA: Regulates AI as Software as a Medical Device (SaMD). Approximately 1,200 AI/ML devices cleared or authorized cumulatively; 295 in 2025 alone. Draft guidance on AI/ML-enabled device labeling issued January 2025. Average review time: 150 days. Approximately 97% of AI/ML devices enter via the 510(k) pathway.
- ONC: HTI-1 Final Rule (December 2023) requires transparency for predictive Decision Support Interventions in certified EHRs. USCDI v3 baseline effective January 1, 2026. Enforcement discretion for certain criteria extended to February 28, 2026.
- CMS: Scrutinizes AI in Medicare Advantage prior authorization and coverage determination. Completing Payment Year 2018–2024 RADV audit backlog using AI-driven analytics to detect fraudulent diagnosis code inflation.
- OCR: Enforces HIPAA requirements for PHI used in AI model training, including Minimum Necessary Standard and breach notification obligations for AI-related incidents.
- FTC: Monitors consumer-facing health AI for deceptive practices. Has issued AI guidance applicable to health apps and wellness platforms operating outside HIPAA coverage.
- What is ONC HTI-1 and what does it require from health IT vendors?
ONC HTI-1 (Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing Final Rule) was published December 13, 2023. It establishes the first mandatory transparency requirements for predictive Decision Support Interventions (DSIs) in certified health IT.
Under the § 170.315(b)(11) certification criterion, certified EHR systems must provide clinical end-users with source attributes for each predictive DSI:
- The intervention's developer identity
- Training data demographics and exclusion criteria
- Known biases in the model
- Intended use parameters and contraindications
USCDI Version 3 became the baseline standard January 1, 2026. ONC extended enforcement discretion for certain criteria through February 28, 2026. Health IT vendors who have not configured source attribute documentation for their predictive DSI features are out of compliance with certified EHR requirements — and their health system customers inherit the compliance exposure.
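As a rough illustration, a vendor's source-attribute documentation for one predictive DSI might be stored as a structured record like the sketch below. The field names and values are hypothetical, and HTI-1 defines a longer list of source attributes than the four named above:

```python
# Hypothetical sketch of a source-attribute record for one predictive DSI.
# Field names and values are illustrative; HTI-1 defines more attributes
# than are shown here. The record must be accessible to clinical end-users.
dsi_source_attributes = {
    "intervention_id": "readmission-risk-v3",
    "developer": "Example Health AI, Inc.",          # developer identity
    "training_data": {
        "demographics": {"age_range": "18-89", "contributing_sites": 12},
        "exclusion_criteria": ["pediatric patients", "hospice admissions"],
    },
    "known_biases": [
        "underperforms for patients with sparse encounter history",
    ],
    "intended_use": "30-day readmission risk scoring at discharge",
    "contraindications": ["not validated for psychiatric admissions"],
}
```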
- What does CMS require for AI used in coverage or prior authorization decisions?
CMS has not yet issued a single comprehensive AI prior authorization rule, but enforcement activity is accelerating through several mechanisms:
- RADV audits: CMS is completing Payment Year 2018–2024 RADV audits using AI-driven analytics — organizations whose AI-assisted coding inflated diagnosis codes face audit exposure.
- State alignment: Colorado HB 1139 (state law, but aligned with CMS direction) prohibits insurance coverage denials based solely on AI recommendations without human review.
- Interoperability and Prior Authorization rules: CMS requires documented decision criteria for coverage determinations — AI-generated recommendations must be traceable and auditable.
The practical requirement: any AI used in coverage or prior authorization decisions must have documented governance, human oversight protocols, and audit trails that CMS examiners can review (a sketch of such an audit record follows).
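As a minimal sketch — with a hypothetical structure, not a CMS-specified format — a traceable prior authorization record might capture the model version, the criteria applied, and the human reviewer's final determination:

```python
# Illustrative audit-trail entry for an AI-assisted prior authorization
# recommendation. The structure is hypothetical; the point is that the
# model output, criteria applied, and human reviewer are all recorded.
import datetime

audit_entry = {
    "case_id": "PA-2026-000123",
    "model": {"name": "prior-auth-triage", "version": "1.4.2"},
    "recommendation": "approve",
    "decision_criteria": ["level-of-care criteria met", "prior imaging on file"],
    "human_review": {
        "reviewer": "clinical-reviewer-417",
        "final_determination": "approve",
        "overrode_model": False,
    },
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}
```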
- What state AI laws affect healthcare organizations?
The state AI legislative landscape is active and accelerating. Key laws by state:
- Colorado: SB24-205 (AI Act, full enactment June 2026) requires algorithmic impact assessments and risk management policies for high-risk AI in healthcare decisions. HB 1139 prohibits insurance coverage denials based solely on AI recommendations.
- California: AB 489 (effective January 1, 2026) prohibits AI chatbots from presenting as licensed healthcare professionals. AB 316 (effective January 2026) removes the AI-acted-autonomously liability defense — healthcare organizations bear full liability for AI decisions. AB 2013 requires disclosure of AI training data.
- Texas: SB 1188 (effective September 2025) requires licensed practitioner personal review of all AI-generated clinical content before decisions, mandates patient disclosure, and requires U.S.-based EHR storage effective January 1, 2026.
- Utah: SB 149/SB 226 and HB 452 establish AI disclosure requirements applicable to healthcare consumer interactions.
- Illinois: AI laws address employment and consumer-facing AI with healthcare implications.
- Nevada: Emerging AI disclosure legislation under development.
No single national standard exists. Multi-state healthcare organizations must track each jurisdiction independently. See our state-by-state regulatory tracker for current status.
- Does the EU AI Act apply to U.S. healthcare companies?
Yes, if you operate in Europe, license software to European entities, or your AI systems affect people in the EU. The EU AI Act entered into force August 1, 2024, with mandatory compliance for high-risk AI systems required by August 2, 2026.
Medical AI devices classified under EU MDR Class IIa/IIb/III or IVDR Class A-D are categorized as high-risk AI with full compliance obligations: conformity assessments, technical documentation, human oversight measures, quality management systems, and registration in the EU AI database. U.S. health IT vendors selling to European hospital or health plan customers — even without a European office — face extraterritorial application if their AI outputs affect EU residents. The extraterritorial reach is comparable to GDPR: if you process data about EU residents, you are subject to the regulation regardless of where your servers are located.
Who Needs a Healthcare AI Governance Program
- Which healthcare organizations are required to have an AI governance program?
Any organization deploying AI in high-stakes healthcare decisions needs a formal AI governance program. Requirements by organization type:
- Health plans: CMS scrutiny of AI in prior authorization; Colorado HB 1139 prohibition on AI-only coverage denials; URAC health plan accreditation quality and equity standards apply to AI-assisted utilization management.
- Hospitals and health systems: FDA SaMD compliance for AI-enabled clinical tools; ONC HTI-1 transparency requirements for EHR-integrated decision support; algorithmic bias audit obligations under Colorado and California law.
- Health IT vendors: ONC HTI-1 certification requires source attribute documentation for predictive DSI features. FDA 510(k) or De Novo required for AI-enabled software features that meet SaMD definition.
- Specialty pharmacies: AI-assisted dispensing workflow governance required within existing URAC/NABP accreditation quality frameworks.
- Behavioral health providers (Texas): SB 1188 requires licensed practitioner review of all AI-generated clinical content before any clinical decision.
- Medical device manufacturers: FDA clearance or authorization required for AI/ML-enabled devices; PCCP required for adaptive algorithms.
- Does the FDA regulate all AI used in healthcare or only clinical AI?
FDA regulates AI that meets the definition of a medical device under the Federal Food, Drug, and Cosmetic Act — meaning AI intended to diagnose, cure, mitigate, treat, or prevent disease. This includes clinical decision support tools that meet SaMD criteria. FDA explicitly excludes from device regulation: administrative AI (scheduling, billing, documentation), AI used for general wellness purposes, and clinical decision support that displays information for a clinician to independently review without making a specific treatment recommendation. The line between regulated and non-regulated AI is not always clear — the FDA's 2022 final Clinical Decision Support guidance provides the current framework, but the agency's position continues to evolve. When in doubt, a formal regulatory pathway analysis is the correct first step.
Required AI Governance Documentation
- What documentation must a healthcare organization maintain for AI deployments?
Required AI governance documentation includes:
- AI Policy and Governance Charter — establishes committee composition, decision rights, approval workflows, and escalation protocols for evaluating, approving, and monitoring all AI tools.
- Intervention Risk Management (IRM) Records — comprehensive risk analyses for each algorithm covering the FAVES criteria (fair, appropriate, valid, effective, safe) plus security (a minimal record sketch follows this list).
- Source Attribute Documentation (ONC HTI-1) — transparency logs for predictive DSIs: training data demographics, exclusion criteria, known biases, intended use parameters — accessible to clinical end-users.
- Predetermined Change Control Plans (PCCP) — FDA-mandated protocols for autonomous algorithm updates. 10% of 2025 AI/ML device clearances included an authorized PCCP.
- Algorithmic Bias and Equity Impact Assessments — statistical audits demonstrating algorithm performance across demographic subgroups.
- Incident Response and Breach Notification Playbooks — AI-specific protocols compliant with HIPAA breach notification rules, covering hallucinations and data poisoning scenarios.
- AI Vendor Business Associate Agreements — updated to specifically address PHI use in model training, subprocessor obligations, and AI-related breach notification.
- Software Bill of Materials (SBOM) — component inventory for AI/ML systems in clinical workflows, required for cybersecurity risk management under NIST and HITRUST standards.
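Below is a minimal sketch of what one such artifact — an IRM record keyed to the FAVES criteria — might look like as structured data. The schema, field names, and values are illustrative assumptions, not a regulatory format:

```python
# Minimal sketch of an Intervention Risk Management (IRM) record keyed to
# the FAVES criteria (fair, appropriate, valid, effective, safe), with a
# security section alongside. Schema and values are hypothetical.
faves_irm_record = {
    "algorithm": "ed-triage-acuity-v1",
    "fair": {"subgroup_auroc_gap": 0.03, "bias_audit_date": "2026-02-01"},
    "appropriate": {"in_scope_settings": ["adult emergency department"]},
    "valid": {"external_validation_sites": 3, "auroc": 0.84},
    "effective": {"prospective_outcome_delta": "+4% timely escalation"},
    "safe": {"clinician_override_rate": 0.07, "open_safety_incidents": 0},
    "security": {"sbom_on_file": True, "last_pen_test": "2025-11-10"},
}
```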
- What is a Predetermined Change Control Plan (PCCP) and when is it required?
A Predetermined Change Control Plan is an FDA-approved protocol that specifies in advance how an AI/ML-enabled medical device may autonomously update its algorithm without requiring a new 510(k) or PMA submission for each update. The PCCP describes: the types of modifications planned (performance improvements, dataset expansions), the methods used to implement and validate changes, and the controls to ensure modifications remain within safe and effective parameters.
FDA finalized PCCP guidance in 2025. Of 295 AI/ML devices cleared in 2025, 30 (approximately 10%) included an authorized PCCP. Organizations deploying devices with adaptive algorithms — those that retrain on new clinical data post-deployment — without an authorized PCCP may be making unapproved modifications to an FDA-cleared device, which is a significant regulatory violation. Applying for a PCCP as part of the initial 510(k) submission is the correct approach for any adaptive algorithm.
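To make the "within safe and effective parameters" idea concrete, here is a hedged sketch of a PCCP-style boundary check: an internal gate that accepts a proposed model update only if it matches a pre-specified change type and stays inside pre-specified performance bounds. All names and thresholds are illustrative, not FDA-specified:

```python
# Hypothetical PCCP-style boundary check. The plan pre-specifies allowed
# change types and performance limits; any update outside those limits
# falls outside the authorized PCCP and would require a new submission.
PCCP_BOUNDS = {
    "min_auroc": 0.82,                 # floor pre-specified in the plan
    "max_subgroup_auroc_gap": 0.05,    # fairness tolerance, also pre-specified
    "allowed_change_types": {"retraining_on_new_site_data"},
}

def within_pccp(change_type: str, auroc: float, subgroup_gap: float) -> bool:
    """Return True only if the proposed update stays inside PCCP limits."""
    return (
        change_type in PCCP_BOUNDS["allowed_change_types"]
        and auroc >= PCCP_BOUNDS["min_auroc"]
        and subgroup_gap <= PCCP_BOUNDS["max_subgroup_auroc_gap"]
    )

assert within_pccp("retraining_on_new_site_data", 0.85, 0.03)
```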
- What are the HIPAA privacy implications of training AI on patient data?
OCR has signaled that training AI models on PHI implicates several HIPAA requirements:
- Minimum Necessary Standard: Covered entities must limit PHI used for AI training to the minimum necessary to accomplish the purpose — bulk PHI exports for training purposes may violate this standard.
- Business Associate Agreements: Must specifically address what PHI AI vendors may use for training, how it must be de-identified or limited, subprocessor obligations, and breach notification protocols specific to AI training environments.
- De-identification requirements: The Privacy Rule's Safe Harbor and Expert Determination methods govern whether AI training datasets qualify as de-identified. "Anonymized" training data that can be re-identified through model outputs is not compliant de-identification (a field-level sketch follows this list).
- Research exception: PHI used for AI development may qualify for the research exception under certain conditions — but this requires an IRB approval or waiver process, not just an internal determination.
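As a simplified illustration of the Safe Harbor method's field-level component, the sketch below drops the Privacy Rule's identifier categories from a flat record before it enters a training dataset. Field names are assumptions, and real de-identification must also handle free text, embedded dates, and re-identification risk through model outputs:

```python
# Illustrative Safe Harbor-style field filtering before assembling an AI
# training dataset. Field names map loosely to the Privacy Rule's 18
# identifier categories; this alone is not complete de-identification.
SAFE_HARBOR_FIELDS = {
    "name", "address", "dates_related_to_individual", "phone", "fax",
    "email", "ssn", "mrn", "health_plan_id", "account_number",
    "license_number", "vehicle_id", "device_id", "url", "ip_address",
    "biometric_id", "face_photo", "other_unique_id",
}

def strip_identifiers(record: dict) -> dict:
    """Remove Safe Harbor identifier fields from a flat patient record."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
```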
- How do you build an AI governance committee in a health system?
A functional AI governance committee requires four core roles under the PPTO framework:
- Technical AI Lead / Chief Medical Information Officer (CMIO) — provides technical evaluation of algorithm validity and performance; currently present on 45% of AI committees.
- Compliance and Privacy Manager — brings HIPAA, NIST AI RMF, and state law expertise; responsible for regulatory mapping and documentation.
- Clinical AI Specialist — monitors algorithm accuracy, hallucinations, and real-world drift post-deployment.
- AI Ethics Officer — conducts equity impact assessments; currently present on only 25% of committees, leaving a 75% gap in bias and equity oversight.
Committee effectiveness requires documented authority: approval is required before any AI tool enters clinical or coverage workflows, and the committee has standing to reject or suspend tools. Without formal approval authority, the committee is advisory only and does not satisfy regulatory pre-implementation approval requirements.
Top Compliance Risks in Healthcare AI
- What are the top compliance failures in healthcare AI deployments?
The most common deficiencies IHS identifies in healthcare AI governance reviews, ranked by frequency and regulatory exposure:
- No centralized AI inventory. Over 90% of healthcare organizations lack automated AI product monitoring; 51% rely on ad hoc discovery and 51% on vendor release notes. You cannot govern what you haven't inventoried.
- Black box clinical algorithms. ONC HTI-1 requires source attribute documentation for predictive DSIs in certified EHRs — most organizations have no process for generating this documentation.
- No formal pre-implementation approval. 59% of healthcare organizations deploy AI without a documented committee approval gateway. This is the single most frequently cited deficiency in AI governance reviews.
- Inadequate FDA performance validation. 510(k) submissions relying on non-diverse or statistically insufficient retrospective data fail to demonstrate substantial equivalence.
- No post-market drift monitoring. Static deployment without real-time performance tracking or PCCP documentation exposes organizations both to FDA non-compliance and to clinical risk from model degradation (a minimal drift check is sketched after this list).
- Ethics blind spots. Ethics or bioethics professionals are absent from 75% of governance committees, creating systematic gaps in equity and bias oversight.
- Legacy AI vendor BAAs. Agreements executed before AI capabilities were added to vendor platforms typically don't address PHI use in model training or AI-specific breach scenarios.
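Referencing the drift-monitoring gap above, here is a minimal sketch of what an automated drift check can look like: compare a rolling window of live performance against the validated baseline and escalate when degradation exceeds a governance-set tolerance. The metric, values, and threshold are illustrative:

```python
# Minimal drift-check sketch: flag when a rolling window of live model
# performance falls materially below the validated baseline, so the
# governance committee can review. Numbers here are illustrative.
from statistics import mean

BASELINE_AUROC = 0.86   # from the validated pre-market performance study
ALERT_DROP = 0.05       # degradation tolerance set by governance policy

def drift_alert(rolling_auroc_window: list[float]) -> bool:
    """Return True when recent performance breaches the drift tolerance."""
    return mean(rolling_auroc_window) < BASELINE_AUROC - ALERT_DROP

if drift_alert([0.79, 0.80, 0.78, 0.81]):
    print("Model drift detected: escalate to AI governance committee")
```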
- What are the bias and health equity risks in healthcare AI?
Algorithmic bias in healthcare AI can manifest as differential diagnostic accuracy, differential treatment recommendations, or differential resource allocation across demographic subgroups — resulting in worse care outcomes for historically marginalized populations. The risk is not hypothetical: FDA has issued guidance on the need for diverse representation in AI training datasets, OCR has authority to investigate algorithmic discrimination under HIPAA's non-discrimination provisions, and CMS scrutinizes health equity outcomes in managed care contracting.
The governance gap is structural: ethics or bioethics professionals — the roles most likely to identify and address equity concerns — are absent from 75% of healthcare AI governance committees. Colorado SB24-205 mandates algorithmic impact assessments that must evaluate differential impacts across demographic groups. Organizations without documented bias auditing have no defense against regulatory scrutiny or civil rights complaints.
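For concreteness, a basic subgroup audit of the kind an algorithmic impact assessment documents might compute a per-group true positive rate and report the largest gap, as in this sketch (synthetic data; the group labels and choice of metric are illustrative):

```python
# Illustrative subgroup performance audit: compute true positive rate per
# demographic group and report the largest gap. Data here is synthetic;
# real audits use validated cohorts and multiple fairness metrics.
from collections import defaultdict

def subgroup_tpr(rows):
    """rows: iterable of (group, y_true, y_pred). Returns TPR per group."""
    positives, hits = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in rows:
        if y_true == 1:
            positives[group] += 1
            hits[group] += y_pred
    return {g: hits[g] / positives[g] for g in positives}

rates = subgroup_tpr([("A", 1, 1), ("A", 1, 1), ("B", 1, 0), ("B", 1, 1)])
gap = max(rates.values()) - min(rates.values())
print(rates, f"max TPR gap: {gap:.2f}")  # a large gap warrants review
```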
- What are the risks of AI hallucination in clinical decision support tools?
AI hallucination — the generation of plausible but factually incorrect outputs — in clinical decision support creates direct patient safety risk and significant regulatory exposure. From a governance perspective: organizations must have documented incident response protocols specifically addressing AI hallucinations, including detection mechanisms, reporting pathways, and clinical override procedures. Texas SB 1188 partially addresses this by requiring licensed practitioner personal review of all AI-generated clinical content before any clinical decision. HIPAA breach notification may be triggered if a hallucination event results in unauthorized disclosure of PHI or causes harm to a patient. Under California AB 316, the organization — not the AI vendor — bears liability for clinical decisions made based on hallucinated AI output. Documented human-in-the-loop oversight requirements are the primary mitigation mechanism.
Costs, Engagement, and IHS Services
- How much does healthcare AI governance consulting cost?
Project-based AI governance program builds range from $75,000 to $250,000+ depending on organizational size, complexity, and regulatory scope. Managed service retainers for ongoing governance maintenance run $50,000 to $180,000+/month at enterprise scale. Enterprise AI transformation retainers at large consultancies such as Accenture and Deloitte exceed $2,000,000. IHS serves mid-market healthcare organizations with project-based engagements without that large-firm overhead.
The cost of non-compliance provides context: FDA PMA user fees for AI-enabled medical devices are $531,163 (FY2026 standard fee, reduced to $113,750 for small businesses). HITRUST r2 assessor fees run $60,000–$150,000+. CMS RADV audit penalties for fraudulent coding can be substantial. ONC-ATL certification fees for health IT vendors: $15,000–$30,000 annually for Single Patient API, $7,000–$28,000 for Bulk Data API. A proactive governance program is materially less expensive than reactive remediation after an adverse finding.
- How does healthcare AI governance connect to URAC or ACHC accreditation?
URAC and ACHC accreditation standards require quality management, health equity, and utilization management infrastructure that directly overlaps with AI governance requirements. For URAC-accredited health plans: health equity and quality standards already require documented processes for identifying and mitigating disparate outcomes — algorithmic bias auditing extends that existing framework to AI-assisted coverage decisions without creating a separate compliance effort. For URAC-accredited specialty pharmacies: AI-assisted dispensing workflow governance fits within existing clinical quality oversight requirements. IHS positions AI governance as an extension of your existing accreditation program, reducing total compliance cost and leveraging documentation infrastructure already in place.
Ready to Build Your AI Governance Program?
IHS provides operational AI governance consulting for healthcare organizations navigating FDA, ONC, CMS, and state AI compliance requirements. Start with a gap assessment to understand your regulatory exposure and what it will take to close the gaps.
Request a Gap Assessment