AI Governance Frameworks for Healthcare — Complete Guide
Last updated: April 2026
Five federal agencies, six major frameworks, and a patchwork of state laws now govern how healthcare organizations design, deploy, and monitor AI. This guide tells you which apply to your organization, what each requires, and how to build a compliance program that satisfies all of them.
Why Healthcare AI Governance Is Different
Healthcare AI governance is more complex than enterprise AI governance because clinical algorithms directly affect patient safety, trigger federal medical device law, and intersect with HIPAA privacy requirements — all at once. A single AI-assisted prior authorization tool may simultaneously be subject to FDA marketing submission requirements, ONC HTI-1 transparency mandates, CMS coverage determination rules, and state-level disclosure laws. General AI governance frameworks like ISO 42001 were not designed for this regulatory stack.
The governance gap is real and documented: 84% of healthcare organizations have established AI governance committees, but only 12% have implemented a formal AI governance framework such as the NIST AI RMF. Meanwhile, 59% lack a formal documented process requiring governance approval before AI implementation — which means the committee exists on paper while algorithms move into clinical workflows unvetted. (Source: Censinet / CHIME Foundation, AI Adoption Survey, 2025.)
The market reflects the urgency. The global Clinical AI Model Governance sub-market was valued at $1.77 billion in 2025 and is projected to reach $71.12 billion by 2036 — an implied CAGR of roughly 40%. (Source: Future Market Insights, Clinical AI Model Governance Market Report, 2026.) At least 3,000 major U.S. healthcare entities are actively formalizing AI governance postures ahead of 2026 regulatory deadlines.
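The implied growth rate can be checked directly from the two cited market-size figures:

```python
# Verify the implied CAGR from the cited market-size figures.
start_value = 1.77   # USD billions, 2025 valuation
end_value = 71.12    # USD billions, 2036 projection
years = 2036 - 2025  # 11-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 40%
```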
The Major Healthcare AI Governance Frameworks
Five federal frameworks and two voluntary certification standards form the core of U.S. healthcare AI governance. Each addresses a different risk domain; most organizations need to satisfy several simultaneously.
NIST AI Risk Management Framework (AI RMF 1.0)
The NIST AI RMF is the broadest and most widely adopted framework — a voluntary, sector-neutral standard that provides the foundational vocabulary and process structure for AI governance programs across healthcare. Published by the National Institute of Standards and Technology, the AI RMF 1.0 organizes risk management around four core functions: Govern, Map, Measure, and Manage. Healthcare organizations use it to build AI governance committees, define risk tiers for clinical algorithms, and document accountability structures.
Critically, the NIST AI RMF is not an accreditation standard — there is no NIST certification and no external audit. Its value is as an operational scaffold: it gives organizations a defensible, regulator-recognized framework for showing their work. When FDA, ONC, or OCR scrutinize an AI deployment, a documented NIST AI RMF implementation demonstrates that a structured governance process was followed.
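The risk-tiering step mentioned above can be made concrete with a small sketch. The tier names and criteria here are illustrative assumptions, not NIST-prescribed categories:

```python
# Hypothetical risk-tiering rule for AI tools under a NIST AI RMF-style
# governance program. Tier names and criteria are illustrative, not
# defined by NIST; each organization sets its own.
def risk_tier(clinical_use: bool, autonomous: bool, phi_access: bool) -> str:
    """Assign a governance tier to an AI tool based on three risk flags."""
    if clinical_use and autonomous:
        return "Tier 1 - full committee review plus ongoing monitoring"
    if clinical_use or phi_access:
        return "Tier 2 - committee review before deployment"
    return "Tier 3 - standard IT change control"

# An adaptive sepsis-prediction model lands in the highest tier.
print(risk_tier(clinical_use=True, autonomous=True, phi_access=True))
```

The point of the sketch is that tier assignment is a documented, repeatable rule — exactly the kind of "showing your work" the AI RMF is designed to support.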
Who needs it: Any healthcare organization deploying AI in clinical, administrative, or operational workflows. FDA references AI RMF alignment in its guidance; HITRUST r2 mappings to AI RMF are available. NIST documentation is free; implementation consulting is the cost.
FDA AI/ML Medical Device Guidance (SaMD Framework)
FDA regulates AI that meets the definition of a medical device — specifically Software as a Medical Device (SaMD): software intended to diagnose, treat, cure, mitigate, or prevent disease or other conditions. As of 2025, approximately 1,200 AI/ML-enabled medical devices have been cleared or authorized, with 295 devices cleared in 2025 alone. Roughly 97% entered the market via the 510(k) premarket notification pathway. (Source: Innolitics, Year in Review: AI/ML Medical Device Clearances, 2025.)
Key FDA requirements for AI-enabled devices include: (1) Performance data demonstrating substantial equivalence to a predicate device; (2) Labeling disclosing training data demographics, limitations, and intended use; (3) A Predetermined Change Control Plan (PCCP) for devices designed to autonomously adapt post-deployment — only 30 devices (10% of 2025 clearances) have authorized PCCPs, making this one of the most significant compliance gaps in the field; (4) Post-market real-world performance monitoring. Average FDA total review time for AI/ML devices is 150 days. (Source: Innolitics, 2025.) FDA issued draft guidance on AI/ML-enabled medical device labeling in January 2025; PMA fees for FY2026 are $531,163 standard ($113,750 small business).
Who needs it: Health IT vendors, EHR developers, medical device manufacturers, and any health system that develops or significantly modifies AI tools used in clinical diagnosis or treatment decisions.
ONC HTI-1 Final Rule
The ONC Health Data, Technology, and Interoperability Rule (HTI-1), published December 13, 2023, establishes the first mandatory federal transparency requirements for AI and predictive algorithms embedded in certified EHR technology. Under HTI-1 Section (b)(11), developers of certified health IT must make source attribute information available to clinical users for all Decision Support Interventions (DSIs) — including: training data demographics, exclusion criteria, known limitations, and the funding sources behind algorithm development.
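A vendor-side completeness check over those source attributes might look like the following sketch. The field names are illustrative; the rule itself enumerates a longer attribute set:

```python
# Sketch: validate that a predictive DSI record carries the source
# attributes named in the text. Field names are illustrative; consult
# the HTI-1 rule for the complete attribute list.
REQUIRED_ATTRIBUTES = {
    "training_data_demographics",
    "exclusion_criteria",
    "known_limitations",
    "funding_sources",
}

def missing_attributes(dsi_record: dict) -> set:
    """Return required source attributes that are absent or empty."""
    return {a for a in REQUIRED_ATTRIBUTES if not dsi_record.get(a)}

record = {"training_data_demographics": "adults 18-90, 4 US regions",
          "known_limitations": "not validated for pediatric use"}
print(missing_attributes(record))  # the two attributes still undocumented
```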
The USCDI Version 3 baseline was effective January 1, 2026. ONC extended enforcement discretion for certain HTI-1 certification criteria to February 28, 2026. After that window, EHR developers and health IT vendors face formal certification consequences for non-compliant deployments of predictive DSIs.
Who needs it: Health IT vendors and EHR developers with ONC-certified products. Health systems that use certified EHR technology and deploy predictive clinical decision support tools must ensure their vendors are compliant and that source attributes are accessible to clinical staff.
State AI Laws — Colorado, California, Texas, and the Patchwork
No single federal AI law governs healthcare AI in the United States, which means organizations must navigate a growing patchwork of state requirements with varying effective dates and enforcement mechanisms.
- Colorado AI Act (SB24-205): Full enactment June 2026. Requires algorithmic impact assessments and risk management policies for high-risk AI used in consequential healthcare decisions. Colorado HB 1139 bans AI-only insurance coverage denials — a direct constraint on prior authorization automation.
- California AB 489 (effective January 1, 2026): Prohibits AI chatbots from presenting as licensed healthcare professionals. Applies to any chatbot deployed in patient-facing healthcare contexts.
- California AB 316 (effective 2026): Removes the "AI acted autonomously" liability defense, shifting full liability for AI-caused harm to the healthcare organization that deployed it.
- Texas SB 1188 (effective September 2025): Requires a licensed practitioner to personally review all AI-generated clinical content before decisions are made. Requires patient disclosure of AI use. Mandates U.S.-based EHR storage effective January 1, 2026.
- Illinois, Utah, Nevada: Additional disclosure, transparency, and algorithmic accountability requirements affecting healthcare AI deployments — details on the Regulatory Tracker.
- EU AI Act (effective August 1, 2024; mandatory compliance August 2, 2026): Classifies medical AI aligned with MDR Class IIa/IIb/III and IVDR Class A-D as High-Risk AI. U.S. companies operating in Europe or licensing AI to European entities must comply.
The highest regulatory density currently sits in Colorado, California, and Texas. Organizations operating nationally should assume state requirements will continue to expand and structure their governance programs to accommodate localized compliance interventions.
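Operationally, the patchwork reduces to a lookup from operating states to obligations. A minimal sketch, where the requirement labels paraphrase the list above and are not a complete legal inventory:

```python
# Sketch: map operating states to the healthcare-AI obligations named
# above. Labels paraphrase the statutes; this is not legal advice.
STATE_REQUIREMENTS = {
    "CO": ["impact assessments (SB24-205)", "no AI-only denials (HB 1139)"],
    "CA": ["chatbot disclosure (AB 489)", "deployer liability (AB 316)"],
    "TX": ["practitioner review (SB 1188)", "US-based EHR storage"],
}

def obligations(operating_states: list) -> dict:
    """Return tracked obligations for each state the organization serves."""
    return {s: STATE_REQUIREMENTS.get(s, ["check Regulatory Tracker"])
            for s in operating_states}

print(obligations(["CO", "TX", "FL"]))
```

Keeping this mapping in one maintained artifact is what "structuring for localized compliance interventions" means in practice: new state laws become new entries, not new programs.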
URAC and NCQA AI Standards
URAC and NCQA — the two leading healthcare accreditation bodies for health plans and managed care organizations — are actively incorporating AI governance into their accreditation standards. Health plans and managed care organizations seeking or maintaining URAC accreditation should expect AI governance documentation requirements to become explicit accreditation criteria as standards evolve.
IHS is uniquely positioned at this intersection: as the only URAC-certified accreditation consulting firm in the United States, IHS brings operational accreditation expertise to AI governance engagements that no other consulting firm can offer. For health plans and managed care organizations, AI governance is not a separate compliance workstream — it is an extension of existing URAC and NCQA compliance obligations. See the AI Governance Consulting service page for how IHS structures these integrated engagements.
HITRUST r2 and HAIGS
HITRUST r2 (gold standard) and the Healthcare AI Governance Standard (HAIGS 2024) provide the most rigorous third-party certification pathways for healthcare AI governance. HITRUST offers NIST AI RMF mapping and a structured audit cycle (r2 renewal every two years). In 2024, 99.41% of HITRUST-certified environments reported no data breaches. (Source: HITRUST, 2025 Trust Report.) HITRUST assessor fees range from $20,000–$40,000 (entry-level e1) to $60,000–$150,000+ (r2 gold standard).
Framework Comparison: Scope, Applicability, Enforcement, Timeline
Use this table to identify which frameworks apply to your organization and their current compliance deadlines.
| Framework | Governing Body | Scope | Who It Applies To | Enforcement Mechanism | Key Deadline | Mandatory? |
|---|---|---|---|---|---|---|
| NIST AI RMF 1.0 | NIST (federal) | End-to-end AI risk governance (Govern, Map, Measure, Manage) | Any organization deploying AI; referenced by FDA and regulators as best-practice standard | Voluntary — no certification, but regulatory safe harbor value | No deadline; recommended now | No (voluntary; referenced in FDA/ONC guidance) |
| FDA SaMD / AI-ML Guidance | FDA CDRH | AI/ML software meeting medical device definition; clinical diagnosis, treatment, prevention | Medical device manufacturers, health IT vendors with clinical AI, health systems developing AI tools | Mandatory — 510(k), De Novo, or PMA submission required; FDA enforcement; Warning Letters | January 2025 draft guidance; PCCP requirements ongoing | Yes — for AI meeting SaMD definition |
| ONC HTI-1 Final Rule | ONC (HHS) | Transparency for predictive Decision Support Interventions in certified EHR technology | ONC-certified health IT developers; health systems using certified EHR with predictive DSIs | Mandatory — loss of ONC certification; CMS reimbursement implications | February 28, 2026 (enforcement discretion window closes) | Yes — for certified health IT developers |
| Colorado AI Act (SB24-205) | Colorado AG | High-risk AI in consequential decisions; algorithmic impact assessments | Organizations deploying high-risk AI affecting Colorado residents in healthcare, insurance, employment | State AG enforcement; civil liability | June 2026 | Yes — for covered organizations in Colorado |
| EU AI Act (High-Risk AI) | EU AI Office | High-risk AI including medical AI (MDR Class IIa/IIb/III, IVDR Class A-D) | U.S. companies operating in EU or licensing AI to EU entities; global health IT vendors | Mandatory — fines up to €35M or 7% of global annual turnover | August 2, 2026 | Yes — extraterritorial for covered organizations |
| HITRUST r2 | HITRUST Alliance | Security, privacy, and AI governance certification; NIST AI RMF mapping available | Health plans, health systems, health IT vendors seeking third-party validation | Voluntary certification — contractual requirements from payers and health systems | Ongoing; 2-year renewal cycle | No (voluntary; increasingly required by contract) |
| HAIGS 2024 | Healthcare AI Governance Standard (industry) | End-to-end healthcare AI governance certification; 3-year audit cycle | Health systems and health IT organizations seeking formal AI governance certification | Voluntary certification | Ongoing | No (voluntary) |
| URAC / NCQA AI Standards | URAC / NCQA | AI governance as component of health plan and managed care accreditation | Health plans, managed care organizations, specialty pharmacy organizations | Accreditation-based — non-compliance risks accreditation loss; state licensing implications | Evolving; check current standards versions | Yes — for organizations seeking or maintaining accreditation |
Healthcare AI Governance Implementation Roadmap
A full AI governance program — from initial gap assessment to first certification — takes 6 to 12 months. Organizations facing multiple simultaneous deadlines (ONC HTI-1 enforcement through February 2026, Colorado AI Act June 2026, EU AI Act August 2026) should begin immediately to protect against stacking penalties and failed certification windows.
Phase 1: Planning, Scoping, and Gap Assessment (2–6 weeks)
Identify every AI tool in clinical and operational use — including shadow AI that entered workflows without formal approval. The Censinet/CHIME survey found that more than 90% of healthcare organizations lack automated AI product monitoring; 51% rely on ad hoc discovery. A structured shadow AI scan using automated tooling is the non-negotiable first step.
Deliverables: Complete AI inventory; regulatory mapping (which tools trigger FDA, ONC, CMS, state requirements); gap analysis against NIST AI RMF core functions; prioritized remediation roadmap.
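A structured inventory record, with a first-pass regulatory mapping, could be sketched like this. The fields and trigger rules are simplified illustrations, not a legal determination of what any given tool requires:

```python
from dataclasses import dataclass, field

# Sketch of an AI inventory entry plus a first-pass regulatory mapping.
# Trigger rules are simplified illustrations, not legal advice.
@dataclass
class AITool:
    name: str
    clinical_use: bool       # used in diagnosis/treatment decisions
    in_certified_ehr: bool   # embedded in ONC-certified health IT
    adaptive: bool           # retrains or updates post-deployment
    states: list = field(default_factory=list)

def regulatory_triggers(tool: AITool) -> list:
    """Return a first-pass list of frameworks the tool may trigger."""
    triggers = []
    if tool.clinical_use:
        triggers.append("FDA SaMD assessment")
    if tool.in_certified_ehr:
        triggers.append("ONC HTI-1 source attributes")
    if tool.adaptive and tool.clinical_use:
        triggers.append("PCCP review")
    if "CO" in tool.states:
        triggers.append("Colorado impact assessment")
    return triggers

sepsis = AITool("sepsis-predictor", True, True, True, states=["CO", "TX"])
print(regulatory_triggers(sepsis))
```

Every tool surfaced by the shadow AI scan gets a record like this, so the gap analysis in the deliverables list runs against structured data rather than a spreadsheet of tribal knowledge.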
Phase 2: Remediation and Policy Development (4–12 weeks)
The most labor-intensive phase. Build the governance infrastructure that regulatory frameworks require: AI governance committee with appropriate composition (note: ethics/bioethics professionals are absent from 75% of current healthcare AI governance committees, per Censinet/CHIME 2025), AI policies and charters, updated Business Associate Agreements covering AI training data and PHI handling, ONC HTI-1 source attribute documentation, bias and health equity impact assessments, and incident response playbooks for AI-related failures.
Key documents required: AI Governance Charter; Intervention Risk Management (IRM) Records; Source Attribute Documentation; PCCP (for FDA-regulated devices); Algorithmic Bias Assessments; AI Vendor BAAs; Software Bill of Materials (SBOM).
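A pre-audit completeness check over that document list can be sketched as follows (the document names mirror the list above; which ones apply depends on your regulatory stack):

```python
# Sketch: flag which governance documents from the list above remain
# outstanding before the mock survey. Names mirror the text.
REQUIRED_DOCS = [
    "AI Governance Charter", "IRM Records", "Source Attribute Documentation",
    "PCCP", "Algorithmic Bias Assessments", "AI Vendor BAAs", "SBOM",
]

def missing_docs(completed: set, fda_regulated: bool = True) -> list:
    """Return outstanding documents; PCCP applies only to FDA-regulated devices."""
    required = [d for d in REQUIRED_DOCS if fda_regulated or d != "PCCP"]
    return [d for d in required if d not in completed]

print(missing_docs({"AI Governance Charter", "SBOM"}, fda_regulated=False))
```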
Phase 3: Mock Survey / Pre-Assessment (3–6 weeks)
Internal audit against target framework (HAIGS 2024 or HITRUST r2). Identify remaining gaps before external assessors arrive. For organizations pursuing FDA clearance, this phase includes review of 510(k) or PMA documentation packages and simulation of FDA reviewer questions on training data adequacy and clinical validation methodology.
Phase 4: Validated Assessment / Formal Audit (1–4 weeks)
For HITRUST: formal assessment by a HITRUST-certified assessor. For ONC-related compliance: ONC Authorized Testing Laboratory (ONC-ATL) review. For FDA submissions: CDRH review process (average 150 days total review time). For state law compliance: legal review by counsel familiar with the specific state requirements.
Phase 5: Certification Decision and Ongoing Maintenance
Post-certification maintenance is not optional: annual ONC real-world testing, the HAIGS 3-year audit cycle, HITRUST r2 renewal every 2 years, ongoing PCCP documentation for FDA-cleared devices that self-update, and continuous state law monitoring as the regulatory patchwork expands. 63% of healthcare organizations plan to implement agentic AI within 12 months — every new agentic deployment restarts Phase 1 for that tool.
How IHS Helps Healthcare Organizations Navigate AI Governance
IHS is the only URAC-certified accreditation consulting firm in the United States. That distinction matters for healthcare AI governance because no other consulting firm can deliver AI governance as a direct extension of existing URAC and NCQA accreditation compliance — which is precisely how health plans and managed care organizations need it structured.
What Makes IHS Different
Large consultancies (Accenture, Deloitte, and the Big 4 accounting firms) charge $2,000,000+ per enterprise AI governance engagement and deliver strategy memos rather than operational execution. The 59% of healthcare organizations that lack a documented AI pre-implementation approval process do not need another strategy memo — they need documented workflows, configured governance committees, and completed policy libraries ready for regulatory scrutiny. That is IHS's operational lane.
Principal Thomas G. Goddard, JD, PhD brings both legal and clinical governance expertise to every engagement. IHS engagements deliver:
- Complete AI inventory and shadow AI remediation — structured scanning protocols, not ad hoc discovery
- Governance documentation library — AI charters, IRM records, source attribute logs, PCCPs, BAA rewrites, SBOM templates calibrated to your organization's specific regulatory stack
- Accreditation-integrated AI governance — for health plans and specialty pharmacies, AI governance woven into existing URAC/NCQA compliance programs, not managed as a separate workstream
- State-law compliance mapping — proactive identification of Colorado, California, Texas, and other state requirements that apply to your operations
- Algorithmic bias assessment — equity impact analyses aligned with CMS and OCR scrutiny standards, leveraging IHS's deep health plan accreditation experience
- ONC HTI-1 implementation checklists — practitioner-facing documentation for health IT teams and EHR implementation leads
Related IHS Services
- AI Governance & Algorithmic Compliance Consulting — Full Service Overview
- Healthcare AI Regulatory Tracker — State-by-State Requirements
- Healthcare AI Governance FAQ
- Healthcare AI Governance Case Study
- HITRUST Cybersecurity Consulting — for organizations pursuing HITRUST r2 certification
Frequently Asked Questions
Which federal agencies regulate AI in healthcare and what does each oversee?
Five federal agencies hold primary jurisdiction over healthcare AI, each covering a different risk domain. FDA (Center for Devices and Radiological Health) regulates AI that meets the definition of a medical device — clinical diagnosis, treatment, and prevention AI embedded in SaMD. ONC (Office of the National Coordinator for Health Information Technology) mandates transparency requirements for predictive algorithms in certified EHR technology under HTI-1. CMS (Centers for Medicare & Medicaid Services) governs AI used in coverage determinations, prior authorization, and Medicare Advantage risk adjustment — including AI-driven RADV audits of diagnosis coding. OCR (HHS Office for Civil Rights) enforces HIPAA requirements for PHI used in AI model training and monitors algorithmic bias as a health equity issue under Section 1557. FTC enforces deceptive practice rules against false claims in AI marketing. Most healthcare organizations are subject to at least three of these agencies simultaneously.
Does the FDA regulate all AI used in healthcare settings?
No — FDA regulates only AI that meets the legal definition of a medical device: software intended to diagnose, treat, cure, mitigate, or prevent disease or other conditions in a specific patient. Approximately 1,200 AI/ML-enabled medical devices have been cleared or authorized by FDA, with 97% using the 510(k) premarket notification pathway. AI used for administrative, operational, revenue cycle, or scheduling purposes generally does not require FDA marketing submission. However, the line between "clinical decision support" (potentially exempt) and "medical device software" (regulated) is fact-specific, and FDA has issued guidance clarifying that clinical recommendations intended for specific patients typically meet the device definition. Organizations should assess each AI tool individually against FDA's four-factor clinical decision support framework before assuming it is unregulated.
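The all-or-nothing logic of that assessment can be sketched as follows. The criteria are paraphrased for illustration — this is a conceptual sketch, not a legal test to rely on; the actual determination belongs with regulatory counsel:

```python
# Sketch of the all-four-criteria logic behind FDA's Non-Device CDS test.
# Criteria paraphrased for illustration; assess actual tools with counsel.
def non_device_cds(no_signal_processing: bool,
                   displays_medical_info: bool,
                   supports_recommendations: bool,
                   basis_independently_reviewable: bool) -> bool:
    """Software is potentially Non-Device CDS only if ALL criteria are met."""
    return all([no_signal_processing, displays_medical_info,
                supports_recommendations, basis_independently_reviewable])

# A tool whose reasoning clinicians cannot independently review fails the
# test, and likely meets the device definition.
print(non_device_cds(True, True, True, False))
```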
What is ONC HTI-1 and what AI transparency requirements does it impose?
ONC HTI-1 (Health Data, Technology, and Interoperability Rule, published December 13, 2023) is the first federal rule imposing mandatory transparency requirements on predictive algorithms embedded in certified EHR technology. Under Section (b)(11), developers of certified health IT must make "source attribute" information accessible to clinical users for every predictive Decision Support Intervention (DSI). Required source attributes include: training data demographics, exclusion criteria used in model development, known model limitations, and the funding sources behind the algorithm. The USCDI Version 3 data baseline became effective January 1, 2026; ONC extended enforcement discretion for certain criteria to February 28, 2026. After that window closes, EHR developers with non-compliant predictive DSIs risk loss of ONC certification, which carries downstream implications for CMS reimbursement eligibility. Health systems that use certified EHR products must confirm their vendors' compliance and that source attribute information is actually accessible to clinical staff at the point of care.
What is a Predetermined Change Control Plan (PCCP) for AI/ML medical devices?
A Predetermined Change Control Plan (PCCP) is an FDA-approved protocol that specifies the types of algorithm modifications a cleared medical device may make autonomously — without requiring a new 510(k) or PMA submission — as long as changes remain within predefined safety parameters documented at the time of initial clearance. PCCPs are the mechanism that allows AI/ML medical devices to adapt and retrain in the real world without triggering the full FDA review process every time the algorithm updates. As of 2025, only 30 of the 295 devices cleared that year (approximately 10%) had an authorized PCCP, making this one of the most significant compliance gaps in the field. Organizations and vendors operating adaptive AI in clinical settings without an authorized PCCP may be subjecting every significant algorithm update to a new marketing submission requirement — and may not know it. IHS assists medical device manufacturers and health systems in PCCP design, documentation, and integration with post-market drift monitoring protocols.
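Conceptually, a PCCP defines a pre-authorized envelope: an update inside the envelope proceeds; one outside it triggers a new marketing submission. A toy sketch — the parameter names and thresholds are invented for illustration, not drawn from any actual clearance:

```python
# Toy illustration of PCCP-style change control: an algorithm update is
# pre-authorized only if it stays inside bounds fixed at clearance time.
# Parameter names and thresholds are invented for illustration.
PCCP_BOUNDS = {
    "sensitivity_min": 0.90,    # performance floor set at clearance
    "max_new_features": 0,      # no new input features permitted
}

def update_preauthorized(new_sensitivity: float, new_features: int) -> bool:
    """True if the proposed update stays within the pre-authorized envelope."""
    return (new_sensitivity >= PCCP_BOUNDS["sensitivity_min"]
            and new_features <= PCCP_BOUNDS["max_new_features"])

print(update_preauthorized(0.93, 0))  # within bounds: update proceeds
print(update_preauthorized(0.88, 0))  # below floor: new submission needed
```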
Does the Colorado AI Act apply to healthcare organizations?
Yes — the Colorado AI Act (SB24-205, full enactment June 2026) applies to developers and deployers of "high-risk AI systems" used in consequential decisions affecting Colorado residents, including healthcare decisions. Organizations subject to the Act must conduct algorithmic impact assessments, implement risk management policies for high-risk AI, and provide disclosure to affected individuals. Separately, Colorado HB 1139 directly bans AI-only coverage denials in insurance — a direct constraint on prior authorization automation for health plans operating in Colorado. Colorado's law is one of the most comprehensive state AI acts in the U.S. and is expected to influence other states' legislation. Healthcare organizations with Colorado operations should begin algorithmic impact assessment processes now to meet the June 2026 effective date. Organizations with multi-state operations should structure their AI governance programs to accommodate the full state regulatory patchwork, as California, Texas, Illinois, Utah, and Nevada all have active requirements.
How does the EU AI Act affect US healthcare companies?
The EU AI Act has extraterritorial reach: U.S. healthcare companies that operate in Europe, license AI software to European entities, or provide services to European healthcare organizations must comply with its High-Risk AI requirements by August 2, 2026. The Act classifies medical AI aligned with MDR Class IIa/IIb/III and IVDR Class A-D as High-Risk, triggering requirements for conformity assessments, technical documentation, transparency to users, human oversight mechanisms, and registration in the EU AI database. Penalties for the most serious violations reach €35 million or 7% of global annual turnover, whichever is higher; breaches of high-risk AI obligations carry fines up to €15 million or 3%. U.S. health IT vendors with any European customer relationships should treat the August 2, 2026 deadline as a hard compliance requirement. The NIST AI RMF provides a useful baseline, but EU AI Act conformity assessments require documentation that goes significantly beyond voluntary NIST alignment.
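The "whichever is higher" rule is a simple maximum over a fixed cap and a share of worldwide turnover. The tier figures below are passed in as arguments, so any penalty tier can be checked the same way:

```python
# The "whichever is higher" penalty rule is a max() over a fixed cap and
# a percentage of worldwide annual turnover.
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Return the maximum applicable fine for a given penalty tier."""
    return max(cap_eur, pct * turnover_eur)

# Example: a vendor with EUR 2B global turnover under a EUR 35M / 7% tier.
print(f"{max_fine(2e9, cap_eur=35e6, pct=0.07):,.0f}")  # 140,000,000
```

For any organization with meaningful turnover, the percentage term dominates, which is why global vendors treat the deadline as a board-level risk.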
What documentation must a healthcare organization maintain for deployed AI systems?
A complete healthcare AI documentation library includes eight categories of records: (1) AI Governance Charter — formal document establishing committee composition, roles, responsibilities, and approval workflows for all AI tools; (2) Intervention Risk Management (IRM) Records — risk analyses for all predictive algorithms covering validity, fairness, safety, and security (FAVES criteria); (3) Source Attribute Documentation — ONC HTI-1-compliant transparency logs accessible to clinical end-users; (4) Predetermined Change Control Plans (PCCP) — FDA-required for adaptive AI/ML medical devices; (5) Bias and Equity Impact Assessments — statistical audits across demographic subgroups; (6) Incident Response Playbooks — protocols for AI-related failures, hallucinations, and data incidents compliant with HIPAA breach notification; (7) AI Vendor BAAs — updated Business Associate Agreements covering AI training data and PHI handling; (8) Software Bill of Materials (SBOM) — component inventory for AI/ML systems in clinical workflows. Organizations pursuing HITRUST r2 or HAIGS certification must have all eight categories complete and auditable before assessment begins.
How much does healthcare AI governance consulting cost?
Project-based MVP governance framework engagements range from $75,000 to $250,000+, depending on organizational size, number of AI tools in scope, and target certification framework. Managed services and retainers at enterprise scale run $50,000 to $180,000+ per month. Large-firm enterprise engagements (Accenture, Deloitte) typically exceed $2,000,000 per engagement. Niche technical hourly rates for healthcare data scientists and algorithmic compliance specialists run $150–$250+/hour. External certification costs add to consulting fees: HITRUST assessor fees range from $20,000–$40,000 (e1 entry) to $60,000–$150,000+ (r2 gold standard). FDA PMA fees for FY2026 are $531,163 (standard) or $113,750 (small business). The cost of non-compliance is a more useful frame for most organizations: state law penalties, FDA Warning Letters, ONC certification loss, and CMS reimbursement risk typically dwarf consulting costs. IHS structures engagements for mid-market healthcare organizations — health plans, specialty pharmacies, mid-size health systems — that need operational execution, not enterprise strategy decks.
Ready to Build Your AI Governance Program?
IHS works with health plans, specialty pharmacies, health systems, and health IT vendors to build AI governance programs that satisfy the full regulatory stack — FDA, ONC, CMS, state laws, URAC, and NCQA — in an integrated engagement, not a series of disconnected workstreams. Contact us to schedule a gap assessment.
Schedule a Gap Assessment: thomas.goddard@integralhs.com
Related pages: AI Governance Consulting | Full FAQ | Regulatory Tracker | Case Study