AI Regulation in Healthcare: Federal & State Requirements
Last updated: April 2026
The regulatory landscape for healthcare AI is expanding fast. Multiple federal agencies and a growing number of states have enacted or are enacting AI-specific requirements affecting health plans, hospitals, health IT vendors, and medical device manufacturers. This tracker covers current status, effective dates, key requirements, and compliance implications — organized by jurisdiction.
Federal Regulations
Five federal agencies have active healthcare AI regulatory programs. No single federal AI law governs all healthcare AI — compliance requires mapping your organization's specific AI deployments to each applicable agency's requirements.
FDA — AI/ML-Enabled Medical Devices (Software as a Medical Device)
Key Requirements
- Market authorization required for AI/ML devices meeting the SaMD definition — either 510(k) premarket notification (approximately 97% of AI/ML devices), De Novo classification, or PMA (highest risk). Cumulative cleared/authorized AI/ML devices: approximately 1,200 as of 2025; 295 cleared in 2025 alone.
- Labeling requirements: Draft guidance (January 2025) specifies required labeling elements for AI/ML-enabled devices — including algorithm description, training data characteristics, validation performance data, known limitations, and human oversight requirements.
- Predetermined Change Control Plan (PCCP): Required for adaptive algorithms that update post-deployment. The PCCP defines in advance how the algorithm may change within pre-specified safety parameters without requiring a new submission. 30 devices (10% of 2025 clearances) included authorized PCCPs. Organizations deploying adaptive algorithms without an authorized PCCP may be making unapproved device modifications.
- Post-market surveillance: MDR (Medical Device Reporting) obligations apply to AI/ML devices — malfunctions and adverse events including AI hallucinations or performance failures must be reported.
- Average review time: 150 days for AI/ML device clearance. Documentation quality directly affects timeline.
Impact on Health Plans
Health plans that license AI-enabled clinical decision support tools from vendors bear secondary exposure — if the vendor's AI tool requires FDA clearance and lacks it, the health plan's use of that tool in clinical workflows creates regulatory exposure. Vendor AI governance due diligence should include verification of FDA authorization status for any AI used in clinical decision-making.
FY2026 User Fees
- PMA (full application, standard): $531,163
- PMA (small business): $113,750
- 510(k): significantly lower than PMA; specific amounts published in annual Federal Register notice
ONC HTI-1 Final Rule — Algorithm Transparency for Predictive Decision Support Interventions
Key Requirements — Section (b)(11) Source Attributes
Certified health IT that includes predictive Decision Support Interventions must provide clinical end-users with source attribute information for each predictive DSI. Required source attributes include:
- Identity of the intervention developer
- Funding sources used to develop the intervention
- Intervention description and intended use
- Training data characteristics: demographic breakdown, exclusion criteria, known limitations
- Intervention output and the rationale for how the output was determined
- Information about how the intervention performed in testing environments, including information on fairness, bias, and equity
Impact on Health Systems
Health systems using certified EHRs with predictive clinical decision support features must verify that their EHR vendor has configured source attribute documentation for all predictive DSI features. If the vendor has not implemented HTI-1 transparency requirements, the health system's certified EHR is out of compliance — which can affect Promoting Interoperability (formerly Meaningful Use) attestation and CMS incentive payment eligibility.
Compliance Action Required
- Inventory all predictive DSIs deployed in certified EHR environment
- Verify vendor has implemented Section (b)(11) source attribute configuration
- Ensure source attributes are accessible to clinical end-users at point of care
- Document compliance for audit purposes
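The inventory and verification steps above lend themselves to a simple structured record. The sketch below is a minimal Python illustration, using hypothetical field names rather than ONC's official schema; it tracks Section (b)(11)-style source attributes per predictive DSI and flags gaps to resolve with the vendor before an audit:

```python
from dataclasses import dataclass, fields

@dataclass
class PredictiveDSIRecord:
    """One row of a predictive DSI inventory (illustrative fields, not ONC's schema)."""
    dsi_name: str
    developer: str                 # identity of the intervention developer
    funding_sources: str           # funding used to develop the intervention
    intended_use: str              # description and intended use
    training_data_demographics: str
    exclusion_criteria: str
    known_limitations: str
    output_rationale: str          # how the output was determined
    fairness_testing_summary: str  # fairness / bias / equity test results

def missing_source_attributes(record: PredictiveDSIRecord) -> list[str]:
    """Return the names of any source attributes left blank."""
    return [f.name for f in fields(record) if not getattr(record, f.name).strip()]

# Hypothetical entry: a vendor-supplied sepsis model with incomplete documentation
sepsis_model = PredictiveDSIRecord(
    dsi_name="Sepsis early-warning score",
    developer="Example EHR Vendor",
    funding_sources="",            # gap: vendor has not supplied this
    intended_use="Flag adult inpatients at elevated sepsis risk",
    training_data_demographics="",
    exclusion_criteria="Patients under 18",
    known_limitations="Not validated for obstetric populations",
    output_rationale="Gradient-boosted model over vitals and labs",
    fairness_testing_summary="",
)

print(missing_source_attributes(sepsis_model))
# lists the attributes to chase down with the vendor before audit
```

Running this across every predictive DSI in the certified EHR environment produces the audit-ready documentation trail the checklist calls for.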
CMS — AI in Coverage Determinations & RADV Audit Enforcement
Key Requirements
- Prior authorization transparency: CMS Interoperability and Prior Authorization rules (CMS-0057-F) require health plans to provide specific reasons for prior authorization denials — AI-generated denial recommendations must be traceable, auditable, and explainable to members and providers.
- RADV audit exposure: CMS is deploying AI-driven analytics to complete Payment Year 2018–2024 RADV audit backlog. Organizations that used AI-assisted coding that inflated diagnosis code capture face significant recoupment exposure. AI-assisted risk adjustment coding programs must have documented governance and human review protocols.
- Health equity obligations: CMS contracts with Medicare Advantage and Medicaid managed care organizations include health equity requirements. AI-assisted utilization management tools must be audited for differential impacts on protected populations.
Impact on Health Plans
Medicare Advantage plans using AI in coverage or prior authorization workflows face the highest CMS scrutiny. Documented human oversight of AI coverage recommendations, audit trails for AI-influenced decisions, and bias auditing for differential denial rates across demographic groups are the core compliance requirements. Colorado HB 1139 (state law) prohibits AI-only coverage denials — health plans operating in Colorado must ensure human clinical review before any AI-generated denial recommendation becomes a final coverage decision.
HIPAA (HHS Office for Civil Rights) — PHI Use in AI Model Training & AI-Related Breach Notification
Key Requirements for AI
- Minimum Necessary Standard: PHI used for AI model training must be limited to the minimum necessary to accomplish the training purpose. Bulk PHI exports for vendor model training without minimum necessary analysis are potentially non-compliant.
- Business Associate Agreements: Must specifically address: what PHI the AI vendor may use for training purposes, how it must be de-identified or limited, subprocessor obligations for AI infrastructure providers, and breach notification protocols for AI training environments.
- De-identification: AI training datasets must satisfy Safe Harbor or Expert Determination de-identification standards. "Anonymized" datasets that can be re-identified through model outputs or inference attacks do not qualify as de-identified under HIPAA.
- Breach notification: AI-related incidents — including data poisoning attacks, unauthorized model extraction, or AI hallucinations that result in unauthorized PHI disclosure — trigger HIPAA breach notification obligations.
- Security Rule: AI systems that process, store, or transmit PHI are subject to HIPAA Security Rule administrative, physical, and technical safeguard requirements, including Software Bill of Materials management for AI/ML components.
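A lightweight screening step can catch obvious identifier columns before PHI reaches a training pipeline. The sketch below uses a partial, illustrative token list; the Safe Harbor standard (45 CFR 164.514(b)(2)) enumerates 18 identifier categories, and column-name matching is a screening aid, not a substitute for full de-identification or Expert Determination:

```python
# Partial, illustrative list of column-name tokens suggesting HIPAA Safe Harbor
# identifier categories. The real standard enumerates 18 categories; this only
# flags likely problems for human review.
SAFE_HARBOR_FLAGS = {
    "name", "ssn", "mrn", "email", "phone", "address", "zip",
    "date_of_birth", "ip_address", "device_serial", "photo_url",
}

def flag_identifier_columns(columns: list[str]) -> list[str]:
    """Flag columns whose names suggest a direct identifier is still present."""
    return [c for c in columns if any(tok in c.lower() for tok in SAFE_HARBOR_FLAGS)]

# Hypothetical export destined for a vendor's model-training environment
training_cols = ["patient_name", "zip_code", "hba1c", "encounter_date", "dx_codes"]
print(flag_identifier_columns(training_cols))
```

Flagged columns should trigger the minimum necessary analysis and de-identification review described above before any data leaves the covered entity.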
NIST AI Risk Management Framework (AI RMF 1.0)
Framework Structure
The NIST AI RMF 1.0 organizes AI risk management across four core functions:
- Govern: Establish organizational culture, policies, and accountability structures for AI risk management
- Map: Categorize AI systems, identify context and risks associated with each deployment
- Measure: Analyze and assess AI risks using appropriate metrics and methods
- Manage: Prioritize and treat AI risks; document residual risks
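The four functions can be tracked as a per-system evidence register: each deployed AI system points at the artifact documenting each function, and anything unevidenced surfaces as a gap. A minimal sketch, with hypothetical system and artifact names:

```python
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

# Illustrative register: one entry per deployed AI system, each RMF function
# pointing at the document that evidences it (None = no artifact on file yet).
ai_register = {
    "readmission-risk-model": {
        "govern": "AI-GOV-policy-v3.pdf",
        "map": "system-card-readmit-2025.docx",
        "measure": None,          # no bias / performance assessment on file
        "manage": None,           # no residual-risk sign-off on file
    },
}

def documentation_gaps(register: dict) -> dict[str, list[str]]:
    """List RMF functions with no evidence artifact, per AI system."""
    return {
        system: [fn for fn in RMF_FUNCTIONS if artifacts.get(fn) is None]
        for system, artifacts in register.items()
    }

print(documentation_gaps(ai_register))
# {'readmission-risk-model': ['measure', 'manage']}
```

A committee that reviews this register and closes the gaps is producing exactly the NIST-mapped documentation most organizations currently lack.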
Healthcare Adoption Gap
84% of healthcare organizations have established AI governance committees, but only 12% have implemented a formal AI governance framework such as NIST AI RMF (Censinet/CHIME Foundation, 2025). This gap — between committee existence and framework implementation — is where regulatory exposure lives. A committee that meets but produces no NIST-mapped documentation does not satisfy regulatory requirements that reference the AI RMF.
State AI Laws Affecting Healthcare
State AI legislation is accelerating. The following laws are enacted, and several are already in effect. Additional states have legislation under active consideration. Multi-state healthcare organizations must track each jurisdiction's requirements independently — no federal preemption currently applies.
| State / Law | Status | Effective Date | Key Healthcare Requirement | Who Is Affected |
|---|---|---|---|---|
| Colorado SB24-205 (AI Act) | Enacted, phased implementation | Fully effective: June 2026 | Algorithmic impact assessments and risk management policies for high-risk AI in healthcare decisions affecting Colorado residents | Health plans, health systems, health IT vendors, any organization making consequential AI-assisted decisions about Colorado residents |
| Colorado HB 1139 | Active | Enacted 2024 | Prohibits health insurance coverage denials based solely on AI recommendations; human clinical review required before final denial | Health insurers operating in Colorado; Medicare Advantage plans serving Colorado members |
| California AB 489 | Active | January 1, 2026 | Prohibits AI chatbots from presenting as licensed healthcare professionals; disclosure required when AI is used in patient interactions | Telehealth platforms, patient-facing AI chatbots, behavioral health digital tools operating in California |
| California AB 316 | Active | 2026 | Removes AI-acted-autonomously as a liability defense; healthcare organizations bear full liability for AI-influenced clinical decisions | All healthcare organizations using AI in clinical decision-making serving California patients |
| California AB 2013 | Active | 2026 | Requires disclosure of training data used to develop AI systems deployed in high-stakes contexts including healthcare | AI developers and health IT vendors deploying AI systems in California healthcare settings |
| Texas SB 1188 | Active | September 2025 (clinical review); January 1, 2026 (EHR storage) | Licensed practitioner must personally review all AI-generated clinical content before clinical decision; patient disclosure required; U.S.-based EHR storage required | All healthcare providers licensed in Texas using AI in clinical workflows |
| Utah SB 149 / SB 226 | Active | 2024–2025 | AI disclosure requirements for consumer-facing AI interactions; prohibitions on AI impersonating licensed professionals | Healthcare organizations with patient-facing AI tools; telehealth platforms; digital health applications serving Utah residents |
| Utah HB 452 | Active | 2025 | AI liability framework applying to AI-assisted decisions; extends to healthcare AI contexts | Healthcare organizations using AI in clinical and coverage decisions affecting Utah residents |
| Illinois AI Laws | Active | Various (2024–2025) | Employment AI anti-discrimination; consumer-facing AI disclosure; healthcare AI use in hiring and administrative decisions | Healthcare organizations in Illinois using AI in employment decisions and patient-facing administrative workflows |
| Nevada (Emerging) | Monitor | Pending | AI disclosure legislation under active development; expected to address healthcare AI deployment transparency | Healthcare organizations serving Nevada residents |
Note: State AI legislation is moving quickly. Multiple additional states have bills in committee as of April 2026. IHS monitors state AI legislation affecting healthcare on an ongoing basis. Contact us for the most current state-specific compliance assessment for your organization.
International: EU AI Act
EU AI Act — High-Risk Medical AI Mandatory Compliance
Medical AI Classification as High-Risk
The EU AI Act classifies AI systems used as safety components in medical devices or as medical devices themselves — under EU MDR Class IIa/IIb/III or IVDR Class A-D — as high-risk AI. High-risk AI triggers the full compliance framework:
- Risk management system: Documented AI risk management processes aligned with ISO 31000 and ISO 14971 (medical device risk management)
- Data governance: Training, validation, and testing data must meet quality criteria; data governance documentation required
- Technical documentation: Comprehensive technical file including system description, design specifications, performance metrics, and validation data
- Transparency and provision of information: Instructions for use disclosing capabilities, limitations, and human oversight requirements
- Human oversight measures: Documented mechanisms enabling human monitoring and intervention
- Accuracy, robustness, and cybersecurity: Validation against applicable standards; cybersecurity requirements aligned with EU Medical Device Regulation
- Conformity assessment: Third-party conformity assessment by a notified body or self-assessment depending on classification
- EU database registration: Registration in the EU AI Act database before placing the system on the EU market
- Quality management system: ISO 13485-aligned QMS required for medical AI
Extraterritorial Application for U.S. Organizations
The EU AI Act applies extraterritorially in a manner comparable to GDPR. U.S. healthcare organizations face EU AI Act obligations if: (1) they operate healthcare facilities or services in EU member states; (2) they license health IT or AI-enabled medical device software to European healthcare providers, health plans, or patients; or (3) their AI systems produce outputs that affect EU residents, regardless of where the AI system is hosted. U.S. health IT vendors selling EHR or clinical decision support software to European hospital customers — even without a European corporate presence — are subject to the EU AI Act requirements for their AI-enabled features.
Penalties
Non-compliance with EU AI Act requirements for high-risk AI systems: up to €15 million or 3% of global annual turnover, whichever is higher. For prohibited AI practices: up to €35 million or 7% of global annual turnover.
Upcoming Regulatory Changes
The following regulatory changes are anticipated or confirmed for the remainder of 2026 and beyond. Healthcare organizations should begin compliance preparation now — most require 3–6 months of documentation development to achieve audit-ready status.
Colorado AI Act (SB24-205) — Full Effective Date
Colorado's AI Act takes full effect in June 2026, requiring algorithmic impact assessments and risk management policies for high-risk AI in healthcare decisions affecting Colorado residents. Organizations operating in Colorado or serving Colorado patients that have not completed algorithmic impact assessments will be non-compliant as of the effective date. The Colorado HB 1139 prohibition on AI-only coverage denials is already in effect.
Compliance action: Complete algorithmic impact assessments for all high-risk AI in healthcare decision workflows. Document risk management policies. Establish human review protocol for coverage determinations.
EU AI Act — Mandatory High-Risk AI Compliance Deadline
All high-risk AI systems — including AI medical devices classified under EU MDR/IVDR — must be fully compliant with EU AI Act requirements. Conformity assessments, technical documentation, human oversight measures, and EU database registration must be complete. New high-risk AI systems placed on the EU market after this date without compliance are in violation.
Compliance action: Classify AI systems under EU AI Act risk tiers. Complete conformity assessments. Prepare technical documentation. Register high-risk AI systems in EU database before August 2, 2026.
CMS RADV Audit Backlog Completion — Payment Year 2018–2024
CMS is targeting completion of the Payment Year 2018–2024 RADV audit backlog using AI-driven analytics to detect fraudulent diagnosis code inflation. Medicare Advantage organizations that used AI-assisted coding programs that inflated HCC risk scores face recoupment exposure. Organizations without documented governance for AI-assisted coding programs and without human review protocols for AI-generated coding recommendations are at highest risk.
Compliance action: Review AI-assisted coding governance documentation. Verify human review protocols for AI-generated coding recommendations. Prepare audit response documentation.
State AI Legislative Activity — Additional States Expected
AI legislation is under active consideration in multiple additional states including New York, Massachusetts, Maryland, and Washington. The legislative trend is toward: algorithmic impact assessment requirements, AI disclosure to patients and members, prohibition on AI-only high-stakes decisions, and AI vendor accountability requirements. Organizations should monitor state legislation in every state where they operate or serve patients.
Compliance action: Establish state AI legislation monitoring process. Build governance documentation flexible enough to satisfy varying state requirements without needing to rebuild from scratch for each jurisdiction.
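A flexible multi-jurisdiction approach can start from a simple requirements map: tag each state law with requirement categories, then compute the combined obligation set for the states where the organization operates. The mapping below is an illustrative simplification of the tracker above, not legal guidance:

```python
# Illustrative mapping of state AI laws to requirement tags (scope simplified
# from the tracker above; verify against current statute text per state).
STATE_REQUIREMENTS = {
    "CO": {"impact_assessment", "human_review_denials", "risk_mgmt_policy"},
    "CA": {"chatbot_disclosure", "no_autonomy_defense", "training_data_disclosure"},
    "TX": {"clinician_review", "patient_disclosure", "us_ehr_storage"},
    "UT": {"ai_disclosure", "no_impersonation"},
}

def combined_obligations(operating_states: list[str]) -> set[str]:
    """Union of requirement tags across every state where the org operates."""
    obligations: set[str] = set()
    for state in operating_states:
        obligations |= STATE_REQUIREMENTS.get(state, set())
    return obligations

# A plan serving Colorado and Texas members must satisfy the union of both sets
print(sorted(combined_obligations(["CO", "TX"])))
```

Governance documentation built against the union of tags, rather than one statute at a time, avoids rebuilding from scratch as each new state law lands.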
Regulatory Tracker FAQ
- What is the current status of ONC HTI-1 enforcement for health IT vendors?
ONC HTI-1 Final Rule was published December 13, 2023. USCDI Version 3 became the baseline standard January 1, 2026. ONC extended enforcement discretion for certain certification criteria through February 28, 2026 — after that date, full enforcement applies.
Health IT vendors with ONC-certified technology must provide source attribute documentation for all predictive Decision Support Interventions under Section (b)(11): training data demographics, exclusion criteria, known biases, and intended use parameters accessible to clinical end-users. Vendors who have not configured this documentation are out of compliance with their ONC certification requirements.
- When does the EU AI Act require compliance for medical AI?
The EU AI Act entered into force August 1, 2024. Mandatory compliance for high-risk AI systems — including medical AI devices classified under EU MDR Class IIa/IIb/III or IVDR Class A-D — is required by August 2, 2026.
U.S. healthcare organizations operating in Europe or licensing software to European entities are subject to these requirements regardless of where the organization is headquartered. The extraterritorial reach is comparable to GDPR.
- What does Colorado's AI Act require from healthcare organizations?
Colorado SB24-205 takes full effect in June 2026. It requires algorithmic impact assessments and risk management policies for high-risk AI in healthcare decisions — covering AI used in insurance coverage determinations, clinical recommendations, and other consequential decisions affecting Colorado residents.
Colorado HB 1139 (companion law, already active) prohibits health insurance coverage denials based solely on AI recommendations without human clinical review. Together, these laws create the most comprehensive state AI governance framework currently applicable to healthcare in the U.S.
- What does Texas SB 1188 require from healthcare providers?
Texas SB 1188 took effect September 2025. Key requirements:
- A licensed practitioner must personally review all AI-generated clinical content before any clinical decision is made based on that content
- Healthcare organizations must disclose to patients when AI was used in their care
- EHR storage for Texas patients must be U.S.-based (effective January 1, 2026)
The law applies to any healthcare provider licensed in Texas using AI in clinical decision-making workflows — including providers using AI through their EHR vendor's built-in tools. The vendor's AI feature does not insulate the provider from compliance obligations.
- What did California AB 316 change about AI liability in healthcare?
California AB 316, effective 2026, removes the AI-acted-autonomously defense in healthcare liability cases. Prior to AB 316, healthcare organizations could argue that an AI system's autonomous action — rather than a human decision — limited the organization's liability for adverse outcomes.
AB 316 eliminates that defense: the organization is fully liable for clinical decisions made with AI involvement, regardless of the degree of autonomous AI action. This makes documented human oversight protocols and AI governance a legal prerequisite. Organizations without documented governance — specifically without documented human review requirements for AI-influenced clinical decisions — have no liability defense for AI-related adverse outcomes affecting California patients.
Need a Compliance Assessment Against These Requirements?
IHS maps your organization's current AI deployments against each applicable regulatory requirement — federal, state, and international — and produces a prioritized remediation roadmap with regulatory deadlines. Start with a gap assessment before the next deadline passes.