The most dangerous myth about medical artificial intelligence is that the decisive variable is intelligence. The decisive variable is governance: who trained the model, on what data, with what constraints, with what monitoring, and with what liability when the output is wrong. The current surge in attention to AI regulation reflects a pragmatic shift. Stakeholders have realized that clinical adoption depends less on dazzling demonstrations and more on institutional confidence that the tools can be controlled, audited, and corrected.
The FDA is signaling a lifecycle approach, not a one-time clearance mindset
For AI-enabled device software, the FDA has moved toward guidance that treats software as a continuously evolving artifact. The agency’s Predetermined Change Control Plans guidance outlines a pathway for managing modifications to AI- and machine learning-enabled device software while maintaining regulatory oversight, so that anticipated model updates do not each require a new submission.
This matters because AI models degrade or drift. Performance can change with new patient populations, clinical practice patterns, and data distributions. A governance model that assumes static performance is incompatible with real clinical environments. The FDA’s approach implies that safe AI must be managed like a living system, with documentation and controls that anticipate change.
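To make “anticipating change” concrete, here is a minimal monitoring sketch in Python. It compares the distribution of a model’s output scores in a recent window against a reference window using SciPy’s two-sample Kolmogorov–Smirnov test. The alert threshold, window sizes, and the simulated data are illustrative assumptions, not part of any FDA guidance or a specific vendor’s method.

```python
"""Minimal post-deployment drift check: compare recent model scores
against a reference window. Threshold and windowing are illustrative."""
import numpy as np
from scipy import stats


def score_drift(reference_scores: np.ndarray,
                recent_scores: np.ndarray,
                alpha: float = 0.01) -> dict:
    """Two-sample Kolmogorov-Smirnov test on model output scores.

    A small p-value suggests the recent score distribution differs from
    the reference (validation-time) distribution and warrants review.
    """
    statistic, p_value = stats.ks_2samp(reference_scores, recent_scores)
    return {
        "ks_statistic": float(statistic),
        "p_value": float(p_value),
        "drift_flagged": bool(p_value < alpha),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Reference: scores logged during validation; recent: a newer window
    # drawn from a shifted population. Both are simulated for illustration.
    reference = rng.beta(2.0, 5.0, size=5_000)
    recent = rng.beta(2.6, 5.0, size=1_200)
    print(score_drift(reference, recent))
```

In a real deployment the flagged result would feed an incident or review workflow rather than a print statement; the point is that the check runs continuously, not once at clearance.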
The FDA’s posture on clinical decision support also shapes the boundary between regulated medical tools and broader software products. The agency’s clinical decision support software guidance clarifies when decision support crosses into device territory, emphasizing whether clinicians can independently review the basis for recommendations. Explainability becomes a compliance feature.
ONC’s HTI-1 rule formalizes transparency for predictive models in certified health IT
AI governance is also emerging through health IT certification policy. The HTI-1 final rule introduces requirements related to decision support interventions and predictive models, with an emphasis on transparency and information sharing. The rule’s policy framing appears in the Federal Register publication of HTI-1. Implementation resources are collected on the ASTP HTI-1 page.
The significance is cultural. Health IT certification has historically focused on interoperability and functional requirements. HTI-1 suggests that algorithm transparency is now part of what it means to be “certified.” That change moves governance from voluntary ethics statements into enforceable infrastructure.
NIST provides a risk framework that is increasingly treated as a baseline
In the United States, the most influential governance document outside formal regulation may be the NIST AI Risk Management Framework. The framework offers voluntary guidance on governing, mapping, measuring, and managing AI risks across contexts. The primary resource is NIST’s AI Risk Management Framework page.
Healthcare organizations and vendors use NIST language because it provides a shared vocabulary: validity, reliability, fairness, security, and accountability. It also encourages structured risk documentation, which aligns with procurement processes. A hospital adopting an AI triage system increasingly expects documentation that resembles a NIST-aligned risk assessment, even when none is legally required.
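What such documentation might look like as a machine-readable record is sketched below. The field names and example values are assumptions loosely inspired by the framework’s govern/map/measure/manage vocabulary, not an official NIST schema.

```python
"""Illustrative structure for a NIST-AI-RMF-style risk record.
Field names are assumptions inspired by the framework's vocabulary
(govern / map / measure / manage), not an official NIST schema."""
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AIRiskRecord:
    system_name: str
    intended_use: str                  # MAP: context and intended users
    known_limitations: List[str]       # MAP: out-of-scope populations, settings
    performance_metrics: Dict[str, float]  # MEASURE: validity and reliability evidence
    fairness_findings: List[str]       # MEASURE: subgroup performance notes
    mitigations: List[str] = field(default_factory=list)  # MANAGE: controls in place
    monitoring_plan: str = ""          # MANAGE: post-deployment checks
    accountable_owner: str = ""        # GOVERN: named responsible party


# Example record with purely illustrative values.
record = AIRiskRecord(
    system_name="ed-triage-model-v2",
    intended_use="Prioritize emergency department arrivals for nurse review",
    known_limitations=["Not validated for pediatric patients"],
    performance_metrics={"auroc": 0.86, "calibration_slope": 0.97},
    fairness_findings=["AUROC gap of 0.04 between language groups"],
    mitigations=["Clinician override logged and reviewed weekly"],
    monitoring_plan="Monthly drift check on score distribution",
    accountable_owner="Clinical AI governance committee",
)
```

The value of a structure like this is less the format than the forcing function: every field must be filled in by someone who can be held to it during procurement and post-market review.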
International regulation will shape product design, whether U.S. firms like it or not
The EU AI Act adds a global compliance pressure. Health-related AI systems often fall into high-risk categories, triggering requirements for risk management, data governance, and transparency. A concise European Commission overview, The EU AI Act, provides accessible framing. U.S. companies selling in Europe must meet these requirements, and the resulting product design often influences U.S. offerings as well. Governance becomes global through supply chains and compliance programs.
Privacy and consumer data enforcement is increasingly relevant to health AI
Many AI tools interact with health-adjacent data outside the boundaries of HIPAA. Wellness apps, symptom checkers, and consumer wearables often fall into a patchwork of state and federal consumer protection rules. The Federal Trade Commission has signaled a tougher posture on health data misuse, including enforcement and guidance related to the Health Breach Notification Rule, summarized in the FTC’s Health Breach Notification Rule information.
The implication is that a product can be “non-HIPAA” and still face meaningful enforcement if it mishandles sensitive data. For healthcare AI vendors, privacy risk is now a business risk, not merely a compliance detail.
WHO guidance emphasizes ethics and governance for health AI, including large models
Global health bodies are also shaping norms. The World Health Organization has published guidance on the ethics and governance of large multi-modal models for health, reflecting concerns about safety, bias, and accountability. The document Ethics and governance of large multi-modal models for health frames governance as a public health issue and emphasizes the need for evaluation and oversight.
WHO guidance does not impose legal obligations in the way FDA rules do, yet it influences institutional expectations, particularly for public health deployments and international collaborations. It also reinforces a critical point: the goal is not technological novelty, but improved outcomes without harm.
The accountability question: who is responsible when the model is wrong
Governance ultimately becomes a liability and ethics question. If an AI tool suggests a diagnosis that is wrong, who bears responsibility? The clinician who used it? The hospital that bought it? The vendor that trained it? The answer differs by context, yet ambiguity can slow adoption. Clinicians often hesitate to use tools when they cannot predict how error responsibility will be interpreted after an adverse event.
Procurement contracts, indemnification clauses, and internal policies are therefore becoming part of AI governance. This is not philosophical. It is operational.
What good governance looks like in practice
Good governance includes pre-deployment validation, post-deployment monitoring, drift detection, incident reporting, and clear human oversight. It includes documentation that clinicians can understand. It includes equity audits that evaluate performance across demographics. It includes cybersecurity protections and data minimization. It also includes mechanisms for feedback and correction.
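As one concrete element of that list, here is a sketch of an equity audit that computes a performance metric per demographic subgroup and flags gaps above a tolerance. It assumes a pandas DataFrame with hypothetical label, score, and group columns, uses scikit-learn’s roc_auc_score, and treats the 0.05 gap tolerance as an illustrative choice rather than an established standard.

```python
"""Illustrative equity audit: compute AUROC per demographic subgroup
and flag gaps above a chosen tolerance. Column names, the metric, and
the 0.05 tolerance are assumptions for illustration."""
import pandas as pd
from sklearn.metrics import roc_auc_score


def subgroup_auroc(df: pd.DataFrame,
                   group_col: str,
                   label_col: str = "label",
                   score_col: str = "score",
                   max_gap: float = 0.05):
    """Return per-group AUROC and whether the spread exceeds max_gap."""
    rows = []
    for group, part in df.groupby(group_col):
        if part[label_col].nunique() < 2:
            continue  # AUROC is undefined without both outcome classes
        rows.append({
            "group": group,
            "n": len(part),
            "auroc": roc_auc_score(part[label_col], part[score_col]),
        })
    result = pd.DataFrame(rows)
    gap_flagged = bool(result["auroc"].max() - result["auroc"].min() > max_gap)
    return result, gap_flagged
```

Run on a held-out validation set before deployment and on logged outcomes afterward, the same function supports both the equity audit and the post-deployment monitoring the paragraph above describes.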
The industry is moving, slowly, toward this posture. Regulations and frameworks now provide scaffolding. The remaining challenge is whether organizations implement governance as a genuine safety program or as compliance theater.
Medical AI will be judged by outcomes, yet outcomes depend on trust. Trust depends on governance. Genius may attract attention. Governance decides whether patients benefit.














