The FDA has issued new draft guidance on the regulation of artificial intelligence and machine learning in medical software. This is particularly significant for technology startups developing AI tools for healthcare. The guidance specifies requirements for pre-market approval as well as ongoing lifecycle management.
Essentially, it details how AI and machine learning-based health software should be constructed, monitored, and updated, thereby shaping the future regulatory landscape for AI in healthcare.
Background context: The integration of AI and machine learning in healthcare has been transformative, offering enhanced diagnostic tools and personalised treatment options.
However, the rapid advancement of these technologies necessitates a structured regulatory framework to ensure patient safety and software efficacy. The FDA’s draft guidance is a step towards providing that structure, ensuring that innovations in medical AI continue to meet high standards of quality and reliability.
A New Approach
In practical terms, the FDA’s draft guidance introduces a framework that aligns more closely with the dynamic nature of machine learning systems, which often evolve post-deployment through real-world learning.
One of the notable elements is the emphasis on a “Predetermined Change Control Plan,” which allows developers to outline expected future modifications during the pre-market submission.
This approach signals a shift from static product approvals to a more flexible regulatory pathway that acknowledges iterative development, provided updates are well-characterised and their risks properly mitigated. Such flexibility is especially relevant for startups whose competitive edge often hinges on rapid innovation and adaptation.
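To make the concept more tangible, here is a minimal sketch of how a development team might represent such a plan internally. It is purely illustrative: the class names, fields, and thresholds are assumptions for this example, not the FDA's submission format.

```python
from dataclasses import dataclass, field

# Hypothetical internal model of a change control plan. The structure and
# field names are illustrative only; they do not mirror the FDA's actual
# submission format.

@dataclass
class PlannedModification:
    description: str            # what will change, e.g. a scheduled retrain
    trigger: str                # condition that initiates the update
    performance_floor: dict     # metric name -> minimum acceptable value
    risk_mitigations: list = field(default_factory=list)

@dataclass
class ChangeControlPlan:
    device_name: str
    modifications: list

plan = ChangeControlPlan(
    device_name="ExampleDx",    # hypothetical device
    modifications=[
        PlannedModification(
            description="Quarterly retraining on newly accrued imaging data",
            trigger="5,000 new labelled studies collected",
            performance_floor={"sensitivity": 0.92, "specificity": 0.88},
            risk_mitigations=[
                "shadow deployment before rollout",
                "clinician review of discordant cases",
            ],
        )
    ],
)
```

The point of the structure is simply that each anticipated change, its acceptance criteria, and its mitigations are specified before the update ever ships, which is what allows the regulator to review them once rather than per release.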
The guidance underscores the importance of transparency and robust data governance, particularly around the datasets used to train and validate AI algorithms.
This includes detailed documentation of data provenance, labelling practices, and the representativeness of training data across diverse patient populations. For developers, this could translate into more rigorous internal audit processes and heightened collaboration with clinical stakeholders to ensure ethical data use. For the broader industry, it reflects a growing consensus that AI’s promise in healthcare is inseparable from its accountability – both in technical performance and in equitable access and outcomes.
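As one illustration of what such an internal audit could involve, the sketch below compares subgroup shares in a training set against reference population figures and flags material gaps. The function name, tolerance, and toy numbers are assumptions for demonstration; the guidance does not prescribe a specific method.

```python
from collections import Counter

def representativeness_report(records, attribute, reference_shares, tolerance=0.05):
    """Compare each subgroup's share of the training data against a
    reference population and flag gaps larger than `tolerance`.
    `records` is a list of dicts; all names here are illustrative."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "flag": abs(observed - expected) > tolerance,
        }
    return report

# Toy usage with made-up numbers: a dataset that under-represents one group.
training = [{"sex": "F"}] * 300 + [{"sex": "M"}] * 700
print(representativeness_report(training, "sex", {"F": 0.51, "M": 0.49}))
```

A real audit would of course span many attributes (age, ethnicity, acquisition site, device type) and feed into labelling and recruitment decisions, but the principle is the same: measure the gap, document it, and act on it.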
Key Data and Industry Impact
- Draft Guidance Publication:
The FDA published the “Marketing Submission Recommendations for a Predetermined Change Control Plan (PCCP) for AI/ML-enabled Device Software Functions” in May 2024. This is the agency’s most comprehensive regulatory proposal for AI/ML-based software as a medical device (SaMD) to date.
- AI Adoption in Healthcare:
- 44% of US hospitals are using or piloting AI/ML solutions in clinical care as of 2025, up from 22% in 2022.
- The US healthcare AI market is projected to reach $41.4 billion by 2027, with over 55% of startups focusing on clinical or diagnostic software.
- Pre-market Approval and Lifecycle Management:
- All AI/ML software intended for medical use will require detailed pre-market submissions, including technical, clinical, and data validation evidence.
- Companies must submit a Predetermined Change Control Plan (PCCP) outlining anticipated modifications, performance metrics, and risk mitigation before market entry.
- Transparency and Data Governance:
- The FDA requires comprehensive documentation of data provenance, labelling practices, and representativeness.
- The guidance strongly encourages the use of diverse and real-world clinical data for training and validation to ensure equitable outcomes.
- Ongoing Monitoring and Iterative Improvement:
- The draft guidance establishes more flexible post-market oversight, supporting real-world evidence (RWE) collection and iterative updates, with periodic FDA reporting.
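One plausible way to operationalise that kind of oversight is a rolling check of real-world model performance against a pre-specified floor, as sketched below. The class, the agreement metric, and the thresholds are hypothetical; nothing here reflects a mandated implementation.

```python
from collections import deque

class PerformanceMonitor:
    """Minimal rolling-window monitor: tracks agreement between the model's
    output and the eventual clinical label, and raises a flag when accuracy
    drops below a pre-specified floor. Thresholds are illustrative."""

    def __init__(self, floor=0.90, window=500):
        self.floor = floor
        self.window = deque(maxlen=window)

    def record(self, prediction, ground_truth):
        # Store 1 when the model agreed with the clinical label, else 0.
        self.window.append(prediction == ground_truth)

    @property
    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def breached(self):
        # Only alert once the window is full, so a few early cases
        # cannot trigger a spurious report.
        return len(self.window) == self.window.maxlen and self.accuracy < self.floor
```

In a deployed system, a breach of the floor would feed the periodic FDA reporting described above and could pause further automated updates until the cause is understood.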
Practical Implications
- For Startups:
- Ability to declare anticipated model updates in advance, expediting innovation and iterative improvement while maintaining regulatory compliance.
- Must establish robust internal data audit practices and ensure collaboration with clinicians, especially around representative training data and outcome monitoring.
- For Regulators and Patients:
- Greater transparency, traceability, and accountability in AI-powered diagnostics and treatment planning.
- Accelerates time-to-market for innovation while safeguarding clinical effectiveness and patient safety.
Why This Matters
- Patient Safety:
The new framework aims to balance rapid technological advancement with the assurance of safety, effectiveness, and equity.
- Innovation:
Flexible, iterative pathways allow AI developers, especially startups, to improve products after deployment, reflecting the evolving nature of machine learning.
- Industry Standards:
Provides a clear direction for compliance, shaping global best practices for medical AI governance.
References
- FDA: Marketing Submission Recommendations for a Predetermined Change Control Plan (PCCP) for AI/ML-Enabled Device Software Functions (Draft Guidance, May 2024)
- DLA Piper: FDA Issues New Draft Guidance for AI/ML-Enabled Medical Devices (2024)
- MedTech Dive: FDA Moves Toward AI/ML Medical Device Regulation (2024)
- STAT News: FDA Proposes New Approach for AI in Health Devices (2024)