AI and Software as Medical Devices in 2025: Regulatory Landscape and How to Navigate It
AI and software are revolutionizing medtech, but regulators are raising the bar. In the U.S., FDA has finalized guidance on Predetermined Change Control Plans (PCCPs), reinforced cybersecurity requirements, and, together with other regulators, published Good Machine Learning Practice (GMLP) principles. In Europe, the AI Act will impose strict requirements on high-risk AI, overlapping with the MDR/IVDR.
FDA’s PCCPs for AI Devices
PCCPs allow manufacturers to pre-specify anticipated algorithm modifications, avoiding a new submission for each update. A PCCP must specify:
Types of anticipated changes
Guardrails and limits
Validation methods and update processes
Not all changes qualify: major new indications still require new submissions. PCCPs also demand robust quality systems for data management, testing, and monitoring.
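To make this concrete, here is a minimal sketch of how a team might capture PCCP elements as structured records in its own tooling. The schema, field names, and thresholds are hypothetical illustrations, not an FDA-prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class PlannedModification:
    """One anticipated change pre-specified in a PCCP (hypothetical schema)."""
    change_type: str        # e.g. "retrain_same_indication"
    guardrails: dict        # performance limits the update must stay within
    validation_method: str  # how the change is verified before release

@dataclass
class ChangeControlPlan:
    """Container for the modifications pre-authorized in a PCCP."""
    device: str
    modifications: list = field(default_factory=list)

    def is_covered(self, change_type: str) -> bool:
        """Changes outside the plan (e.g. new indications) need a new submission."""
        return any(m.change_type == change_type for m in self.modifications)

# Hypothetical example: a retraining update is covered only if pre-specified.
plan = ChangeControlPlan(device="CADe lung nodule detector")
plan.modifications.append(PlannedModification(
    change_type="retrain_same_indication",
    guardrails={"min_sensitivity": 0.90, "max_fpr_delta": 0.02},
    validation_method="locked holdout test per internal protocol",
))
assert plan.is_covered("retrain_same_indication")
assert not plan.is_covered("new_indication")  # would require a new submission
```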
Good Machine Learning Practice (GMLP)
FDA, together with Health Canada and the UK's MHRA, has endorsed 10 guiding principles, including:
Multidisciplinary expertise throughout lifecycle
Secure software engineering
Representative datasets to avoid bias
Independent training and test sets (illustrated after this list)
Human-AI team performance considerations
Transparency in outputs and labeling
Ongoing monitoring and risk management
Following GMLP isn't legally mandated, but it is highly recommended, and in practice FDA reviewers expect to see it.
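To ground one of these principles: "independent training and test sets" typically means splitting at the patient level so no patient contributes to both. A minimal sketch using scikit-learn on synthetic data (the dataset shape and split parameters are illustrative assumptions):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Synthetic stand-in data: 3 images per patient, 100 patients.
rng = np.random.default_rng(0)
patient_ids = np.repeat(np.arange(100), 3)
X = rng.normal(size=(300, 16))
y = rng.integers(0, 2, size=300)

# Split by patient so no patient contributes to both sets, avoiding the
# leakage that inflates apparent test performance.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# Independence holds at the patient level: no shared patients.
assert not set(patient_ids[train_idx]) & set(patient_ids[test_idx])
```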
Cybersecurity Mandates
As of 2025, FDA can refuse to accept submissions that lack cybersecurity documentation (a power granted under section 524B of the FD&C Act). Required elements include:
Cybersecurity risk assessments
A Software Bill of Materials (SBOM; see the sketch after this list)
Vulnerability disclosure programs
Secure update mechanisms
Testing evidence (e.g., penetration testing)
Cybersecurity is now an explicit QMS element and must be planned across design, verification, and post-market surveillance.
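Reviewers generally expect the SBOM to be machine-readable. Below is a minimal sketch of a CycloneDX-style SBOM written by hand in Python; in practice you would generate it with tooling, and the listed components are invented for illustration:

```python
import json

# Minimal CycloneDX-style SBOM; component names and versions are invented.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "numpy",
            "version": "1.26.4",
            "purl": "pkg:pypi/numpy@1.26.4",
        },
        {
            "type": "library",
            "name": "onnxruntime",
            "version": "1.17.0",
            "purl": "pkg:pypi/onnxruntime@1.17.0",
        },
    ],
}

with open("sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)  # submit alongside the cybersecurity docs
```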
The EU AI Act
In force since August 2024, with obligations applying to high-risk AI from August 2027, the Act layers on top of MDR/IVDR. It requires:
AI-specific risk management including bias and fundamental rights risks
Data governance ensuring representative, high-quality datasets
Transparency and human oversight provisions
Logging and recordkeeping for traceability (see the sketch after this list)
Separate conformity assessment for AI, likely handled by notified bodies (NBs) with an AI Act designation
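As an illustration of what logging for traceability can look like in software, here is a minimal sketch of structured inference logging. The record fields are our design assumptions, not something the Act prescribes:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("inference_audit")

def log_inference(model_version: str, input_bytes: bytes, output: dict) -> None:
    """Write one auditable, privacy-preserving trace of an inference."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store the raw input, so the trail stays
        # reconstructable without duplicating patient data in the logs.
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
    }
    log.info(json.dumps(record))

log_inference("lung-cad-2.3.1", b"<dicom bytes>", {"finding": "nodule", "score": 0.87})
```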
Preparing for Dual Compliance
Perform gap analyses between MDR and AI Act requirements
Expand QMS to include AI governance and data management
Track harmonized standards for presumption of conformity
Validate AI across diverse demographics to demonstrate fairness (see the sketch after this list)
Engage early with FDA and EU regulators to align expectations
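One common way to support the fairness point above is a subgroup performance table: compute the same metric for each demographic stratum and flag large gaps. A minimal sketch on synthetic data (the column names and the 5-point threshold are illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Synthetic predictions with a demographic column (illustrative only).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": rng.choice(["A", "B", "C"], size=600),
    "y_true": rng.integers(0, 2, size=600),
})
# Simulate a classifier that is right ~85% of the time.
df["y_pred"] = np.where(rng.random(600) < 0.85, df["y_true"], 1 - df["y_true"])

# Sensitivity (true positive rate) per demographic group.
positives = df[df["y_true"] == 1]
sensitivity = positives.groupby("group")["y_pred"].mean()
print(sensitivity)

# Flag subgroups trailing the best group by more than 5 points.
gap = sensitivity.max() - sensitivity
print(gap[gap > 0.05])
```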
Strategic Implications
For executives, AI regulation means additional costs, resources, and governance, but it is also a chance to differentiate. Firms that demonstrate trustworthy AI can gain faster adoption and market trust. Product leaders should plan AI roadmaps with compliance in mind, phasing launches strategically. QA/RA leaders must build AI-specific quality processes and documentation.
Preparing for the Future
AI regulation is catching up fast. Companies that integrate compliance into innovation will thrive, while laggards risk delay or exclusion.
How PRP Compliance Can Help: We support companies in PCCP development, AI Act readiness, and building AI/ML compliance frameworks. Partner with us to bring safe, effective, and compliant AI-driven devices to market.