AI-Generated Is Not AI-Approved: What the FDA’s April 2026 Warning Letter Means for Every GxP Team
Author: Sarat Bhamidipati
On April 2, 2026, the FDA issued a warning letter that should be on the desk of every Quality leader in life sciences. It cites several familiar deviations — but embedded in the findings is something newer, and more consequential for how we work: specific, detailed regulator concern about how AI is being used in GxP processes.
The AI-specific observations are not long. They don’t need to be. The message is clear: any uncontrolled or unverified use of AI in a regulated process is now a compliance concern on its own — even if every other area of your quality system is in order.
I've re-read the letter a few times and discussed it with our FICSA colleagues. Here is how I read it, and what I'd recommend any Sr. Director or VP of Quality do this quarter.
The Shift in One Line
AI is now treated like any other GxP system. Plus one layer: decision integrity.
For 25 years, our field has focused on data integrity — ALCOA+, audit trails, electronic signatures, and validated computations. That discipline is not going away. But the April 2 letter introduces a sharper concept: the integrity of AI-influenced decisions. A wrong AI-driven decision — a miscategorized deviation, an auto-generated spec reviewed but not understood, a training record signed off on the basis of an LLM summary — can drive patient-impact failures just as surely as a falsified record can.
Regulators have noticed. They are now willing to cite you for it.
What the Letter Actually Tells Us (In Plain Language)
I’ve boiled the AI-relevant takeaways down to the principles we’re now using with clients. None of them will surprise a seasoned CSA practitioner. But they need to be written into SOPs, trained on, and made auditable, starting today.
- Human-in-the-loop (HITL) is mandatory. Every AI output used in a GxP context must be reviewed, verified, and approved by qualified personnel. AI is assistive. It cannot replace regulatory judgment or Quality Unit decision-making.
- 21 CFR 211.22(c) still applies, in full. The Quality Unit must review and approve all procedures and records. AI-generated content gets the same level of QU control as anything else. No exceptions for “the model wrote it.”
- Accountability cannot be delegated to AI. The tool is not responsible for compliance. Your signatories are. When a human signs an AI-generated document, they are taking ownership of its correctness.
- AI outputs are not inherently compliant. They must be validated against current regulations, your site practices, and product-specific requirements. This is also why we record and document the model training and fine-tuning process: if you can’t explain what the model learned, you can’t defend what it produced.
- AI lacks context awareness unless it’s given context. A general-purpose model doesn’t know your site’s master batch record conventions, your product-specific requirements, or the latest FDA guidance. Without proper training or retrieval grounding, it will confidently produce content that looks right and isn’t.
- Formal document control still applies to AI-drafted content. Review. Approval. Versioning. Training. Signatories must read before signing, not skim.
- Critical content needs SME verification. Technical documentation, specifications, MBRs, procedures: the draft can come from AI. The verification cannot. Subject matter experts are non-negotiable.
- Use AI to accelerate drafting; do not use it to make compliance decisions. This is the cleanest mental model I can offer a team starting out: faster input, not less judgment.
- Classify AI use cases by risk. Higher-risk use cases demand more stringent review. Risk-based classification is how you scale oversight without drowning in it.
- Decision integrity, not just data integrity. An incorrect AI-driven decision can cascade into a product or patient safety failure. Treat decisions with the same rigor you treat data.
- Govern AI inside the QMS. Validation, change control, SOPs, training, periodic review: the same frameworks we’ve used for every other regulated system. AI doesn’t sit outside the QMS. It sits inside it.
- Keep an audit trail of AI usage. Inputs, outputs, reviewers, and timestamps. If an inspector asks, “Show me how this was generated and who approved it,” you need to have an answer ready. (A sketch of what such a record could capture follows this list.)
- Train your people on AI limits. Users need to understand what the tool can do, what it can’t, and where their own responsibility starts. This is one of the highest-leverage controls you can put in place.
- Build fallback paths. If the AI is wrong or unavailable, the process must still function compliantly. Think of it like any other system recovery procedure.
- Evaluate AI tools before using them in GxP activities. Intended use. Known limitations. Suitability. Before, not after, they touch a regulated process.
- Domain knowledge stays with your team. Regulatory awareness cannot be outsourced to AI. Your SMEs own the domain. AI supports them.
- Review AI performance periodically. Models drift. Data shifts. Your review cadence should catch errors before they compound.
- Write clear SOPs and Work Instructions for AI use. Define when and how AI can be used, and what level of review applies to each use case.
- Quality Unit oversight is non-negotiable. Any AI-assisted GxP activity falls under the QU. Full stop.
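To make the audit-trail point concrete, here is a minimal sketch of what a single AI-usage record could capture. This is illustrative only: the `AIUsageRecord` class, the field names, and the risk tiers are assumptions I’m using to show the shape of the record, not a schema prescribed by the letter or by any regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class GxPRisk(Enum):
    """Illustrative risk tiers; define your own in your risk SOP."""
    LOW = "low"        # e.g., cleanup of internal meeting notes
    MEDIUM = "medium"  # e.g., a first-draft SOP section
    HIGH = "high"      # e.g., content feeding a master batch record


@dataclass
class AIUsageRecord:
    """One auditable AI interaction: input, output, reviewer, timestamps."""
    use_case: str    # what the AI was asked to do
    model_id: str    # tool name and version, e.g. "vendor-model-v3"
    prompt: str      # the input given to the model
    output: str      # the raw output, before any human edits
    risk: GxPRisk    # classification per your risk SOP
    reviewer: str    # qualified person who verified the output
    approved: bool   # the HITL decision: accepted or rejected
    generated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    reviewed_at: Optional[datetime] = None


# Example: a medium-risk drafting task, verified and approved by an SME.
record = AIUsageRecord(
    use_case="First draft of a deviation summary",
    model_id="vendor-model-v3",
    prompt="Summarize deviation DEV-1234 for QA review",
    output="...model output...",
    risk=GxPRisk.MEDIUM,
    reviewer="j.smith (QA)",
    approved=True,
    reviewed_at=datetime.now(timezone.utc),
)
```

Whether this lives in a validated database, your eQMS, or a simple log, the point is that every field an inspector would ask about has a home.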
What I’d Do This Quarter If I Were You
If you’re a Sr. Director or VP of Quality, here is a realistic 90-day path that most of our clients are running some version of:
- Weeks 1–2: Inventory every AI use case already in flight — sanctioned or not. You’ll be surprised by what you find.
- Weeks 3–4: Classify each use case by GxP risk. Identify which ones need immediate governance and which can wait (one way to structure this triage is sketched after this list).
- Weeks 5–8: Draft or update AI governance SOPs — HITL requirements, QU oversight, audit trail expectations, fallback procedures.
- Weeks 9–10: Train users on the new SOPs and on AI limitations.
- Weeks 11–12: Run an internal mock audit focused on AI usage. Fix what you find.
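To picture the Weeks 1–4 output, here is one hedged way to structure the inventory and triage. The use cases, risk labels, and oversight levels below are illustrative assumptions; align them to your own risk SOP.

```python
# Illustrative Weeks 1-4 inventory: one row per AI use case you find.
# Risk tiers and oversight levels are assumptions; align them to your SOP.
inventory = [
    {"use_case": "LLM summaries of training records",
     "gxp_risk": "high", "oversight": "QU review + SME verification"},
    {"use_case": "AI-drafted SOP first drafts",
     "gxp_risk": "medium", "oversight": "SME verification before routing"},
    {"use_case": "Meeting-notes cleanup",
     "gxp_risk": "low", "oversight": "User review"},
]

# Triage: high-risk use cases need governance now; low-risk ones can wait.
needs_immediate_governance = [row for row in inventory
                              if row["gxp_risk"] == "high"]
for row in needs_immediate_governance:
    print(f"Govern now: {row['use_case']} -> {row['oversight']}")
```

A spreadsheet works just as well at this stage; the structure matters more than the tooling.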
A Final Thought
Our field has been here before — when electronic signatures were new, when cloud systems were new, when CSA replaced legacy CSV. Every time, the question is the same: how do we adopt the benefit of technology without losing the discipline that keeps patients safe?
AI is no different. The principles haven’t changed. The scope has.
The organizations that recognize this now, and build AI governance into their QMS the same way they built in every other control, will be the ones moving fastest through FDA inspections a year from today. The ones waiting will be in remediation.
I know which side of that line I want to be on. And I know which side our clients want to be on.
Want to know where your organization stands? We built a free 45-minute AI Readiness Assessment specifically for Sr. Directors and VPs of Quality. It identifies your highest-priority gaps before an inspector does. Take the assessment.
Author: Sarat Bhamidipati
Sarat Bhamidipati is the CEO of Compliance Group, a life sciences consulting and technology firm specializing in GxP, CSA, and AI governance. Compliance Group co-authored FDA CSA guidance through FICSA and ISPE, serves 60+ life sciences clients, and is the developer of the iQuality AI-native quality platform.