FDA Calls Out Inappropriate Use of Artificial Intelligence

Axendia’s Analysis of the Root Causes and Our Recommendations to Prevent Them 

There has been a lot of buzz around FDA’s warning letter that, for the first time, explicitly cited the inappropriate use of Artificial Intelligence as a contributing factor to cGMP violations. While this marks a defining moment in the evolution of AI adoption across regulated industries, it is important to dive into the findings to understand the implications for Life Science organizations.

Axendia’s analysis continues to show FDA’s aggressive timeline for scaling AI across the agency, as well as its support for the appropriate and responsible adoption of the technology across the industry. As we noted at Axendia’s 2026 Life Sciences Radar event, as AI systems begin to influence regulated processes, governance becomes inseparable from deployment. Human oversight remains essential, especially in an industry where product quality, regulatory compliance, and patient safety are at stake.

To be clear, this enforcement action is not about the use of AI. Regulators support the use of AI and that position is reinforced internationally. In January 2026, FDA and EMA jointly issued common principles for AI in medicine development, describing AI as applicable across the medicines lifecycle and advancing “safe, ethical and aligned AI practices.” EMA also states more broadly that AI is key to leveraging large volumes of regulatory and health data and can support regulatory decision making for safe, effective, and high-quality medicines. In other words, both agencies are not merely tolerating AI. They are building a framework for its responsible use.

The Warning Letter exposes a deeper issue: some organizations are adopting advanced technologies faster than they are modernizing the mindsets, operating models, and quality systems required to use them responsibly.  

This article features Axendia’s analysis of the root causes behind this enforcement action and provides our recommendations to prevent them.

Don’t Treat AI as the Oracle (Authority); It’s a Tool

According to the Warning Letter, the firm “utilized artificial intelligence (AI) agents … to create drug product specifications, procedures, and master production or control records to be in compliance with FDA requirements.” On its own, that would not be a problem. The problem was the firm’s response when FDA informed it that required process validation had not been performed: “FDA investigators found that you had not conducted process validation prior to distribution of your drug products, as required under 21 CFR 211.100, and informed you as such. You replied that you were not aware of the legal requirement, as the AI agent you used, never told you it was required.”

This is the textbook definition of overreliance. AI was treated as a source of truth rather than a tool that supports human judgment.

To prevent this pitfall, organizations must establish a human-in-the-loop approach in which the human retains authority over all AI‑assisted processes. AI can support decision making, but it cannot define regulatory requirements or replace scientific judgment. AI outputs should be treated as drafts that require human interpretation, verification, and approval. Accountability must remain with qualified personnel, not with algorithms.

Human in the Loop 

FDA made the expectation unambiguous: “If you use AI as an aid in document creation, you must review the AI generated documents to ensure they were accurate and actually compliant with CGMP.” The firm failed to do so, and FDA concluded: “Your failure to do so is a violation of 21 CFR 211.22(c).”

This is not an AI problem; it is an accountability problem. Preventing similar findings in the future requires reinforcing Quality Unit ownership of all AI‑generated content. AI may accelerate drafting, but it cannot assume the role of the Quality Unit. Human review, verification, and approval must be embedded into document control processes, as sketched below. AI should be treated as an input, never a decision maker.
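To illustrate (not prescribe) what this could look like in practice, here is a minimal sketch, in Python, of a release gate in which an AI‑assisted draft cannot be approved without a named Quality Unit reviewer and documented verification. The class, field, and reviewer names are hypothetical and do not represent any specific eQMS or the firm cited in the Warning Letter.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ControlledDocument:
    """A draft record in a document control workflow (illustrative only)."""
    doc_id: str
    content: str
    ai_assisted: bool = False          # flag drafts generated with AI assistance
    status: str = "DRAFT"              # DRAFT -> APPROVED (or REJECTED)
    qu_reviewer: Optional[str] = None  # qualified Quality Unit reviewer of record
    approved_at: Optional[datetime] = None

def approve_for_release(doc: ControlledDocument, reviewer: str, review_notes: str) -> ControlledDocument:
    """Release is gated on a named human reviewer; AI output alone can never approve."""
    if not reviewer:
        raise ValueError("A qualified QU reviewer must be identified before approval.")
    if not review_notes:
        raise ValueError("Review evidence is required, especially for AI-assisted drafts.")
    doc.status = "APPROVED"
    doc.qu_reviewer = reviewer
    doc.approved_at = datetime.now(timezone.utc)
    return doc

# Usage: an AI-assisted draft cannot reach APPROVED without a named QU reviewer and notes.
draft = ControlledDocument(doc_id="SOP-001", content="...", ai_assisted=True)
released = approve_for_release(
    draft,
    reviewer="J. Smith, Quality Unit",
    review_notes="Verified against applicable 21 CFR 211 requirements.",
)
```

The point of the sketch is the design choice, not the code itself: approval is a property of a named, accountable human, recorded alongside the AI‑assisted draft, rather than an attribute the AI system can set.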

Product and Process Understanding is Foundational

In the Warning Letter, FDA noted that “overreliance on artificial intelligence for your drug manufacturing operations was also documented during the inspection.” In this instance, AI was used to generate documentation intended to demonstrate compliance, but the underlying processes had not been validated.

This reveals a deeper issue: the firm was unable to demonstrate the scientific knowledge, process characterization, and operational understanding required to evaluate whether AI outputs were meaningful, trustworthy, or correct. FDA is explicit that decisions must be based on reliable, attributable, and accurate data. If firms cannot trace how an AI output was generated or what data informed it, FDA can interpret that as a data integrity failure.

FDA expects firms to stand behind decisions with evidence, not defer to a system. “The model said so” is not defensible. FDA expects independent verification.

Addressing this requires strengthening foundational product and process knowledge before introducing AI. Organizations must understand their processes, characterize variability and risk, and ensure that any AI model is trained on accurate, complete, and representative data. AI cannot compensate for missing fundamentals. Process understanding and reliable data are the foundation of digital integrity.

AI doesn’t change the rules. It can expose where those rules aren’t being followed.
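As an illustration of what traceability of AI outputs could look like, the hypothetical sketch below links each AI‑generated draft to the records that informed it and to the model version that produced it, so the output remains attributable. The function names, field names, and record IDs are our own illustrative assumptions, not drawn from the Warning Letter or any specific system.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_ai_output(output_text: str, model_version: str, source_records: list[dict]) -> dict:
    """Build a lineage entry tying an AI output to the data that informed it (illustrative)."""
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,                          # which model/version produced the output
        "input_record_ids": [r["id"] for r in source_records],   # attributable source data
        "input_hash": hashlib.sha256(
            json.dumps(source_records, sort_keys=True).encode()
        ).hexdigest(),                                           # fingerprint of the inputs used
        "output_hash": hashlib.sha256(output_text.encode()).hexdigest(),
    }

# Usage: every AI-generated draft gets a lineage entry a reviewer (or investigator) can trace back.
lineage = record_ai_output(
    output_text="Draft specification text ...",
    model_version="spec-assistant-1.2",
    source_records=[{"id": "BATCH-042", "assay": 99.1}, {"id": "BATCH-043", "assay": 98.7}],
)
print(json.dumps(lineage, indent=2))
```

A record like this does not make an AI output correct; it simply gives qualified reviewers the evidence trail they need to verify it, which is the expectation the Warning Letter reinforces.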

Governance and Lifecycle Controls

FDA’s position is clear: “If you plan to resume drug production, and use AI to help with CGMP activities, such as development of procedures and specifications, any output or recommendations from an AI agent must be reviewed and cleared by an authorized human representative of your firm’s QU.”

This reflects a common adoption pattern: Technology is implemented without the operating model needed to support it.

Preventing this requires implementing structured AI governance and lifecycle management. AI systems must have defined intended use, documented boundaries, validation for accuracy and reliability, and ongoing monitoring for drift or degradation. Governance must be cross‑functional, spanning Quality, Regulatory, IT, Data Science, and Operations. AI must be managed as a controlled system, not a shortcut.
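The sketch below is one hypothetical way to capture those lifecycle controls as a governed record: a documented intended use, explicit boundaries, a validated performance baseline, and a simple check that flags drift for escalation. The system names, thresholds, and metrics are illustrative assumptions, not regulatory requirements or a prescribed implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class AISystemRecord:
    """Lifecycle record for an AI system under governance (illustrative only)."""
    system_id: str
    intended_use: str                 # documented intended use
    out_of_scope: list[str]           # documented boundaries: what the system must NOT be used for
    validated_accuracy: float         # performance demonstrated during validation
    drift_threshold: float = 0.05     # tolerated drop before escalation to the governance board
    owners: tuple[str, ...] = ("Quality", "Regulatory", "IT", "Data Science", "Operations")

def check_for_drift(record: AISystemRecord, recent_accuracy_samples: list[float]) -> bool:
    """Flag the system for review if monitored performance drifts below the validated baseline."""
    current = mean(recent_accuracy_samples)
    drifted = (record.validated_accuracy - current) > record.drift_threshold
    if drifted:
        print(f"{record.system_id}: drift detected ({current:.2%} vs {record.validated_accuracy:.2%}); "
              "escalate to cross-functional governance.")
    return drifted

# Usage
doc_assistant = AISystemRecord(
    system_id="DOC-ASSIST-01",
    intended_use="Drafting SOP text for human review and QU approval",
    out_of_scope=["Defining regulatory requirements", "Approving or releasing records"],
    validated_accuracy=0.97,
)
check_for_drift(doc_assistant, recent_accuracy_samples=[0.90, 0.91, 0.89])
```

The specifics will differ by organization; what matters is that intended use, boundaries, validation evidence, ownership, and ongoing monitoring exist as controlled, reviewable artifacts rather than informal assumptions.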

In Brief

FDA’s warning letter citing the inappropriate use of Artificial Intelligence is more than an enforcement action. It is a clear signal that while regulators accept the proper use of AI, this comes with an expectation that the technology is used responsibly, transparently, and within the boundaries of established predicate requirements and quality systems.

The lesson is simple. AI can accelerate progress, but it cannot replace scientific understanding, regulatory judgment, or human accountability.

As we discussed at the 2026 Life Sciences Radar event, AI maturity is not defined by speed of adoption alone. It is defined by structured integration and alignment with data governance, validation frameworks, and regulatory expectations. As AI capabilities evolve, organizations must ensure that accountability, traceability, and compliance remain embedded in their operating models.

Organizations that embrace AI with discipline and governance will unlock its full potential.
Those that treat AI as a shortcut will face increasing regulatory scrutiny. 

Axendia will continue to provide guidance and insights for industry leaders as they navigate this new era of intelligent systems in regulated environments.


The opinions and analysis expressed in this post reflect the judgment of Axendia at the time of publication and are subject to change without notice. Information contained in this post is current as of publication date. Information cited is not warranted by Axendia but has been obtained through a valid research methodology. This post is not intended to endorse any company or product and should not be attributed as such.
