Norms Impact

FDA’s New Drug Approval AI Is Generating Fake Studies: Report

A federal health agency is scaling an AI tool that employees say invents studies, while leadership touts faster drug approvals—collapsing the norm that safety decisions must be grounded in verifiable evidence.

Executive

Jul 23, 2025

Summary

FDA employees told CNN that Elsa, the agency’s generative AI tool, produces nonexistent citations and misrepresents research when used in FDA work.
HHS Secretary Robert F. Kennedy Jr. is pressing the FDA to accelerate drug approvals using AI, while the agency has already deployed Elsa broadly and framed it internally as a cost-effective success.
If fabricated or distorted outputs seep into regulatory analysis, the drug-approval process is corrupted at the exact point where accuracy is non-negotiable, putting public health and trust at risk.

Reality Check

Normalizing fabricated citations inside federal drug review sets a precedent where our rights to safe medicines and honest government records are subordinated to speed and optics.
On the facts provided, this is not clearly criminal, but it is a grave governance failure: pushing a system known to produce errors into workflows meant to "increase the speed of drug approvals" invites reckless decision-making and institutionalizes unverifiable analysis.
If false statements or fabricated materials are knowingly used in official submissions to Congress or to influence federal decisions, exposure could shift toward federal false-statement and fraud territory (including 18 U.S.C. § 1001), but the immediate injury described is the erosion of evidence-based administration and accountable regulatory process.

Detail

<p>HHS Secretary Robert F. Kennedy Jr. has promoted the use of generative AI at agencies including the FDA and said AI will be used to approve drugs “very, very quickly,” while also testifying to a House subcommittee in June that AI is already being used to “increase the speed of drug approvals.”</p>

<p>CNN reported, based on interviews with six current and former FDA employees, that the FDA’s AI tool “Elsa” can generate nonexistent studies and misrepresent research. Three employees described using Elsa for tasks such as meeting notes and summaries, and three reported that it “hallucinates” by inventing citations and giving confident but incorrect answers to basic questions, including drug-approval counts for children.</p>

<p>FDA Commissioner Marty Makary deployed Elsa across the agency on June 2. An internal slide leaked to Gizmodo described the tool as “cost-effective,” with a reported cost of $12,000 in its first four weeks. Makary told CNN that Elsa could “potentially hallucinate,” and FDA official William Maloney said CNN mischaracterized the information but did not specify inaccuracies when asked.</p>