The False Claims Act (FCA) has been the federal government’s most potent weapon against fraud since the Civil War. Often called “Lincoln’s Law,” it was originally designed to stop unscrupulous contractors from selling the Union Army sawdust instead of gunpowder. But as we move through 2026, the “sawdust” of the modern era isn’t a physical substance—it’s code, algorithms, and automated prompts.
Artificial Intelligence (AI) has fundamentally reshaped the landscape of government spending. From AI-driven diagnostic tools in healthcare to autonomous defense systems, billions of taxpayer dollars are now flowing into AI-enabled solutions. Where the money goes, fraud inevitably follows.
For the modern whistleblower, understanding the intersection of AI and the False Claims Act is no longer optional—it is the key to uncovering the next generation of “quiet” corporate theft.
The Three Pillars of AI Fraud Under the FCA
Under the False Claims Act, anyone who knowingly submits a false claim for payment to the government is liable for treble damages (three times the loss) plus significant penalties. In the context of AI, we are seeing three distinct “fraud archetypes” emerge:
“Algorithm Inflation” (Medical Necessity & Coding)
In healthcare—which accounted for $5.7 billion of the record $6.8 billion in FCA recoveries in FY 2025—AI is being used to “optimize” billing. However, there is a fine line between optimization and fraud.
The Scheme: Companies are increasingly deploying AI-enabled Electronic Health Records (EHRs) whose “predictive nudges” steer doctors toward higher-paying diagnosis codes that the patient’s actual condition does not support. This is known as Upcoding by Algorithm.
Real-World Example: In early 2026, the DOJ announced a $556 million settlement with Kaiser Permanente affiliates involving “risk adjustment” fraud. While traditional in nature, the investigation highlighted how automated systems were used to retrospectively add improper diagnoses to patient records to inflate Medicare Advantage payments.
“The Black Box” Fraud (Misrepresented Capabilities)
Government contractors (especially in Defense and Cybersecurity) often sell AI tools as “proprietary black boxes.” Because the government cannot always see “under the hood,” contractors may be tempted to lie about what the AI can actually do.
- Accuracy Inflation: Claiming an AI model has a 99% accuracy rate in threat detection when it actually misses 40% of real threats (a 40% false-negative rate).
- Bias Concealment: Failing to disclose that an AI used for government hiring or benefit allocation is trained on biased data that violates civil rights statutes.
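Why can a vendor truthfully cite a high “accuracy” figure while the system misses a large share of real threats? Because when genuine threats are rare, a model can be correct on almost every event overall yet still fail on the events that matter. The sketch below uses invented numbers (not drawn from any actual case) to show how the two figures diverge:

```python
# Illustrative only: how a headline "accuracy" number can mask a high
# false-negative rate when real threats are rare. All figures hypothetical.

def detection_metrics(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """Compute overall accuracy and the false-negative rate
    (the share of real threats the model misses)."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    false_negative_rate = fn / (tp + fn)
    return accuracy, false_negative_rate

# 1,000 network events, only 50 of which are real threats.
tp, fn = 30, 20    # model catches 30 threats, misses 20
fp, tn = 5, 945    # 5 false alarms, 945 correct all-clears

acc, fnr = detection_metrics(tp, fn, fp, tn)
print(f"accuracy: {acc:.1%}")             # looks impressive in a sales deck
print(f"false-negative rate: {fnr:.1%}")  # yet 2 in 5 real threats slip through
```

Here the model scores 97.5% accuracy while missing 40% of actual threats, which is precisely the gap an “accuracy inflation” pitch exploits.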
Cybersecurity & The Civil Cyber-Fraud Initiative
The DOJ’s Civil Cyber-Fraud Initiative is now a major driver of FCA cases. In 2026, the focus has shifted to “AI Integrity.” If a contractor sells an AI system to the government but fails to disclose that the model is vulnerable to “adversarial attacks” or “data poisoning,” they are effectively selling a defective product.
Case Study: AI-Driven ER Billing Fraud in Sanders v. UCHealth
In United States, et al. ex rel. Sanders v. University of Colorado Health et al., No. 21-cv-1164 (D. Colo.), UCHealth hospitals allegedly applied the highest-severity emergency room code, CPT 99285, automatically whenever providers checked a patient’s vital signs more times than the number of hours the patient spent in the emergency department. The relator alleged that UCHealth knew its automatic coding rule “did not satisfy the requirements for billing to Medicare and TRICARE because it did not reasonably reflect the facility resources used by the UCHealth hospitals.”
In a landmark settlement, UCHealth paid $23 million to resolve allegations that it had violated the False Claims Act by falsely coding emergency department visits to obtain payment from federal health care programs.
“Fraudulent billing by healthcare companies undermines Medicare and other federal healthcare programs that are vital to many Coloradans,” said Acting U.S. Attorney Matt Kirsch for the District of Colorado. “We will hold accountable health care companies who adopt automatic coding practices that lead to unnecessary and improper billing.”
The “Professional” Whistleblower: Using AI to Catch AI
One of the most fascinating shifts in 2026 is the rise of the Relator 2.0. You no longer need to be a disgruntled executive with a briefcase full of memos to be a whistleblower.
Data-Driven Qui Tams
Professional whistleblowers and specialized law firms are now using proprietary AI models to mine public datasets (like Medicare billing data or government contract awards) to find statistical anomalies. If a company’s billing patterns suddenly shift in a way that defies clinical logic, AI can flag it as a “potential fraud” before the government even notices.
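At its simplest, this kind of data mining compares a provider’s recent billing mix against its own historical baseline and flags statistically implausible jumps. The toy sketch below (invented data and a basic z-score test; real relator analyses are far more sophisticated) shows the core idea:

```python
# Hedged sketch of data-driven anomaly flagging: compare recent monthly
# high-severity billing rates against a historical baseline using z-scores.
# Data and threshold are invented for illustration.
from statistics import mean, stdev

def flag_anomalies(baseline: list[float], recent: list[float],
                   z_threshold: float = 3.0) -> list[int]:
    """Return indices of recent months whose rate deviates from the
    baseline mean by more than z_threshold standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, r in enumerate(recent)
            if sigma > 0 and abs(r - mu) / sigma > z_threshold]

# Share of ER visits billed at the top severity code, month by month.
baseline = [0.21, 0.19, 0.22, 0.20, 0.21, 0.20]  # stable historical pattern
recent = [0.20, 0.58, 0.61]                      # sudden, sustained jump

print(flag_anomalies(baseline, recent))  # -> [1, 2]
```

A jump from roughly one in five visits billed at top severity to well over half, with no clinical explanation, is exactly the kind of signal that prompts a closer qui tam investigation.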
The 120-Day Rule and AI Drift
The DOJ now expects companies to self-report “algorithmic drift” that results in overbilling. If a company notices its AI is over-claiming and stays silent for more than 120 days, a whistleblower who reports it first stands to gain a massive percentage of the recovery (typically 15% to 30%).
Whistleblower Protections: The AI Whistleblower Protection Act (AIWPA)
Coming forward in the tech world is notoriously difficult. Tech workers are often bound by complex IP agreements and “innovation” NDAs.
In May 2025, a bipartisan group led by Senator Chuck Grassley introduced the AI Whistleblower Protection Act (AIWPA). The bill specifically protects individuals who disclose:
- AI Security Vulnerabilities: Flaws that could lead to data breaches or system takeovers.
- AI Violations: Conduct where an AI model is used to violate existing laws (like the FCA).
Crucially, the AIWPA as drafted prohibits contractual waivers: you cannot sign away your right to be an AI whistleblower in an employment contract or a severance agreement. If you see an algorithm being “tuned” to defraud the government, federal law now offers a clear, protected path to the DOJ.
Why 2026 is the “Year of the AI Relator”
The numbers don’t lie. In FY 2025, whistleblowers filed a record-breaking 1,297 qui tam lawsuits. The success of these claims—returning over $5.3 billion to the Treasury in a single year—has created a “gold rush” for high-quality information.
As the government scales its own “Health Care Fraud Data Fusion Center” and “AI Litigation Task Force,” the demand for insiders who can explain how a specific algorithm was manipulated is at an all-time high. The DOJ has made it clear: they have the data to find the “what,” but they need whistleblowers to prove the “how” and the “who.”
Conclusion: The Ethics of the Algorithm
The False Claims Act has always been about accountability. In the 1860s, it was about the quality of gunpowder. In the 1980s, it was about the price of toilet seats in the Pentagon. In 2026, it is about the integrity of the algorithm.
If you are a developer, a data scientist, or a compliance officer who sees a “glitch” that looks suspiciously like a profit center, you aren’t just seeing a technical error. You are seeing a potential False Claims Act violation. In an era where AI moves at the speed of thought, the law is finally catching up—and it’s paying the people who help it do so.