
Delay, Deny, Defend: AI-Powered Health Insurance Claim Denials

April 11, 2025

Artificial intelligence (AI) is transforming industries, and the health insurance industry is no exception. But when AI becomes a health care gatekeeper, legal and ethical red flags fly high. In her latest essay, legal scholar Stacey A. Tovino explores the disturbing reality of AI-powered claims denials and the growing backlash in courts and legislatures across the country.

A Modern-Day “Delay, Deny, Defend” Strategy

The title of Tovino’s piece echoes Rutgers Professor Jay Feinman’s 2010 exposé, Delay, Deny, Defend, which critiqued the insurance industry’s cost-saving strategies of stalling, rejecting, and litigating claims. Today, similar tactics have resurfaced—but with an AI twist. Insurers are allegedly using flawed algorithms to override physicians’ recommendations, terminate medically necessary care, and force patients into long and costly appeals processes.

Case Study: UnitedHealthcare Under Fire

The heart of Tovino’s article examines Estate of Lokken v. UnitedHealth Group, a class action lawsuit that accuses UnitedHealthcare of relying on an AI model called nH Predict to deny coverage for post-acute care. Plaintiffs allege that the model used generic data to make rigid predictions—ignoring patients’ actual medical conditions—and that human review was either absent or superficial.

Two plaintiffs, Gene Lokken and Dale Tetzloff, reportedly suffered physical and financial harm after their Medicare Advantage benefits were terminated prematurely based on AI-generated predictions. The court allowed parts of Lokken’s case to proceed, notably breach of contract and bad faith claims, even while dismissing others as preempted by federal Medicare law.

Broader Pattern: Humana and Cigna Face Similar Claims

UnitedHealthcare isn’t alone. Similar lawsuits have been filed against Humana and Cigna. In the Humana case, plaintiffs claim the same nH Predict tool was used to deny care. Cigna faces accusations of deploying a proprietary algorithm (PXDX) to batch-deny hundreds of thousands of claims with mere seconds of human oversight.

These cases point to an emerging trend: replacing nuanced human judgment with automated, opaque, and often inaccurate decision-making tools.

Regulatory Response: The Federal Government Steps In

In response to public concern, the Centers for Medicare & Medicaid Services (CMS) issued a Final Rule in 2023 clarifying that medical necessity determinations must be based on individual patient circumstances. In 2024, CMS followed with a guidance document stating that such determinations must not rest on AI-generated predictions alone. CMS emphasized that while AI can assist in decision-making, it cannot replace the critical, case-specific evaluations made by licensed physicians.

However, Tovino argues that these rules remain vague, leaving loopholes. For instance, what constitutes “assistance” by AI? Could insurers still use AI to pre-fill decisions that physicians rubber-stamp in bulk?

State Legislation: California Leads the Way

California has taken a bolder stance. Its new “Physicians Make Decisions Act” (PMDA), effective January 2025, mandates that only licensed professionals—not algorithms—can make medical necessity decisions. The law also demands transparency in claims processes and requires insurers to disclose their use of AI tools.

Other states, including Arizona, Nebraska, Maryland, and Washington, are introducing their own AI-specific bills. While their approaches vary, the common goal is clear: to ensure that AI doesn’t replace human medical judgment and that patients are treated fairly.

Proposals for Stronger Protections

Tovino concludes with several policy recommendations:

  • Anti-retaliation protections for employees who push back against AI-driven denials.
  • AI transparency, requiring insurers to disclose how and when AI is used.
  • Clarifications in statutory language, such as defining “assist,” “supplant,” and “solely” in the context of AI involvement.
  • Public reporting on how often AI-generated denials are reversed on appeal—data that could expose the extent of the problem.

The Bottom Line

As AI tools become embedded in the health care system, the legal system is scrambling to catch up. Tovino’s article makes it clear: while automation can increase efficiency, it must not come at the cost of care, transparency, and accountability. Lawmakers and regulators have a responsibility to ensure that the promise of AI doesn’t become a peril for patients.
