KENTUCKY—A class-action lawsuit filed in Kentucky on Tuesday, Dec. 12, alleged that Humana illegally used artificial intelligence (AI) to deny patients care owed to them under their Medicare Advantage (MA) plans.
The lawsuit alleges that Humana uses the AI model nH Predict to override treating physicians' determinations of medically necessary care, including post-acute care coverage. It also claims Humana knows the model's predictions are "highly inaccurate" yet continues to use the system.
Humana is one of the largest insurers in the country, with close to 6 million MA members.
"Humana systematically deploys the AI algorithm to prematurely and in bad faith discontinue payment for healthcare services for elderly individuals with serious diseases and injuries," the lawsuit said. "These elderly patients are left either with overwhelming medical debt, or without the medical care that they require."
One of the plaintiffs named in the Humana class-action lawsuit is 86-year-old JoAnne Barrows, who fell at home, fracturing her leg. Barrows was admitted to a hospital, placed in a cast and put on a non-weight-bearing order for six weeks. Upon discharge, she was admitted to a rehabilitation facility. After only two weeks of care, Humana informed Barrows it was terminating coverage for her care.
Barrows and her doctor were "bewildered" by the premature termination, as she was still on a non-weight-bearing order. Despite her doctor's recommendation that she continue rehabilitation treatment, Humana refused to cover it, denied her appeals and deemed her ready to return home even though she was bedridden and using a catheter.
Barrows and her family paid out of pocket for a less expensive assisted living facility, where she "deteriorated" due to a lack of quality care before returning home.
Additionally, the lawsuit accuses Humana of disciplining and terminating employees who deviate from the nH Predict AI model, regardless of whether a patient requires more care.
Humana is not the first insurer to face allegations of using AI to deny medically necessary care to MA members. UnitedHealth Group faced a similar suit in November alleging it used the same model, nH Predict, to deny care or prematurely discharge elderly patients from care facilities.
The UnitedHealth lawsuit claims nH Predict has a 90% error rate, as 90% of patient claim denials are reversed through internal appeals or federal administrative law judge proceedings.
The two lawsuits reflect the ongoing scrutiny of AI in health care, especially given the Humana lawsuit's allegation that the model saves the company money by allowing it to deny claims it is obligated to pay, instead shifting financial responsibility to American taxpayers.
In October, President Joe Biden issued an executive order establishing new standards for AI safety and security, and bipartisan anger has been directed at MA care denials. In the new year, the Centers for Medicare & Medicaid Services will begin restricting how MA plans use predictive technology tools to make certain coverage decisions.