Feds Have Warned Medicare Insurers That ‘AI’ Can’t Be Used To (Incompetently And Cruelly) Deny Patient Care
from the I'm-sorry-I-can't-do-that,-Dave dept
“AI” (or more accurately, large language models nowhere close to sentience or genuine awareness) has plenty of innovative potential. Unfortunately, most of the folks actually in charge of the technology’s deployment largely see it as a way to cut corners, attack labor, and double down on all of their very worst impulses.
Case in point: “AI’s” rushed deployment in journalism has been a keystone-cops-esque mess. The brunchlord types in charge of most media companies were so excited to get to work undermining unionized labor and cutting corners that they immediately implemented the technology without making sure it actually works. The result: plagiarism, bullshit, a lower quality product, and chaos.
Not to be outdone, the U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of an already very broken system. Except here, human lives are at stake.
For example, UnitedHealthcare and Humana, two of the largest health insurance companies in the US, have been using “AI” to determine whether elderly patients should be cut off from Medicare benefits. If you’ve navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless shitscape it already is, long before automation gets involved.
Not surprisingly, neither Humana’s nor UnitedHealthcare’s implementation of “AI” was done well. A recent STAT investigation of the system they’re using (nH Predict) showed the AI made major errors 90 percent of the time, cutting elderly folks off from needed care prematurely, often with little recourse for patients or families. Both companies are now facing class action lawsuits.
Any sort of regulatory response has, unsurprisingly, been slow to arrive, courtesy of a corrupt and incompetent Congress. The best we’ve gotten so far is a recent memo from the Centers for Medicare & Medicaid Services (CMS), sent to all Medicare Advantage insurers, informing them that they shouldn’t use LLMs to determine care or deny coverage to members on Medicare Advantage plans:
For coverage decisions, insurers must “base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant,” the CMS wrote.
There’s no indication yet of any companies facing actual penalties for the behavior. The memo notes that insurers can use AI to help check whether a claim follows plan rules, but it can’t be used to sever grandma from essential care. Because, as we’ve seen repeatedly, LLMs are prone to error and fabulism, and executives of publicly traded companies are prone to fatal corner cutting.
Granted, this is only one segment of one industry where undercooked AI is being rushed into deployment by executives who see the technology primarily as a corner-cutting, labor-undermining shortcut to greater wealth. The idea that Congress, regulators, or class actions will lay down some safety guardrails before this kind of sloppy AI results in significant mass suffering is likely wishful thinking.
Filed Under: ai, healthcare, insurance, llm, medicare, medicare advantage, nh predict
Companies: humana, unitedhealthcare
Comments on “Feds Have Warned Medicare Insurers That ‘AI’ Can’t Be Used To (Incompetently And Cruelly) Deny Patient Care”
“The lawsuit, filed in the US District Court in western Kentucky, is led by two people who had a Humana Medicare Advantage Plan policy ”
Medicare Advantage ..
could be a significant contributor to this problem
But the companies will still use said LLM/AI products until they get caught or a whistleblower steps up, which prompts a multi-year investigation and legal/lobbying fight, which will lead to a $50 fine.
Meanwhile, care will be denied to 100k-plus people, saving the insurance companies billions they would otherwise have to pay out for covered care, which raises profits, which raises their stock prices, which raises the CEO’s salary.
If only the ones doing the enforcement had the ability to make larger threats. But lobbyists and the captured (mostly Republican) politicians will fight tooth and nail to make sure that never happens by distracting everybody with “Oh look! Border issues! Oh look, save the children! Oh look, we shut down the gov again because things aren’t going our way even when we lost!”
The Schoolhouse Rock bill threw itself in the shredder 20 years ago when it saw what politics had become.
LLMs and AI are exactly the same thing? Identical in every respect? So much so that the terms are entirely interchangeable?
I had no idea they were twinsies.
Now that is the kind of information I’ve come to expect!
Re:
I feel like you’re trying to make a point here, but I’m at a loss as to what it is.
Re: Re:
White supremacist harassers are merely here to lie, gaslight and malign everything remotely left of their ideology.
All they want to do, since they can’t outright shoot the staff, is to force them to sing to Trump’s tune.
Re: Re:
It’s the same one and only point LittleCupcakes has made with every single one of their comments here: They’re a moron.
Re: Re: Re:
Actually, LittleCupcakes is ignorant, as are you for using an ableist word. You wouldn’t call someone a fag based on your perception of their abilities, so don’t call them a moron.
Re: Re: Re:2
Since when is “moron” ableist?
Re: Re:
I think the point is that tech people understand that the term is LLM, but the media has convinced people that this is some kind of artificial intelligence (AI) that will eventually take over the world like Skynet.
Re: Re: Re:
The only reason the media runs with it is that IT industry corporate marketing and C-levels keep saying “AI,” with the obvious approval of their bosses and the apparent approval of all the other code wranglers. Not that the media couldn’t be less credulous and push back.
Re:
No, they used an algorithm, not a chatbot. Even then, not all chatbots are run by LLMs.
Re: Re:
Actually, algorithms are absolutely examples of AI, albeit more limited than heuristics.
This is my real fear of AI: not that Skynet will wake up one day and nuke us all, but a thoughtless and dangerous enshittification of the things that actually affect people’s lives, especially against those who are already harmed by the systemic biases in society.
Re:
Or even worse, using it while knowing full well that people will get hurt.
Re: Re:
Why else do you think they’re dumbing everyone down, all while telling the morons an algorithm is AI when it isn’t? It’s a way of deflecting blame: the thoroughly stupefied masses won’t be able to mount a logical or ideological defense against it, because the AI is held up as magic to them. Anyone enforcing or defending this doesn’t realize the road to hell they’re paving for themselves. But society is led by the greedy and insane at this point, so it shouldn’t be any surprise that hard lessons will have to be learned through big mistakes.
Re: Re: Re:
After all, safety regulations were paid for in blood and bodies.
So shall the new regulations on software be written.
And by then it will be too late.
OptumServe the Huge HHS/CMS Contractor
Here we go again, and you can read all of this at my Twitter feed @medicalquack. Understand that UNH-owned OptumServe is a huge government contractor; in fact, they have federal contract vehicle status, which means government agencies don’t require them to bid and hand them millions all the time. OptumServe has the contract for Medicare P1 Claim Integrity, and Lewin, a consultancy that is another part of OptumServe, runs CMS provider payment models. There’s AI hidden all over the Optum code shops. They do have a huge code shop; IBM pays to license 40 Optum patents, as one example.
So as usual, I question these perceptions of a CMS staff of nothing but worker bees being able to do much of anything; they are pretty much run by OptumServe, which is big with the DOD today as well. Again, I posted the images, links, everything, etc., where Optum “brags” about all they do for the Feds. This is only one example of the AI they have; we have no idea what they have on their back-end servers, as they hold thousands of IP patents, folks, and hired Rick Hamilton from IBM a few years ago to work at Optum. He’s one of the biggest IP patent grantees in healthcare.
So no offense, but look and see who’s really running CMS. It’s a fox in the henhouse; remember, UNH supports 80% of AARP’s revenue and sells Medicare Advantage policies. I just wish folks would wake up to this fact, as I keep telling everyone; it’s hiding in plain sight, too.
Insurers know better
The feds may have warned insurers that AI can’t be used to deny patient care, but the insurers will prove to the feds that yes, it can be used that way.
If they have to, it will simply be an AI coworker giving advice. Or a computer using its trained guidelines to deny patient care in the ways it has been trained to do.
Medicare insurers: “Hold my beer!”
This actually sounds like a good thing, given the outsized power wielded by geriatrics and their advocates as a political class.
The more we can do to reduce life expectancy, the better shot we have at electing politicians who will actually care about younger folks and not focus on the gerontocracy.
Re:
…How do we get the AI to deny medical care to sociopaths instead of old ladies? Seems fitting.
Re:
“The more we can do to reduce life expectancy”
Life expectancy has decreased due to the pandemic and the poor handling of same by society in general, but government in particular. I’m looking at you, MAGAts.
I wish they’d said: “By ‘can’t use’ we don’t mean you’re not allowed to use it. We mean we won’t accept that it was used, and we will hold the person signing off on any decision responsible for that decision, taking action against them and the company for any failure to comply with the regulations. And if nobody signs off? We’ll treat that like we would any other decision made without proper evaluation.”
But they'll just do it anyway.
Since when have corporations complied with regulations that are much cheaper to ignore?
You can’t arrest an AI for medical malpractice.
You can, however, arrest the person who decided to put into place an AI that commits medical malpractice.