“AI” (or, more accurately, large language models nowhere close to sentience or genuine awareness) has plenty of innovative potential. Unfortunately, most of the folks actually in charge of the tec…
Here's a wild idea: make them publish the exact criteria and formulae used to determine coverage. Their decisions should be verifiable and reproducible.
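To make "verifiable and reproducible" concrete: if the criteria were published as data and the decision function were deterministic, anyone could recompute a decision and check it. A minimal sketch, with invented procedure codes and dollar limits (insurers' real criteria are not public):

```python
# Hypothetical sketch of a verifiable, reproducible coverage decision:
# published criteria + a deterministic rule + a recomputable digest.
import hashlib
import json

# Published criteria (invented for illustration): max allowed billed
# amount per procedure code before the claim is denied.
PUBLISHED_CRITERIA = {"70450": 1200.00, "87880": 45.00}

def decide(procedure: str, billed: float) -> dict:
    """Deterministically approve/deny and emit a verifiable record."""
    limit = PUBLISHED_CRITERIA.get(procedure)
    approved = limit is not None and billed <= limit
    record = {"procedure": procedure, "billed": billed,
              "limit": limit, "approved": approved}
    # Anyone holding the published criteria can recompute this digest
    # and confirm the decision followed the stated rule.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

print(decide("87880", 40.00)["approved"])   # True: under the limit
print(decide("70450", 5000.00)["approved"]) # False: over the limit
```

The point is not the toy rule but the property: identical published criteria plus identical claim data must yield an identical, checkable decision.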
They will add someone whose job it is to click okay on every decision the AI makes. Therefore the AI isn't making a decision; the human always clicking okay is.
The AI will deny the care, and the denial will be rubber-stamped by a doctor who graduated last in his class; it's the only job he can get, acting as a traitor for the insurance companies.
Yeah, sure, ok. We pinky promise not to use AI to generate leads that are then printed out on paper and put in front of a doctor's assistant's autopen for signatures denying insurance or coverage.
There is absolutely ZERO way to practically enforce this. An AI team can act as a black box, ingesting data and outputting hard copies that cannot be traced back to it. There is no way this will not happen.
"We'll audit the company!" -> they'll send the data to an offshore shell company that doesn't follow the law, then the recommendations will be sent back.
Medical directors do not see any patient records or put their medical judgment to use, said former company employees familiar with the system. Instead, a computer does the work. A Cigna algorithm flags mismatches between diagnoses and what the company considers acceptable tests and procedures for those ailments. Company doctors then sign off on the denials in batches, according to interviews with former employees who spoke on condition of anonymity.
“We literally click and submit,” one former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”
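The flagging described in that excerpt is essentially a table lookup: compare the billed procedure against a list of what the insurer considers acceptable for the diagnosis. A minimal sketch, assuming a hypothetical diagnosis-to-procedure table (the real Cigna rules are not public; the codes below are only illustrative):

```python
# Hypothetical sketch of the diagnosis/procedure mismatch flagging
# described in the article. The APPROVED table is invented for
# illustration and is not Cigna's actual rule set.

# Map each diagnosis code to the set of procedure codes the insurer
# considers acceptable for that ailment.
APPROVED = {
    "J02.9": {"87070", "87880"},  # sore throat -> throat cultures/tests
    "E11.9": {"83036", "82947"},  # type 2 diabetes -> glucose tests
}

def flag_mismatch(diagnosis: str, procedure: str) -> bool:
    """Return True when the claim should be queued for denial:
    the billed procedure is not on the approved list for the diagnosis."""
    return procedure not in APPROVED.get(diagnosis, set())

claims = [
    ("J02.9", "87880"),  # approved pairing -> not flagged
    ("J02.9", "70450"),  # head CT billed against a sore throat -> flagged
]
flags = [flag_mismatch(d, p) for d, p in claims]
print(flags)  # [False, True]
```

Note what this sketch makes obvious: no patient record is ever read. The "medical judgment" step is reduced to batch-approving whatever the lookup flags, which is exactly the 10-seconds-for-50 workflow the former doctor describes.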