Google mortality score. Black box algorithm. Hearing benefits.
Help ya boi
Hello. If you have been enjoying this newsletter, do me a solid and send the sign up link to a friend. I have a bet with my boss about when we’ll reach 60K subscribers and I’d like to not lose.
The black box
In the autonomous vehicle world there’s a debate – should companies slowly introduce assisted driving features until we get to a fully driverless product, or should we ONLY roll out cars once they can operate without a human in the loop?
The question comes from the idea that if you introduce assisted driving gradually, drivers pay less and less attention to the road because they think the car can handle it. Those drivers then miss the edge cases where the on-board systems expect a human to take over. So should we just wait until humans aren’t expected to be involved at all?
This brings me to an overarching point: I think one of our core issues as we move from human-based processes to AI-based ones is that we’re going to OVER rely on AI for decision-making before it’s actually ready.
Healthcare AI companies are similarly introducing different levels of automation to the tasks they’re trying to solve.
In the autonomous car example it’s clear that the end goal is no humans driving – how we get there is up for debate. But is the end goal of healthcare fully autonomous diagnostics, monitoring, and so on? And if so, as different processes are slowly automated, will we face the same over-reliance problems the driverless car space is facing?
I’m not sure there’s a consensus here, but it’s something worth thinking about.
The algorithm has blessed me
Here’s a healthcare example of over-reliance from this absolutely wild article about using an algorithm to allocate home care funds.
In Idaho, the state tried to use an algorithm to allocate home care hours and funds to pay for help for the severely disabled. People saw their funds drop by as much as 42% when the AI was implemented, demanded an explanation, and the state “declined to disclose the formula it was using, saying that its math qualified as a trade secret.” Eventually the case was taken to court, and it was discovered that the tool was relying on very flawed data.
This case highlights a few things.
1) How will auditing work when AI tools are deployed in the wild? This wasn’t only an issue of bad data going in; it took a court case to force an audit of the system. How are we going to ensure that data + algorithms are constantly tested and up to snuff?
2) How are we going to handle “trade secrets” when it comes to algorithms in healthcare? I get that an AI-based business depends on its data and algorithms, but if people can’t examine them, how can its decisions be understood and questioned?
3) Who should bear the blame when these mistakes are discovered? Is it the company that designs the algorithm, or the entity that uses it and makes decisions based on its output?
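To make the auditing question in (1) concrete: one basic check is to compare new allocations against prior ones and flag sharp drops for human review before anything is finalized. This is a minimal sketch of my own – the function, data, and 30% threshold are all illustrative assumptions, not Idaho’s actual system.

```python
# Hypothetical sanity check for an allocation algorithm: flag anyone whose
# allocated care hours fall sharply versus the prior period, so a human
# reviews the change before it takes effect. Names and the 30% threshold
# are illustrative assumptions, not any real state's audit process.

def flag_large_drops(prior_hours, new_hours, max_drop=0.30):
    """Return the IDs of people whose hours fell by more than max_drop."""
    flagged = []
    for person_id, prior in prior_hours.items():
        new = new_hours.get(person_id, 0)
        if prior > 0 and (prior - new) / prior > max_drop:
            flagged.append(person_id)
    return flagged

prior = {"a": 100, "b": 80, "c": 50}
new = {"a": 58, "b": 78, "c": 49}     # "a" drops 42%, like the Idaho cuts
print(flag_large_drops(prior, new))   # ['a']
```

A drop this size doesn’t prove the algorithm is wrong, but it’s exactly the kind of change that should trigger a review instead of surfacing only after a lawsuit.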
Clearly AI has a very important part to play in healthcare, but when the distribution of resources depends on a black box, people should have the right to know how a decision was made. As AI handles more and more complex sets of information, it’s going to be increasingly difficult for a human to know whether the output is reasonable. Becoming over-reliant on computers as decision makers before they’re ready for widespread use is a very real risk.