
AI in healthcare


“Epic’s overhaul of a flawed algorithm shows why AI oversight is a life-or-death issue”

The ratio of false alarms to true positives was about 30 to 1, according to CT Lin, the health system’s chief medical information officer.

What’s interesting about this case is that when the AI tool for spotting sepsis was deployed as designed, as directed by the vendor, it was essentially unusable. Too many false alarms. Continuing to use it as designed would have killed people. The tool didn’t begin to be useful until they moved it off to the side and turned it into a monitoring aid for a dedicated team responsible for alerting other teams that a patient might be developing sepsis.
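To make that ratio concrete, here’s a rough back-of-the-envelope sketch (the weekly case count is my own illustrative number, not from the article) of what 30 false alarms for every true positive means for the people fielding the alerts:

```python
# Back-of-the-envelope illustration (my own made-up numbers, not the article's):
# what a 30:1 ratio of false alarms to true positives means in practice.

false_alarms_per_true_positive = 30

# Precision: the chance that any single alert is a real case of developing sepsis.
precision = 1 / (1 + false_alarms_per_true_positive)
print(f"Chance an alert is real: {precision:.1%}")  # roughly 3%

# Assume a ward sees 5 genuine sepsis cases in a week (an illustrative figure).
true_cases = 5
total_alerts = true_cases * (1 + false_alarms_per_true_positive)
false_alerts = total_alerts - true_cases
print(f"Alerts that week: {total_alerts}, of which {false_alerts} are false alarms")
```

At that rate, ignoring the alerts becomes the rational response, which is exactly why the default deployment was unusable.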

What made this work were the human relationships. At shift changes, members of the sepsis AI team introduced themselves to the teams and nurses in the departments they were monitoring. People could build trust in each other, and the AI team could develop a sense for what was and wasn’t the AI’s strong suit.

To bring this home and connect it to how everybody is injecting AI into their software, productivity tools, and knowledge bases:

The AI didn’t begin to deliver positive results until they stopped using it for automation or reasoning and instead used it as an assistant that was assumed to be very unreliable.

I suspect that almost every AI integration we’ve seen announced to date will end up being a mistake, except possibly the copilot-style ones, and even those are still too unreliable for broad use. (In my personal opinion. Most of tech is justifiably going to disagree, especially since you can tell from their output that their software doesn’t need to work or have any long-term reliability to be considered a success.)

What seems especially risky is using these systems for automated decision-making and predictions. There they seem particularly prone to taking potentially catastrophic shortcuts:

“Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy”

And regulation seems to lag reality, as you can see from this 2022 blog post:

“No Doctor Required: Autonomy, Anomalies, and Magic Puddings – Lauren Oakden-Rayner”

Calling the device a normal detector makes us think that the model is only responsible for low-risk findings.

And:

We need to regulate these devices not based on what they are called or what cases they operate on a majority of the time, but based on the worst mistakes they can make.

For more of my writing on AI, check out my book The Intelligence Illusion: a practical guide to the business risks of Generative AI.

You can also find me on Mastodon and Twitter.