The Healthcare AI Revolution that Isn't
By Mario
The internet is rife with sensational stories about how artificial intelligence (AI) will revolutionize healthcare. A quick internet search–or better yet, a query to Bard or ChatGPT–promises everything from more efficient healthcare delivery to better predictive disease models to the “end of physicians altogether.” If you believe everything you read, we’ll be getting all of our healthcare from computers, with no need for human touch.
These predictions are wrong, and there’s a long track record to prove it.
Admittedly, I’m not a computer scientist, and I know little about coding. But as an emergency physician, I know what it’s like to provide bedside care. From my perspective, it’s in the delivery of better clinical care that AI currently falls short and probably always will.
From IBM Watson Health in 2013 to Epic’s AI sepsis algorithm to Verily’s recent diabetic retinopathy tool, the healthcare field is a graveyard of AI technologies that looked like they performed well in pre-clinical testing but ultimately failed when it mattered most. These failures haven’t kept smart people from making bold predictions, however. In 2016, deep learning pioneer Geoffrey Hinton (yes, the guy who resigned from Google this morning) predicted that AI would make radiologists obsolete within five to ten years. Seven years later, the evidence suggests he was wrong: not only does the country still have a shortage of radiologists, but the uneven geographic distribution of these specialists has also left rural America medically underserved. Additionally, several studies have shown that automated systems still do not perform up to par. For instance, AI tools screening for macular degeneration or detecting skin cancer haven’t performed well in different lighting conditions or when used on people with different skin tones. Good thing there was still a human clinician around…
Against this backdrop of skepticism, I recently read an interesting story about a new report compiled by researchers at Duke University, who found that even though “hospitals [publicly] rave about artificial intelligence…on the front lines, the hype is smashing into a starkly different reality.” The story cited several issues discovered through user interviews, which essentially centered on a few problems:
- Caregivers noted that the AI models were often incorrect and did not help drive decision-making in practice.
- Hospitals and health systems are not designed to implement and maintain AI systems over time.
- There’s too much heterogeneity in practice settings and patterns for AI algorithms to perform well in anything other than idealized test settings.
Some argue that failure is part of progress and that the examples and problems detailed above are still steps forward. That’s a fair point, but only if the environment around these systems lets you learn from the failures. And that’s where AI will always get stuck in healthcare. The fact is that hospitals and health systems don’t have the IT staff or the funding to fully design, build, implement, and maintain these systems over time. As the article above noted, these highly complex algorithms require specialized computer scientists to build and maintain. When they break down, the fix is often more than a simple patch and requires a deep understanding of how the systems were built. Some hospitals have tried to address that problem by spinning out private companies that can build technology more efficiently. The problem is that they keep running into regulatory hurdles with the FDA and medical device approval pathways.
This problem is not isolated to healthcare. Education is another field where AI has been widely predicted to revolutionize the teaching and learning paradigm, but which doesn’t have the technical staff or the financial resources to support a comprehensive AI solution. The same goes for social services and any other field that directly depends on human services delivery. Yet the public narrative continues to be that these systems are going to change life as we know it. I’m skeptical.
So what do I think AI should be used for?
AI can play a valuable role in searching large amounts of disparate data to provide new insights for human beings to evaluate. AI can also assist in highly technical drug discovery, analyze complex financial models, and provide test scenarios for hypothetical evaluation. It can also offer a new model of human/machine interface that can seem oddly human at times, changing how we interact with our computers. But it’s important to remember that these AI systems are still based on models and machine learning algorithms. The concerns many have emphasized about these systems gaining an unintended level of awareness are unquestionably valid. But “awareness” is not the same thing as service delivery.
When it matters–whether it’s the diagnostic workup, the point of care delivery, or the broader field of human services delivery–I suspect we’ll always need an actual human on the other side.