Addressing the Divide Between the Promise of and Trust in Healthcare AI
By Matt Phillion
New research from Sage Growth Partners reveals a growing divide between the promise of artificial intelligence (AI) in healthcare and how prepared hospital and health system leaders are to implement it.
While many leaders consider AI the top technology to watch at the moment, most reported low levels of trust in current solutions and noted a lack of concrete integration strategies.
The study found that:
- 83% of executives believe AI can improve clinical decision making
- 75% say it can reduce operational costs through better efficiency
- 67% report that they are investing in AI solutions to streamline administrative operations
- 57% say that AI-based clinical solutions are among their top five tech priorities over the next two years
Contrasting this is significant hesitation about AI use:
- Only 13% have a clear strategy for integrating AI into their clinical workflows
- Just 12% believe today’s AI algorithms are robust enough to rely on
- Just 10% say their organizations are aggressively pursuing AI today
- 49% cite appropriate use of AI as one of their top three current challenges
The survey, The Healthcare C-Suite’s Take on AI, was conducted among 101 executives across integrated delivery networks, academic medical centers, and independent hospitals.
“AI solutions have been steadily growing from the administrative perspective, automating manual tasks in areas like revenue cycle management, but it was nice to see that clinical solutions were jumping in priority,” says Stephanie Kovalick, Chief Strategy Officer with Sage Growth Partners. “We’ve seen clinical AI adopted in traditional formats, like reading radiology images, but we’re starting to see interest in generative AI, as well.”
Kovalick notes, however, that rising interest is accompanied by an equal amount of healthy trepidation.
“There’s concern that it’s not ready for prime time, which could lead to patient safety issues,” she says.
Patient safety needs to be the focus when using this technology.
“In all the time we’ve been doing this, patient safety never comes first. Even in this report, in previous years safety and quality were not front and center,” says Dan D’Orazio, CEO of Sage Growth Partners. “I think everyone in healthcare shows up to be safe and to provide high-quality care, but I don’t think the system is designed to make that easy. When money gets involved, focus on safety decreases. I think everyone is committed to patient safety, and people see the promise of AI, but they’re also trying to figure out how much we can trust it and how much agency we can give it. Ninety-five percent of enterprise AI deployments have failed. There’s hope, but also an equal amount of caution.”
D’Orazio compares it to the advent of the electronic health record (EHR).
“Look how long it took us to optimize that. Optimizing in healthcare is very hard,” he says. “I don’t think we’re on the same trajectory as the EHR, but it’s still early.”
“I think it’s about the use case for AI,” says Kovalick. “Traditional uses like reading mammograms are tried and true and prudent. It’s primarily generative AI that raises concerns, specifically around hallucinations. At what point do you get to the right answer? You have to ask ChatGPT the same questions multiple times to triangulate the right answer.”
How to build much-needed trust
Vendors tout generative AI as a symptom-checking solution, but it isn’t ready for that role, Kovalick points out.
“How do you start to regain trust when you’ve had these solutions make claims and fail?” she says. “It’s about having a level of confidence it can deliver the right doses, recommend the right medications, and at that point it’s not only a patient safety issue but a malpractice issue.”
“One of the best ways to make healthcare safer is to spend more time providing care and less time administering and paper-working,” says D’Orazio. “We’ve heard this on calls recently. For the first time, with ambient listening, doctors are spending more literal face time with their patients. And we’re seeing patient satisfaction scores going up. In this case the machine can work for the people, not the people working for the machine.”
While AI changes the way the provider and the patient interact, removing time and cost burdens, we also need to consider how it enables that human connection.
“You really have to know how to use these systems, and you do need to be skeptical,” says D’Orazio. “Clinical experience is critical here.”
It’s also worth looking at how the technology can impact the ongoing issue of a shrinking workforce and provider burnout.
“That in itself is a huge safety problem. What can we do to support those providers?” says D’Orazio.
“That is one of the promises of the technology: Alleviating workforce issues and getting clinicians back to the patient more,” says Kovalick. “There are two pieces to consider: education and awareness. How can we make sure providers use the technology without going down the wrong path, and how do we make sure we understand the data that was used to train the algorithm?”
Clinical data sets are inherently biased, Kovalick notes.
“While I would never suggest clinical research is intentionally biased, when you consider the populations included in many data sets, you have to recognize that those data sets may come with a unique, innate bias. How do we adjust for the fact that the data is going to have some level of bias?” she says.
There’s room for technology to help improve data quality to get it into the hands of human clinicians, D’Orazio explains, but it must be built in the right way to be successful.
“We have really poor data quality in healthcare,” he says. “That’s what these algorithms are sitting on top of. The models are being built on top of models. What’s the traceability? Something like 4,000 to 5,000 peer-reviewed healthcare studies come out every day. How can anyone keep up with that? We’re not getting the benefit of all these insights without a machine, but the question is, where does the machine start? How is it integrated into the workflow? Do we trust that? A low number of clinicians trust the clinical quality of the output, and I think that’s appropriate. If someone said they trusted it now, I’d be scared. The models are changing so quickly that what happened yesterday doesn’t matter today.”
It’s an age-old set of questions, D’Orazio says: What is quality, who defines it, how do we report it, and how do we track it?
“Healthcare is still very human-centric. It’s not a widget. There are a wild number of inputs we work with,” he says.
“There is a lot of data, and when you start to aggregate it, it’s the technicality of it: the file formats are the same, but when you look at a big data set, at all the patients, and all the claims…most clinical data is hard to read because it’s free text, unstructured, within the EHR,” says Kovalick. “Moving data is easy, but connecting data on the other end to make it a meaningful, cohesive data set is not.”
The more people use the technology, D’Orazio explains, the more they are going to see the upsides and downsides.
“The more you engage with it, the more you realize the power along with the perils,” he says. “The technology is way ahead of how humans understand it and trust it, but as we use it more every day, it changes the way you work. On the one hand, if you don’t ask it the same question five ways and you don’t have the experience to assess that, you might think you got the right answer on question one. But on the other hand there’s the basic stuff: How do you keep up with 50 million clinical studies and surface something for the human to unpack? I think our understanding, appreciation, and comfort will grow along with our skepticism and discernment.”
It’s a question of how and when the technology is used, and of training people to make the best of it.
Kovalick sees great potential for AI to help in the EHR space, if it’s implemented in a way that doesn’t simply bolt it onto existing technology.
“It’s time for a disruptor in the EHR market,” says Kovalick. “We’re layering AI solutions on top of existing data versus it being built in. It’ll be interesting to see how the level of trust changes when you have a system that’s built on AI as a foundation rather than layering it on top of digital filing cabinets.”
“It’s different than anything I’ve seen in my professional life,” says D’Orazio. “It has tremendous power and tremendous peril, and we need to understand where those connect.”
Matt Phillion is a freelance writer covering healthcare, cybersecurity, and more. He can be reached at matthew.phillion@gmail.com.