Considering Trust When Implementing AI in Healthcare
By Matt Phillion
As healthcare organizations adopt AI, leaders are looking at how to continue innovating without risking patient trust, clinical safety, or care equity. While AI promises to reduce costs and improve efficiency, surveys show that both clinicians and patients remain wary.
According to two surveys from 2024 by the American Medical Association and the University of Minnesota, over 40% of doctors and two-thirds of patients have concerns about privacy, harm, and the erosion of the doctor-patient relationship.
There are lessons the industry can take from previous technological rollouts, like physician order entry, health information exchanges, or even preparing for Y2K, says Jim Younkin, senior director at Audacious Inquiry, part of PointClickCare.
“What those have in common with AI projects today is that you really need to put good governance together, and I think that might be where people are skipping steps and falling down,” says Younkin. “AI moves fast, governance moves slow. In healthcare, that’s not a bug, it’s a safety feature. The goal isn’t big-bang disruption. It’s careful, iterative trust-building.”
The right governance and framework help set the foundation for proper implementation.
“Getting the right people at the table and understanding what kind of ROI you’re trying to measure is important, but more importantly you need to know what kind of guardrails you want to have in place,” says Younkin.
That means bringing clinicians into the process and giving them a say.
“Otherwise, you’ll have mega-distrust of the system overall, and even then, that doesn’t mean they’re going to trust it, nor should they,” says Younkin. “You gain trust through transparency, checking the receipts, and understanding that those proper guardrails are in place.”
Don’t forget to involve patient advocacy as well, Younkin notes.
“This isn’t just pulling 10 folks in off the street. There are people who regularly represent patients in these matters who have really good perspectives on things,” says Younkin. “In healthcare in particular, with artificial intelligence, this is particularly important because we know patients are more comfortable with AI being used for administrative tasks, helping the organization know their patients better, and that this can help create a more personalized experience for the patient. But from a clinical perspective, patients still expect their doctor to be the one making the decisions, not some faceless robot collating data.”
Addressing concerns from clinicians and patients
Many of these same issues came up when the transition to electronic health records was happening, Younkin notes.
“Physicians rely on data coming up in the record—and when you see something on the screen in black and white, you’re more likely to take it as truth,” he says. “I don’t necessarily want to turn clinicians into a really suspicious group who don’t trust their data, but we have to give them a reason to trust their data.”
Younkin likens new technology in healthcare to a plane taking flight.
“There’s a lot of legwork your ground crew goes through to make sure that plane is safe to take off,” he says.
That level of care is needed in the healthcare space, Younkin says.
“Having a good framework in place for developing and delivering AI content is going to be hugely important,” Younkin says. “But you also want to have a good ongoing education process for clinicians. You don’t want to run the risk of having one side of the continuum of care that trusts the data completely and the other side that doesn’t trust it, no matter what. I don’t think either way is a good way to look at things.”
Younkin points out that we already have a concept in AI that can be used to balance this.
“We have tools in AI to be able to deliver confidence scores,” he says. “You want to provide information to the clinician about possible matches or relevant information from the record, but ideally, you do it in a way that the physician understands there’s, for example, an 80% likelihood this is the case.”
How that information is presented determines whether the clinician stays in control of the decision.
“There’s a big difference between whether it’s 60% likely or 90%,” says Younkin. “But even more important is when AI hits that uncertainty threshold—say below 70% confidence—it should acknowledge that limitation rather than guess. False confidence is more dangerous than honest uncertainty. The smartest AI isn’t the one with all the answers, it’s the one that knows when to pause and say, ‘I need more information for a reliable assessment.’”
This applies more to the administrative side than the clinical one: for example, a chatbot might examine a body of information in the background and identify documents that can answer a question, along with a measure of confidence.
“At the end of the day, AI is just a probability machine,” says Younkin. “There should be ways you can take what’s happening behind the scenes and turn that into a confidence score.”
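The abstention behavior Younkin describes—surfacing a probability as a confidence score and declining to guess below a threshold—can be sketched in a few lines. The 70% cutoff comes from his example; the record structure and function names here are illustrative assumptions, not drawn from any particular product.

```python
# Illustrative sketch: surface a model's probability as a confidence score
# and abstain below a threshold rather than guess. The record structure
# and function names are hypothetical examples.

ABSTAIN_THRESHOLD = 0.70  # below this, say "I need more information"

def present_match(candidate: dict) -> str:
    """Turn a scored candidate match into a message for the clinician."""
    score = candidate["probability"]
    if score < ABSTAIN_THRESHOLD:
        # Honest uncertainty: acknowledge the limitation instead of guessing.
        return (f"Confidence too low for a reliable assessment "
                f"({score:.0%}); more information is needed.")
    # High enough confidence: show the match, but keep the score visible
    # so the clinician can weigh it.
    return (f"Possible match: {candidate['summary']} "
            f"(confidence: {score:.0%})")

print(present_match({"probability": 0.82,
                     "summary": "record for J. Smith, DOB 1964-03-02"}))
print(present_match({"probability": 0.55,
                     "summary": "record for J. Smyth, DOB 1964-02-03"}))
```

The point of the sketch is the asymmetry: the system never presents a low-confidence guess as an answer, and even a high-confidence answer carries its score so the clinician, not the tool, makes the call.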
Accuracy and ROI
Younkin and his team have met with working groups to talk about their day-to-day work and what kind of AI or automation tools might help them streamline it.
“What is something you’d want at your fingertips rather than having to go off and spend time researching?” he says.
They then took those categories of requests and asked the groups to assign them a pain score.
“It’s subjective, but it gives us an idea of how much time it takes them to do this particular activity, what kind of cognitive load it creates, how hard it is for them to get to the bottom of things, and how often this thing happens per day, per week, per year,” he says. “We were able to look at 150 or so workflows, put them in order, and say if we start with these five or 10 representing the greatest pain points, they are the greatest opportunity for improvement in an organization.”
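The prioritization exercise Younkin describes—scoring each workflow on time spent, cognitive load, and frequency, then ranking the list—might look something like the following. The scoring formula, weights, and example workflows are assumptions for illustration, not his actual methodology.

```python
# Illustrative sketch of ranking workflows by a subjective "pain score".
# The formula (time x frequency x cognitive load) and the example data
# are assumptions, not Younkin's actual methodology.

workflows = [
    {"name": "prior-authorization lookups",
     "minutes": 20, "cognitive_load": 4, "times_per_week": 30},
    {"name": "duplicate-record reconciliation",
     "minutes": 15, "cognitive_load": 5, "times_per_week": 10},
    {"name": "referral status checks",
     "minutes": 5, "cognitive_load": 2, "times_per_week": 50},
]

def pain_score(w: dict) -> float:
    # Weekly time burden, weighted by how cognitively demanding the task is.
    return w["minutes"] * w["times_per_week"] * w["cognitive_load"]

# Rank every workflow; the top handful are the best automation candidates.
ranked = sorted(workflows, key=pain_score, reverse=True)
for w in ranked:
    print(f"{w['name']}: pain score {pain_score(w):,.0f}")
```

With 150 or so workflows scored this way, the top five or 10 fall out of a simple sort—the subjectivity is in the inputs, not the ranking.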
This can be done in healthcare organizations as well, Younkin explains.
“Get the right people at the table and talk to them about the areas they have challenges with, and how technology can help solve some of those things from an administrative perspective,” he says. “Think about how much of healthcare spending is spent on administration.”
There’s a tremendous opportunity to open up minds about what can be done with healthcare data. But it requires a different level of understanding and care compared to other industries.
“Healthcare is unique. I’ve seen this over the years, such as during interoperability discussions. Why can’t we exchange healthcare data the way banking systems do? But at the end of the day, banking and financial transactions are numeric systems, while healthcare is multi-modal,” says Younkin. “We have so many kinds of data that it’s staggering. It’s not just what’s in the healthcare record, but your EKGs, your radiology test results, any type of test that is performed using special devices. They each generate different kinds of output. So, we shouldn’t be surprised it’s a challenge.”
As the healthcare industry looks to innovate with AI and beyond, it’s helpful to look to lessons of the past.
“I think as much good as the electronic health record did, it also tore down a lot of the physician-patient relationship,” says Younkin. “I think if we do this right, we can come up with ways that enable physicians to be able to focus 100% on the patient and the time they have together and be confident in the data that’s being collected in those visits and collated. I think it’s just a matter of time as technologists work with clinicians to piece this together in a way that can bring back human interactions, and that’s where the healing begins.”
Matt Phillion is a freelance writer covering healthcare, cybersecurity, and more. He can be reached at matthew.phillion@gmail.com.