The Future of AI in Healthcare: Transparency and Adoption

By Matt Phillion

AI is top of mind everywhere right now, but it’s not without known limits: researchers have found limits to how accurate ChatGPT can be, particularly in areas like diagnosis, while studies have also found that 60% of U.S. adults are uncomfortable with healthcare providers using AI in their own care.

But there are potential benefits, too. It’s estimated that AI applications could cut the cost of healthcare in the U.S. by $150 billion in 2026 and help alleviate challenges around staffing shortages.

How can we merge the good with the challenging—and is healthcare ready to fully step into the ring with AI?

“I see a lot of potential for AI to be used in a good way for diagnostics,” says Jenny Yu, MD, chief health officer at Healthline Media. “It can potentially ease the administrative burden impacting physicians and other healthcare professionals that’s contributing to burnout.”

From a diagnostic perspective, AI can help predict disease, whether through radiology or other methods of detection, but it doesn’t eliminate the need for human interaction.

“If you are using imaging for lung or breast cancer, for example, there’s the ability to use algorithms, or look at pixelation on an image, and build models that can detect tiny changes,” says Yu. “There are a lot of times where this can be a second set of eyes for a radiologist or pathologist using the images to diagnose the patient. When you aren’t sure about a call, you can leverage predictive analytics to help.”

The human factor is still irreplaceable, though, Yu notes.

“AI, machine learning, these are trained by humans, and it’s people who are gating it, developing it,” she says.

But things are moving fast, and while healthcare thinks a lot about ethics in its work, many more conversations about ethics and regulation are needed going forward.

“Things are moving fast, but we need a parallel path for regulatory standards. We need to think about the intent and intention behind how we’re going to use this technology and these tools. We need standards, policies, and regulations to protect us from bad actors,” says Yu.

The benefits of easing administrative burden can’t be overstated, though. Beyond imaging and diagnostics, there are call center lines, preauthorization of drugs or procedures, medication refills, and many other pain points in the system that automation and AI could alleviate.

“These generative AI models could almost serve as a digital agent that can help people find the right information,” she says. “But again, it needs to be built with the right intention, with proper checks and balances.”

Considering accuracy

While AI has its place, Yu notes that it has limitations as a tool for seeking health knowledge.

“I just don’t think the accuracy is there yet when you’re searching for answers or trying to get knowledge for healthcare purposes,” says Yu. “Researchers have found that ChatGPT only achieves about 72% accuracy. When it comes to helping people make the right diagnosis, that’s dependent on input. I’m a physician and I know the questions to feed it to get more accurate answers, but I can imagine someone seeking answers who doesn’t have expertise or training. They’re not going to ask it the right question, and when you don’t ask the right question you run the risk of the AI hallucinating.”

Generative AI is a different use case from other forms of AI and requires a nuanced view, Yu says. While other forms of AI have the capacity to enhance diagnostic capabilities for providers, generative AI “isn’t quite there yet,” she says.

The ethics question

Healthcare, perhaps more than other industries, takes questions of ethics seriously. But is the industry better, or worse, prepared to answer those questions when it comes to AI as a tool?

“I’d flip the question and ask: is healthcare even ready for the advances coming at them?” says Yu. “We’re still using faxes to share lab work, and phone conversations, paper, and pamphlets to educate patients. We’re not necessarily using the whole of digital technology to its full capacity yet.”

We’re seeing more and more technology, of course—apps for setting up appointments, online portals to view lab results—but healthcare can be slow to adopt technology on the administrative side.

“We’re not prepared from a data and infrastructure standpoint to bring in AI alongside the technology that already exists,” says Yu. “One concern to call out is: how is AI going to integrate into other technological advances?”

Going back to the question of ethics: yes, healthcare thinks about ethics all the time in terms of decision-making around a person’s life, but it has not always thought about ethics in terms of digital tools, Yu notes.

“We need to gather the troops and have that conversation. Yes, we understand ethics in healthcare and we have boards and organizations for that perspective, but are we actually ready to have that conversation in the digital tool space?” she says.

Transparency and innovation

There’s no doubt AI will continue to work its way into healthcare. The industry is curious and innovative, and just like testing new drugs and devices, any option to potentially improve patient care will be examined.

“You get new information, digest it, get opinions, and then trial and test it. It’s the scientific method, and this is what we apply to AI,” says Yu. “We look at digital tools from a system and administrative value perspective, but also what is the value to the patient? With any tool, AI or otherwise, you’ve got to put the patient at the forefront and think about what the value-add is. What is it doing to the standard of care? How is it adding value to the patient?”

This leads to the need for a conversation between all the stakeholders, be they payers, administrators, or practitioners, to lay the groundwork for what value AI offers in this space.

This also brings up the question of trust. As noted earlier, many patients don’t fully trust AI when it comes to their health and personal data.

“Transparency is key,” says Yu. “Oftentimes it comes down to communication. If the industry doesn’t get the right message out, mistrust can happen. With AI, we need to be super transparent about how the technology is being used.”

This means making the patient part of the conversation.

“We can build all these things but sometimes we forget to involve the most important person, and that’s the patient,” says Yu. “It’s important to let them know that this doesn’t need to be a black box. We’ve got to help them understand the tool’s potential and reassure them and assuage any fears that they might have.”

Before introducing a tool like AI, there’s a responsibility and accountability to educate the patient. Gone are the days when patients would simply defer to the system’s authority; they are more educated, curious, and questioning than ever.

“It’s good that the patient questions and challenges and wants to learn and better understand their care,” says Yu. “And it’s on us to find the right language to help them understand those things. That’s what we do as a company, putting education at the forefront and focusing on empathy and transparency.”

There are ways to ensure data privacy, but it’s important for providers to explain how data is processed, ingested, and used, and how it’s kept from being traced back to the patient in a discernible way.

“There are patients who don’t want their personal data somehow contributing to something less than good, or something they haven’t signed up for or consented to,” says Yu. “We’ve got to be transparent about it. The fear comes from seeing AI and other digital tools as a black box and not knowing how they work. The more we educate people, the less fearful I think they will be.”

AI has caught fire across many industries, but Yu expects the next step is for that fire to settle into a simmer, with the focus turning to adoption and acceptance, and to applying the technology to the right use cases.

“I hope this can be a good tool to help people solve some of the huge challenges within healthcare, whether that’s access, cost, transparency, or the consumer experience,” she says. “I am excited for all of the possibilities, especially around the work being done on the diagnostics side that could help with early detection of devastating diagnoses.”

Matt Phillion is a freelance writer covering healthcare, cybersecurity, and more. He can be reached at matthew.phillion@gmail.com.