Healthcare and AI: Enhancing Care Through Emerging Technology

By Matt Phillion

AI is certainly the topic of the moment: With generative AI systems like ChatGPT top of mind for so many organizations, it’s a topic that can’t (and shouldn’t) be ignored. But it’s also important to remember what different types of AI technology do and assess how to best leverage these technologies safely and smartly in the healthcare setting.

These new large language models (LLMs) can be used to improve natural language processing (NLP), but NLP itself isn’t new—it’s a branch of artificial intelligence that has been around for decades. Medical NLP is a more recent addition to the conversation and can be useful for ingesting and summarizing the massive amounts of patient data available in the EHR—if used correctly.

Dr. Tim O’Connell is CEO of emtelligent, a medical NLP company. O’Connell, a trauma radiologist, started the company with others to solve a specific problem that much of today’s discussion about AI has the potential to resolve: the data overload that clinicians must contend with in each new case.

“I’m an emergency and trauma radiologist, and at a busy site, I might read 50 CT scans and 100 x-rays in a shift,” he says. “All I’m given is that the patient has right lower quadrant pain, and I don’t have the time in the emergency setting to read through all of the patient’s prior history. I wanted to have something that could summarize all those reports, the known diagnoses, as a starting point as I read their current scan.”

Clinicians in all areas of medicine face volumes of unstructured text they need to parse to do their jobs.

LLMs, generative AI, and dedicated medical NLP solutions all offer different applications that can potentially move the needle. Healthcare is well positioned to make this happen.

“The great news is we’re well poised because a lot of legislation happened in the 2000s to help people implement electronic health records systems,” says O’Connell. “Now we’ve got these EHR systems, and they’ve been around for 15 years or more, so we’ve already got our health data mostly organized. There’s no analog-to-digital conversion that might have otherwise been necessary.”

The maturity of the EHR is both a benefit and a challenge, however.

“The EHR is a big filing system. You can go in there and find pathology reports, radiology reports, SOAP notes. We’ve amassed all this data. Fifteen years and hundreds or thousands of records for each patient,” says O’Connell. “And as clinicians, we’re in charge of synthesizing this. We need a tool to present it to us in usable ways. Medical NLP powered by machine learning and LLMs can extract, normalize, and contextualize unstructured medical text at scale.”
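To make the idea of extracting and normalizing unstructured medical text concrete, here is a hypothetical, minimal sketch. The synonym table and function names are invented for illustration only; production medical NLP of the kind O’Connell describes relies on trained models and standard terminologies (e.g., SNOMED CT), not hand-written rules like these.

```python
import re

# Hypothetical sketch: find a few condition mentions in free-text notes and
# normalize them to a canonical name. Real systems use ML models and full
# clinical vocabularies; this toy synonym table is for illustration only.
SYNONYMS = {
    "mi": "myocardial infarction",
    "heart attack": "myocardial infarction",
    "htn": "hypertension",
    "high blood pressure": "hypertension",
}

def extract_conditions(note: str) -> list[str]:
    """Return normalized condition names found in the note, duplicates removed."""
    found = []
    lowered = note.lower()
    for term, normalized in SYNONYMS.items():
        # Whole-word match so "mi" doesn't fire inside words like "mild"
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            if normalized not in found:
                found.append(normalized)
    return found

note = "Pt has hx of HTN and prior heart attack; BP controlled."
print(extract_conditions(note))  # ['myocardial infarction', 'hypertension']
```

Even this toy version shows the "normalize" step O’Connell mentions: two surface forms ("HTN," "high blood pressure") collapse to one canonical concept that can be indexed and linked back to the source note.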

Different specialties are going to have different needs from that tool, he notes. What a psychiatrist needs will vary greatly from what a family doctor needs. So there’s a need to be able to slice up the data in a way that’s most useful to the clinician using it.

“One of the use cases is that we need it to take these complex reports and turn them into patient-understandable reports,” O’Connell says.

But this is just one of the many use cases for medical NLP, he says. Medical NLP provides a balance between recall and precision. Clinicians need the right amount of data that is both accurate and relevant for the case at hand; this technology can strike that balance, O’Connell explains.
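The recall-versus-precision balance O’Connell describes can be illustrated with a small, self-contained example (the extractor and note IDs here are hypothetical): recall measures how many of the truly relevant items were found, while precision measures how many of the found items were actually relevant.

```python
# Illustrative only: precision and recall for a hypothetical NLP extractor
# that flags chart notes mentioning a given diagnosis.

def precision_recall(predicted, relevant):
    """Compute (precision, recall) given sets of predicted and relevant items."""
    predicted, relevant = set(predicted), set(relevant)
    true_positives = predicted & relevant
    precision = len(true_positives) / len(predicted) if predicted else 0.0
    recall = len(true_positives) / len(relevant) if relevant else 0.0
    return precision, recall

# The extractor flagged notes 1, 2, 3, 5; notes 1, 2, 4, 5 truly mention it.
p, r = precision_recall({1, 2, 3, 5}, {1, 2, 4, 5})
print(p, r)  # 0.75 0.75
```

An extractor tuned for maximum recall floods the clinician with marginal matches; one tuned for maximum precision risks missing relevant history. The balance O’Connell describes is choosing an operating point between the two that fits the clinical task.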

“The key is that it links information back to the original data in the patient’s chart,” he says.

Physician augmentation, not physician replacement

The focus needs to be on making the clinician’s job easier and better rather than taking the job out of the hands of a human doctor, O’Connell says.

“The wrong way to think about it is physician replacement, and the right way is physician augmentation,” he says.

O’Connell mentions a famous comment by Geoffrey Hinton, considered the godfather of neural networks, who said in 2016 that we’d no longer need radiologists because AI would eventually replace them. Hinton has since famously recanted that statement.

“What we do as a model is a detection task,” says O’Connell. “We’re looking for a small detail, a white dot in a sea of white dots. Humans doing a detection task face a significant limitation, and what I want is an AI looking over my shoulder—for example, ‘You missed what might have been a nodule’—and I can take that output, look at it more closely, and assess it. This augments my role as a physician and lets me do my job better.”

This applies to other areas of medicine as well. An AI tool could help physicians know a patient’s history before they walk in the door by offering a concise summary of the medical record.

These tools aren’t limited to looking at the past, O’Connell notes. In the right circumstances, a model that can look at surgical reports, patient history, and genetics could be used to identify patients who are predisposed to certain conditions, so physicians can get ahead of health issues before they surface.

This could apply not only to individual patients but also to patient populations in medical research.

“I was talking to a company that said, ‘We have 1.5 billion patient records.’ When you get to numbers that high, it’s hard to truly look at it all. There are a lot of problems, diseases, outcomes, prognoses, and diagnoses, and the answers are in those records,” but there aren’t enough hours and professionals to parse them all, O’Connell says. “This is where medical NLP steps in to provide the right amount of information to the clinician or researcher while linking back to the data in the record.”

Proceeding with necessary caution

With these last two use cases, there arises a very complex issue around consent, O’Connell says.

“We need really strong guardrails on consent,” he says. “What is informed consent for use of patient data? We need to be very careful and respectful of the ethical and legal issues involved.”

We know there are issues around copyrighted images and text being used in generative AI right now, so we need to consider the requirements and regulations in healthcare to protect patient rights, autonomy, and privacy.

“There’s great technology out there for anonymizing patient records, but it’s one thing to take the name out. What about demographic information? Age? Gender at birth? This is really important. You need the information that is clinically relevant, you want to anonymize it, and you don’t want to expose the patients to privacy loss,” says O’Connell.
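A minimal sketch makes O’Connell’s point concrete. Simple redaction can strip obvious identifiers, yet quasi-identifiers such as age, sex, and a rare diagnosis can still single a patient out. The patterns and example note below are invented for illustration; real de-identification must cover far more than this (HIPAA’s Safe Harbor method, for instance, lists 18 categories of identifiers).

```python
import re

# Hypothetical sketch: pattern-based redaction of a clinical note.
# It strips names after common titles, dates, and phone-like numbers,
# but deliberately shows what it does NOT catch: quasi-identifiers.

def redact(note: str) -> str:
    """Replace titled names, dates, and phone numbers with placeholders."""
    note = re.sub(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+", "[NAME]", note)
    note = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", note)
    note = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", note)
    return note

note = "Mr. Smith, seen 3/14/2024, callback 555-867-5309. 87yo male, rare diagnosis."
print(redact(note))
# [NAME], seen [DATE], callback [PHONE]. 87yo male, rare diagnosis.
```

The direct identifiers are gone, but "87yo male" combined with a rare diagnosis may still re-identify the patient in a small population, which is exactly the gap between removing a name and true anonymization.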

The governments of the world are going to have to step up and weigh in here, he explains. Fortunately, recent work by the U.S. and other governments shows that privacy and AI are on their radar.

“I’m really encouraged by some of the latest regulations, acts, and laws that are coming out in the U.S. that actually show that the government understands this technology,” says O’Connell. “I’m happy the government is on top of this, but I think it’s going to need great people in academia, in the industry, and more to work with the government to help them stay ahead of the curve. There is danger here and we need regulations around it.”

O’Connell thinks we’ll potentially see changes in the next 24 to 36 months around better clinical applications that can make these technologies more interactive and usable for human users.

“Data sets are being made available that can enable new applications for things we haven’t even thought of yet,” says O’Connell.

Specifically, that idea of predictive medicine holds a lot of potential. If an ounce of prevention is worth a pound of cure, being able to use a predictive model for patients who consent to it to look for risks of cancer or heart disease can help get ahead of future diagnoses.

“Ideally, medicine should be aiming for that,” he says. “We’re already trying today but in a very analog way with family histories and so on.”

How can healthcare professionals stay educated and be ready for how this technology may evolve and advance in the coming months and years?

“My hope is that my colleagues maintain a gimlet eye toward this so that they’re prepared to say, ‘No, this isn’t a good idea,’” says O’Connell.

The adage in tech is often move fast and break things, he notes, but in healthcare that mindset can put patients at risk, so moving with caution and intent is pivotal.

“Often people will have access to cool new tech, and when you have a hammer, every problem looks like a nail,” says O’Connell.

Smart implementation of new technology can be a game changer, but healthcare professionals will benefit from understanding its ins and outs so it can be applied the right way in the right places—and kept out of the wrong ones.

“We never want to sacrifice the human element of medicine. We need to keep that core idea of privilege in mind. It is a privilege to be a provider, and we never want to give up that privilege to a machine,” he says.

Matt Phillion is a freelance writer covering healthcare, cybersecurity, and more. He can be reached at matthew.phillion@gmail.com.