Healthcare and Cybersecurity: Math Always Favors Attackers

By Matt Phillion

Data breaches in healthcare aren’t new, and the time between headline-making breaches is never long. The industry continues to try to understand why it remains so vulnerable, how these breaches happen, and why constant vigilance is the key to keeping patient data safe and staying out of the news.

The numbers work against organizations and in favor of bad actors, says Jeff Macomber, CTO of SightView and former chief architect at ModMed.

“This is true for all of cybersecurity, not just healthcare: A company protecting its assets needs to be perfect, but for an attack to be successful, there only needs to be one mistake,” says Macomber. “One mistake can get that door cracked open a tiny bit and get them into your network.”

Perfection is a hard ask for humans, Macomber explains.

“Healthcare in general is a human practice. You’ve got front desk people, office folks, billers, there’s a lot of people involved in the giving of care inside and outside the patient experience, and they all need levels of access to protected health information to take care of those patients,” he says. “But an attacker can send 100 emails to a single medical practice in a single week, and the attacker just needs one person not to be paying attention to allow them to install malware and start an attack or an extraction process.”
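Macomber's point about volume can be made concrete with a little probability. A rough sketch, assuming (purely for illustration) that each phishing email independently has a 1% chance of fooling its recipient:

```python
# Probability that at least one of n independent phishing attempts succeeds,
# given a per-attempt success probability p. The 1% figure is an invented
# illustrative number, not a measured click rate.
def breach_probability(p: float, n: int) -> float:
    # Complement rule: P(at least one success) = 1 - P(every attempt fails)
    return 1 - (1 - p) ** n

# Even a modest 1% per-email success rate compounds quickly over 100 emails.
print(round(breach_probability(0.01, 100), 2))  # ~0.63
```

The defender has to win every round; the attacker only once, which is why the odds tilt so heavily with scale.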

Bad actors have had to evolve with the times. More PHI is hosted in the cloud, and levels of protection are much higher than they were in the past. Where once a successful attack could expose an entire system or database, modern techniques make it nearly impossible to get access to everything, and better access controls are in place as well. But on the flip side, the cloud has become a huge repository of data.

“So if they can get in there…and they do. The one common path is a misconfigured S3 bucket on AWS or another cloud service,” says Macomber.
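The misconfiguration Macomber describes is often as simple as a bucket whose public-access guards are switched off. A minimal sketch of the kind of automated check an organization might run, evaluating the four flags in an S3 public-access-block configuration (the dict shape mirrors what a cloud API client returns; actually fetching the configuration is omitted so the example stays self-contained):

```python
# Flags from an S3 "Block Public Access" configuration. When any of them is
# False or absent, the bucket can potentially be exposed to the internet.
REQUIRED_GUARDS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def audit_bucket(name: str, config: dict) -> list[str]:
    """Return a warning for each guard that is missing or disabled."""
    return [
        f"{name}: {guard} is not enabled"
        for guard in REQUIRED_GUARDS
        if not config.get(guard, False)  # a missing flag counts as disabled
    ]

# A fully guarded bucket produces no warnings...
print(audit_bucket("phi-archive", {g: True for g in REQUIRED_GUARDS}))
# ...while partial configuration surfaces one alert per disabled guard.
print(audit_bucket("phi-archive", {"BlockPublicAcls": True}))
```

Running a check like this on a schedule turns a silent misconfiguration into an alert before an attacker finds it first.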

But attackers continue to use a variety of tried-and-true tools and tactics to make their way past cybersecurity defenses that are profoundly human-based: email, text messages, escalation through social engineering.

“Those are human tasks, and if you can get to the right person,” you can get in, explains Macomber.

Making matters worse, AI has become a helpful tool for attackers to use these tactics at scale. They can automate an email attack that responds with a believable series of follow-up messages, creating a backstory that can catch the human target when they’re not paying attention.

“These messages can be tailored: We’ve researched him, we know where he lives, and we can start to encourage a conversation in a way that doesn’t raise red flags the way it used to,” says Macomber.

Leveraging the same tools as attackers

A recent data breach involving an EHR and its partner has opened up discussion about what modern cyber risks for healthcare data look like.

“Partners are a great way to get access to other people’s data. It has happened with airlines, with e-commerce sites, and others, where partners have exposed a lot of data,” says Macomber.

There are processes in place to vet vendors and partners, and processes and policies to make sure that partners only have access to the data they absolutely require, but partnerships can be a major risk vector, Macomber explains.

“We do our background checks, but at a core level, there is an assumed level of trust that is automatically granted between two existing companies, and we don’t know what the result of that will be. There’s always risk,” he says.

There are protective measures organizations can put in place to help mitigate risk, whether that means limiting access to specific time frames or automating alerts that detect when data that should stay private is about to be made public.

“But a lot of it comes down to that human element,” Macomber says. “It’s always going to be the weak link.”

But to a certain degree, the tools being used to attack healthcare organizations can also be leveraged to protect them, he notes.

“If they have an AI toolkit that attacks in a certain way, we develop an AI logic that detects when something is not behaving as a human should in our system,” Macomber says. “The base of that technology has been around a long time, but we have to make those tools smarter.”
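One simple form of detecting "something not behaving as a human should" is flagging activity far outside a user's historical baseline. A toy sketch using a z-score over hypothetical per-hour record-access counts (the thresholds and data are invented for illustration; production systems use far richer behavioral models):

```python
import statistics

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag the current count if it sits more than `threshold` standard
    deviations above the user's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:  # no variation in history: any deviation is suspicious
        return current != mean
    return (current - mean) / stdev > threshold

# A clinician who normally opens 10-20 charts per hour...
baseline = [12, 15, 11, 18, 14, 16, 13, 17]
print(is_anomalous(baseline, 16))   # False: within the normal range
print(is_anomalous(baseline, 400))  # True: looks like bulk extraction
```

The point is not the statistics but the framing: humans have rhythms, and automated extraction does not.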

Organizations can and should also adopt what Macomber calls “nesting dolls” of security.

“It should be that one mistake won’t get them in. It would require a series of faults to get all the way into the system,” says Macomber. “Humans are going to make a mistake, but it’s the breadth of that mistake that we can control.”
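The "nesting dolls" idea is defense in depth, and its arithmetic is multiplicative: an attacker must get through every layer, so independent layers compound against them. A rough sketch, assuming each layer independently fails 5% of the time (an invented number for illustration):

```python
from math import prod

def full_breach_probability(layer_failure_rates: list[float]) -> float:
    # A full breach requires every independent layer to fail,
    # so the individual failure probabilities multiply.
    return prod(layer_failure_rates)

# One layer failing 5% of the time is a serious exposure...
print(full_breach_probability([0.05]))
# ...but three independent layers shrink it to roughly 1 in 8,000.
print(full_breach_probability([0.05, 0.05, 0.05]))  # ~0.000125
```

This is the "series of faults" Macomber describes: each additional, genuinely independent layer narrows the blast radius of any single human mistake.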

We need to be able to visualize what humans are doing in the system and how to detect what isn’t human, he explains. The industry also needs to address the weight of its own technology.

“Healthcare has a problem, and that’s drag on technology,” he says. “So much of the tech we use today has been in use for decades. The math isn’t there for doctors’ offices to be individually updating their tech on a regular basis; the costs are too high.”

An unexpected upside to the consolidation of smaller practices is the opportunity to have a centralized CIO or CISO whose knowledge extends beyond HIPAA and who can strengthen the organization’s overall IT posture.

“One of the big risks in general is most offices are self-hosting their own IT infrastructure but don’t have IT staff and can’t keep up. Those mistakes compound over time,” says Macomber. “They may have a bunch of zero days unpatched, and they haven’t been attacked yet because they’re not a big enough target or by luck alone. If they don’t take security seriously enough, they’re going to pay for it eventually.”

The impact on staff

In healthcare, Macomber notes, everyone is always in a hurry: always on the clock to see the next patient, get the next referral sent out, the next prescription filled. Helping staff understand cyber-risks boils down to an approach with at least two prongs, he explains.

“One is training. Simulated attacks, for example. That’s relatively low risk because it’s not going to have any negative consequences, but it puts security at the front of your mind,” Macomber says.

Training should be annual at a minimum, and frequent enough that the lessons learned are always top of mind for the end users.

The second prong is improving your layers of security.

“Moving to things like YubiKeys or physical security systems, things that are one step further than multi-factor authentication,” he says. “This provides that extra layer of underlying cryptography bound to a specific piece of hardware.”

The industry has a few competing priorities that stand to make healthcare less secure without the right preparation.

“Interoperability is a huge point that is great for health IT. We can exchange information in interesting ways, but it is going in two directions. One is more patient approval or giving them the ability to remove themselves from sharing information, and some of that is perfectly good for privacy reasons. You don’t necessarily need your eye doctor to know about mental health or OB-GYN care,” says Macomber. “But on the flipside, that’s making it more complex, and complex is the enemy of security. Anything that’s too complex becomes insecure by accident.”

In some ways, interoperability is a double-edged sword, as organizations need to filter data based on the preferences of each patient, often at scale.

“It’s much more of a complex, spider-web-like system where you can’t just point at it and say it’s secure like a locked box. Now that box has holes all over the place. And what’s the appropriate amount of security to patch those holes?” says Macomber.

There are ways to address this, but whatever option an organization chooses, it’s going to take either more work, or more money.

“Either staff at an aggressive level, or use tools that look at the issue holistically,” says Macomber. “And that holistic tool is a cost saving, but how consistent is it? If you have 100 ways of connecting your system and you secure 99, you still are at risk. Every system in the world is driving toward complexity and the risk is that it becomes too complex and you lose control. Use tools you understand, not just something you can throw at the problem.”

Matt Phillion is a freelance writer covering healthcare, cybersecurity, and more. He can be reached at matthew.phillion@gmail.com.