Securing the Brain Behind the System: Why AI Governance Is the Next Frontier in Patient Safety
By Michael Gray
We’re entering a new frontier in healthcare: One powered by data at every stage, from patient recordkeeping and diagnostics to population health and billing. The use cases and advantages are numerous and well documented. The risks, however, get far less airtime, leaving organizations vulnerable in ways they can’t yet imagine.
Imagine a hospital that relies on an AI-powered triage assistant, celebrated for its quick decision-making and life-saving pattern recognition that catches anomalies too subtle for the human eye. Now imagine a bad actor getting access to that system, inadvertently unlocking the same troves of HIPAA-protected patient records, clinician prompts, and hospital data that powers its intelligence.
This isn’t hypothetical. According to IBM, 13% of organizations have already experienced breaches of AI models or applications, and 97% of those lacked proper AI access controls.
As AI becomes central to diagnostics, workflows, and patient engagement, securing the models themselves must become a pillar of patient safety and quality assurance.
Why AI models are a new type of risk
Every new technology creates a new attack surface. Hackers know that shifts in the tech stack often open fresh vulnerabilities, especially during times of transition, when defenses are still adapting. And with AI, the risks are amplified by a lack of mature regulation or standards. This is where CIOs, CISOs, and other IT leaders need to act proactively, not wait for mandates to catch up.
In healthcare, models trained on patient health information (PHI), clinical notes, or imaging data represent a new and often overlooked exposure point. Even when core systems like EHRs are locked down, the models built on top of them can leak or distort sensitive information in surprising ways:
- Data leakage. Models can unintentionally return fragments of sensitive information during queries, exposing PHI to anyone interacting with the system.
- Model inversion. Hackers can use outputs to reverse-engineer and infer details about the underlying training data—essentially extracting private patient information without ever accessing the original database.
- Prompt injection. Attackers insert malicious instructions into prompts to override the model’s safeguards and manipulate its responses.
- Data poisoning. By seeding corrupted or false data into training sets, hackers can distort or manipulate model behavior and outputs over time.
These breaches don’t always involve direct data theft. In some cases, the greater danger is subtle manipulation: Altered recommendations, biased predictions, or corrupted decision support that erodes clinician confidence. The result is compromised care quality, misdiagnoses, or a loss of trust in AI-driven systems that healthcare increasingly relies on. In other words, protecting AI isn’t just about IT resilience. Instead, it’s about preserving patient safety, clinical integrity, and overall trust.
Common defense gaps in healthcare AI deployments
Healthcare institutions are no strangers to breach risk or data privacy concerns. Yet even in these well-practiced environments, clear trends in defense gaps have emerged.
Many providers are successfully tuning models on PHI, but few have documented data minimization policies that define exactly what patient data a model can access, how long it’s retained, and how it’s ultimately disposed of. Without that clarity, PHI can end up embedded in model parameters or logs, lingering beyond its intended use.
Others operate shared or centralized AI environments, often built for efficiency, where models trained on different datasets or service lines coexist in the same infrastructure. That creates unnecessary crossover risk, where a single compromised system could expose sensitive information across multiple departments or applications.
Others still lack formal AI access controls and audit mechanisms. In some cases, developers, auditors, analysts, or third-party vendors retain credentials or API access long after deployment. This absence of role-based controls directly conflicts with HIPAA’s minimum necessary standard.
As AI continues to move deeper into diagnostic and operational workflows, addressing these weaknesses is paramount.
Four pillars of AI security and governance in healthcare
While the risks may sound daunting, the good news is that most healthcare organizations already have the foundational pieces in place. The key is applying familiar data governance and compliance principles to this new AI context. Simply understanding what to protect and how makes healthcare systems far more resilient and action ready.
Here are four core pillars to build from today:
PHI minimization and data policy
One of the biggest ongoing risks is how protected PHI is handled after model deployment. PHI can persist invisibly inside models or logs unless clear policies govern its lifecycle.
To mitigate this, organizations should:
- Limit data used in training to what’s strictly necessary for the task.
- Fine-tune models so they are fit-for-purpose and isolated from broader data sources.
- Establish retention and destruction timelines in line with HIPAA requirements.
- Ensure all data used for model training or evaluation is properly de-identified, tokenized, encrypted, or pseudonymized.
These are natural extensions of what most healthcare organizations already do to protect data under HIPAA. Now, they just need to be applied with AI in mind.
Role-based access and segregation of duties
Applying the principle of least privilege is crucial. This is a cybersecurity concept that means that every level of access should be justified on a per-user basis—granting each individual, system, or process only what’s required to perform their specific role.
Equally important is segregation of duties, separating the teams that train, deploy, and validate AI models. This reduces insider risk, supports compliance, and ensures that no single user or team holds unchecked authority over a model’s data and outputs.
Segmented AI environments
The same concept of segmentation applies to your AI environments themselves. Instead of maintaining one large, monolithic model that spans multiple service lines, create smaller, purpose-built models or sandboxes by department or dataset: Radiology, claims, pharmacy, and beyond.
This approach limits exposure if one model is compromised and reduces the “blast radius” of any incident. It also simplifies auditing, since each environment can be governed and monitored independently.
Continuous monitoring and validation
Security doesn’t end at deployment. Implement automated, continuous logging and auditing for all model access and inference requests.
Monitoring should include:
- Alerts for abnormal access patterns or query spikes.
- Regular tests for data leakage or model manipulation.
- Periodic reviews of access credentials and permissions.
Proactive validation helps detect and contain potential breaches early—ideally before they affect patient care or compliance posture.
Taken together, these four pillars establish a governance model that balances innovation with protection. They also align closely with HIPAA and OCR expectations around access control, auditability, and ongoing risk assessment, helping healthcare organizations strengthen patient trust while staying ahead of emerging AI regulations.
Beyond compliance: Building trust and safety in AI-driven care
Compliance is an essential checkpoint, but it’s not the finish line. The ultimate goal isn’t to tick the checkboxes on regulatory compliance paperwork, it’s to achieve true security and sustained trust.
To get there, healthcare organizations should:
- Recognize that the goalposts move. AI technologies evolve rapidly, and so do the threats that accompany them. Your security posture must adapt just as quickly so protections evolve with each new deployment or integration.
- Protect the ethical and reputational imperative. Even when systems are secure, many patients feel uneasy about AI in their care. Communicating transparency, accountability, and safeguards helps build confidence in both the technology and the clinicians who use it.
- Embed governance across the AI lifecycle. From data ingestion and training to inference and lineage tracing, governance needs to be built in by design.
Getting AI security right protects people as much as, if not more than it does data. Ultimately, an AI model trained on patient data becomes part of the care team. It deserves the same vigilance, oversight, and ethical consideration that healthcare leaders apply to any clinician or system entrusted with patient well-being.
Michael Gray is CTO at Thrive. He has been a strong technology leader at Thrive over the past decade, contributing in the consulting, network engineering, managed services and product development groups while continually being promoted up the ladder. Gray’s technology career began at Dove Consulting and later Praecis, a biotechnology startup that was acquired by a top-five pharmaceutical firm in 2007. Serving in his current role, he is now responsible for Thrive’s R&D, technology road-mapping vision, while also heading the security and application development practices. He is a member of several partner advisory councils and participates in many local and national technology events. Gray has a degree in Business Administration from Northeastern University and he also maintains multiple technical certifications including Fortinet, Sonicwall, Microsoft, ITIL, Kaseya and maintains his Certified Information Systems Security Professional (CISSP).