How Presbyterian Built a Governance-First Strategy for AI
By Christopher Cheney
Health systems are under increasing pressure to deploy AI in clinical care, but many organizations remain stuck in pilot mode, unable to scale tools without adding complexity or risk.
For Presbyterian Healthcare Services (PHS), the challenge was not whether to adopt AI, but how to do so in a way that improves clinical decision-making without disrupting workflows or compromising patient safety.
The organization’s approach, anchored in governance, clinical validation, and a clear focus on operational problems, offers a roadmap for chief medical and information officers navigating an increasingly fragmented AI landscape.
Rather than treating AI as a standalone solution, PHS positioned it as assistive infrastructure within primary care, designed to reduce cognitive burden and support clinicians managing complex patients.
Presbyterian’s AI strategy began with a failed attempt to build decision support internally within its Epic EHR, an experience that clarified the organization’s core challenge.
The issue was not a lack of tools, but the growing complexity of primary care.
Clinicians were managing patients with multiple comorbidities while navigating fragmented data across labs, medications, specialist notes, and payer requirements. Reviewing charts and identifying care gaps was a time-consuming, often inefficient process.
The sheer number of clinical problems per patient added to that burden.
“When I look at our patient population in the primary care space, patients can have as many as a dozen clinical problems,” Walker says. “For clinicians who work with patients who have multiple clinical problems, it is very time consuming to go through patient charts to review the specialists patients are seeing, lab results, imaging, and clinical notes.”
This experience reframed PHS’ strategy: Instead of building isolated tools, the organization needed a scalable way to synthesize patient data and support decision-making across multiple conditions within the clinical workflow.
Governance as the foundation for AI adoption
As an early adopter of AI in primary care, Presbyterian prioritized governance before scaling any technology, Walker explains.
The health system established a multi-layered governance structure that included:
- An AI governance committee
- Data access and security oversight
- Legal and compliance involvement
This structure ensured that AI adoption was evaluated not only for functionality, but also for safety, data integrity, and regulatory alignment.
“That’s one of the reasons why we started with a pilot,” Walker says. “We wanted to do our own clinical validation rather than just trusting AI.”
“Clinician feedback was one of the elements of our stepwise approach to the pilot,” Walker says. “With generative AI in the clinical space, we wanted to be cautious. From a patient safety and quality perspective, we wanted to make sure we were not putting any patients at risk.”
A critical differentiator in PHS’ approach was its focus on workflow integration.
Rather than introducing a separate tool, the organization worked to embed AI within the Epic environment in a way that aligned with how clinicians already practice.
“When I was helping design where the platform should sit in Epic, I relied on my clinical experience to ensure it would not be disruptive,” Walker says.
This approach reflects a broader strategic principle: AI adoption succeeds only when it reduces friction, rather than adding new layers of cognitive or administrative burden.
Why primary care was the starting point
PHS deliberately chose to deploy the AI platform with primary care clinicians first, rather than in other care settings, according to Walker.
“For PHS, primary care is an area where we struggle with access to care. So, we were looking for an AI tool that could improve patient visit efficiency,” Walker says.
“We wanted to give primary care clinicians a robust AI tool before we expanded AI tools to other specialties.”
Over the long term, PHS views AI tools as assistive technology for clinical care innovation rather than as a replacement for staff members.
“We are focused on clinical decision-making support because we are not near the point where AI tools are going to be making decisions,” Walker says. “We want AI tools to decrease cognitive burden for clinicians, reduce administrative burden, and increase the efficiency of conducting chart reviews.”
At this point, it is inconceivable that AI tools could replace clinicians, according to Walker.
“At the end of the day, clinicians are going to be the ones making medical decisions, with AI tools providing decision support based on information drawn from patient charts and clinical guidelines,” Walker says. “For the foreseeable future, clinicians will be in control and make decisions rather than AI tools making decisions for them.”
Lessons learned for CMOs and CMIOs
PHS’ experience highlights several key lessons for health systems seeking to scale AI in clinical care:
- Define the problem before selecting the technology: AI adoption should be driven by clearly defined clinical and operational challenges, not vendor capabilities.
- Build governance early, not after deployment: A structured governance model is essential for evaluating safety, ensuring compliance, and maintaining clinician trust.
- Start with a controlled pilot and validate clinically: Small-scale pilots allow organizations to test reliability, gather clinician feedback, and refine workflows before scaling.
- Prioritize workflow integration over feature sets: AI tools must fit seamlessly into existing clinical workflows. If they add friction, adoption will fail.
- Treat AI as assistive infrastructure: Positioning AI as a support tool, not a staff replacement, helps drive adoption and aligns with current clinical realities.
- Focus on high-impact use cases first: Primary care, with its complexity and time constraints, offers a strong starting point for demonstrating value.
Christopher Cheney is the CMO editor at HealthLeaders.