Industry Focus—Culture of Safety: Just Culture…It’s Much More Than an Algorithm

This Industry Focus article on culture of safety is sponsored by Verge.

Editor’s note: In this excerpt from the HCPro book Building a High-Reliability Organization: A Toolkit for Success, Second Edition, authors Gary L. Sculli, MSN, ATP, Douglas E. Paull, MD, MS, FACS, FCCP, CHSE, and David Sine, CSP, ARM, CPRHM, DrBE, look at how adopting a Just Culture can help healthcare facilities work toward becoming high-reliability organizations. A Just Culture is a necessary component of a Culture of Safety. Visit http://hcmarketplace.com/high-reliability for more information.

Note to reader:

Recently, I was in a conference room observing a hospital CEO talk to a group of frontline clinicians. He was talking about the facility’s journey toward high reliability and its adoption of a Just Culture. He said, “When I look at an incident report or some report of harm, the first thing that often pops into my mind is, ‘We need to fire this person. How could this happen?’ Then I step back for a minute and remember the principles of a Just Culture and the questions I have to ask myself. I now know that to discipline someone would be the worst thing to do, and most often, it really has to do with some problem in our culture or our system.”

I would submit to you that a statement like the one above from a CEO is perhaps the most telling indicator of whether a culture has changed or will change. Before anything else can happen, organizational leaders must possess right-headed thinking on the topic of human error.

Although we can examine many elements in the chain of events depicted in this case study, let’s focus on the resident physician’s actions. As a leader, perhaps the chief of medicine or chief operating officer of a healthcare institution, what is the best course of action to take with the resident in the wake of the patient’s death? No doubt the patient’s family was devastated when they were told that their loved one did not survive the surgery, a death that was, after all, the result of a grave and undetected medication error. Perhaps the news media has gotten wind of the story and aired a series of damaging reports about the quality of care in your facility (Headline: “Physicians at XYZ Medical Center Texting Friends Instead of Caring for Patients”). Perhaps you have been contacted by the family’s attorney, who is laying the groundwork for a lawsuit. When the resident realized that her actions led to the patient’s demise, she was distraught and at times inconsolable. In the days after the event, she felt as though her colleagues were silently questioning her competence. She felt alone, scared, and concerned about legal action and her future as a physician.

The truth is that this scene plays out all too often in healthcare organizations. Specifically, the question here is, should the resident be disciplined? Should she be terminated? After all, it’s pretty irresponsible to be texting friends when you are supposed to be discontinuing medications. When one puts it that way, it seems that discipline is in order. However, this case is not that simple, and it may very well be that disciplining this clinician is the worst thing her supervisors could do, with harmful effects on the organization’s culture for years to come. Before we decide on a course of action here, let’s first discuss some important concepts about high-reliability organizations and human error.

Zero error and error management

In a high-reliability culture, a paradigm exists that simply states that it is not realistic to expect zero human error. Human error is ubiquitous; it is inevitable. As much as we dictate policy and guidelines, as much as we practice and train, humans will commit errors; it is a constant (Reason, 2008; Helmreich et al., 1999). Whether errors are in the form of cognitive slips (unintended failures of execution, such as a pharmacist selecting the wrong dose when filling a prescription) or mistakes (misapplying a problem-solving method that is part of our knowledge and expertise, such as a physician treating a patient for myocardial infarction when the cause of the chest pain is a pulmonary embolus), they will never be fully eliminated—never.

In the course of conducting team training sessions with emergency department physicians, we have asked the following question: How many of you believe that zero human error is realistic? Can this be achieved? In most cases, everyone agrees that the answer is no. In one particular session, however, a physician raised his hand and stated, “I believe that this is possible,” meaning that zero human error can be achieved in ongoing operations. He went on to explain that in the intensive care unit (ICU), the central line–associated bloodstream infection (CLABSI) rate was zero and had been for several months. Although this is a desirable outcome, it is not the point of the “zero error” question. What this physician was referencing was not error, but outcomes. While the CLABSI rate is zero now, it will not stay that way; there will be peaks and valleys. We would also assert that in ordering the central line; inserting it; providing routine care, maintenance, and access; and discontinuing the line, minor errors were most likely made or nearly made. What keeps the CLABSI rate low is not that these errors were eliminated, but that they were detected and managed.

A high-reliability culture accepts that human errors will occur. It’s fine to strive for zero error, but we must be realistic and understand that in a system where humans are interacting with information, equipment, and each other in high-stress environments, errors will happen. It’s better to think about them before they occur and be ready to catch them or respond to them effectively after they occur. The next paradigm that exists in a high-reliability culture is, therefore, that errors must be managed. Error management is critical to high-reliability operations. This means that in addition to the presence of technologies and equipment designed to monitor for and prevent errors from occurring within a system (e.g., lockout logic on a patient-controlled analgesia pump), humans learn specific behaviors designed to avoid, trap, and mitigate the consequences of error (Amalberti et al., 2005; Musson & Helmreich, 2004; Helmreich, 2000; Helmreich et al., 1999). In other words, humans apply these behaviors so that the errors within the system are either avoided, detected and remedied, or responded to promptly so the negative outcomes resulting from the error can be diminished. Error management builds a system that is fault-tolerant, meaning errors can occur but the system still functions successfully.

References

Agency for Healthcare Research and Quality. (2014). Discontinued medications: Are they really discontinued? AHRQ WebM&M (Morbidity and Mortality Rounds on the Web). Retrieved from https://psnet.ahrq.gov/webmm/case/325

Amalberti, R., Auroy, Y., Berwick, D., & Barach, P. (2005). Five system barriers to achieving ultrasafe health care. Ann Intern Med, 142(9), 756–764.

Helmreich, R.L. (2000). On error management: Lessons from aviation. BMJ, 320(7237), 781–785.

Helmreich, R.L., Merritt, A.C., & Wilhelm, J.A. (1999). The evolution of crew resource management training in commercial aviation. Int J Aviat Psychol, 9(1), 19–32.

Musson, D.M., & Helmreich, R.L. (2004). Team training and resource management in health care: Current issues and future directions. Harvard Health Policy Rev, 5(1), 25–35.

Reason, J. (2008). The human contribution: Unsafe acts, accidents, and heroic recoveries. Burlington, VT: Ashgate.