
March/April 2010

Human Factors 101

Improve Reliability in Healthcare with Human Factors Engineering

Healthcare technology and training have advanced remarkably in the past 100 years, from the discovery of penicillin to the first heart transplant, but there is a downside to this progress. To quote Sir Cyril Chantler, former Dean of the Guy’s, King’s and St. Thomas’ Medical and Dental Schools in London, “Medicine used to be simple, ineffective, and relatively safe. Now it is complex, effective and potentially dangerous.”

Thanks to an uninterrupted series of remarkable advances in medicine, healthcare training in the United States is now among the best in the world, and we have access to the most advanced technology. Given those resources, why does the United States perform so poorly when compared to other developed countries on metrics such as infant mortality, life expectancy at birth, and attributable mortality? How is it possible that 44,000 to 98,000 Americans die from medical errors each year (IOM 2000)?

High-quality care requires much more than dedicated, well-trained providers with access to advanced technology. Healthcare is so complex and tightly coupled that no one person can anticipate all the errors that might occur. Providers continue to rely on the “weak aspects of cognition” (short-term memory, attention to detail, vigilance, multitasking, etc.) to get the results we want and that our patients deserve. We expect providers practicing on the sharp end of healthcare to do the right thing every single time and never make a mistake. Regrettably, that is impossible.

The Role of Latent Errors
As healthcare providers, we assign high value to personal autonomy and have the illusion that all of our actions are the result of free will. In fact, many of our decisions and actions are constrained by the environment and circumstances in which we find ourselves. When an error occurs, we often believe that the individual closest to the error chose that course of action, and we attribute blame to that individual. There is a natural tendency to assume that a serious error is the result of one serious mistake. Most serious events arise, however, from a series of flaws in the organization’s safety systems. These flaws may go unnoticed for years before they accumulate to the point that someone takes notice.

James Reason, professor emeritus of psychology at the University of Manchester and a recognized expert in human error, refers to these unnoticed flaws in our safety systems as “latent errors.” Dr. Reason notes that latent errors are constantly changing and moving, with new ones being introduced all the time. Given the right environmental circumstances, a number of these latent errors may combine with an active error (a slip, lapse, or mistake made by the provider on the sharp end of healthcare) to cause a serious event.
We need to design our systems to help providers “do the right thing,” avoid active errors, and detect and eliminate as many latent errors as possible before they combine and cause a serious event.

Not only do we rely on weak aspects of cognition in a hectic and chaotic environment, but we also expect our providers to integrate an ever-expanding information base into their practice. Each day, approximately 5,000 articles are published in 22,000 biomedical journals worldwide. Providers are expected to take this information, evaluate its validity, determine its applicability to their patients, and recall it instantaneously when needed. That, too, is impossible.

Human Factors Engineering
Improving the quality, safety, and reliability of healthcare requires a paradigm shift. Many of the approaches that will help providers improve their reliability have already been introduced in other industries. One of the most powerful is the application of human factors engineering (HFE). HFE is a science that studies how we interact with our environment and with each other and strives to optimize those interactions. It looks at ways to use the environment to help us to do the right thing. HFE works to optimize the design of our care processes, the design of our communication tools, and the way our teams function.

The aviation industry was one of the first to introduce the concepts of HFE. A highly visible example of its early application occurred in 1943, when Lt. Alphonse Chapanis was asked to investigate why pilots and copilots of P-47s, B-17s, and B-25s frequently retracted the wheels instead of the flaps after landing, causing the plane to collapse onto the runway. Chapanis found that the wheel and flap controls were identical and mounted side by side, so they were easily confused. To avoid this confusion, a rubber-tired wheel was attached to the end of the wheel control and a small wedge-shaped end to the flap control, which led to a significant reduction in these errors.

Anesthesiology was the first medical specialty to broadly apply many concepts of HFE to improve safety. From 1940 through 1980, there had been only small improvements in anesthesia-related mortality, with a mortality rate in the range of 1 to 2 deaths in 10,000 cases. With increasing national attention and the help of the American Society of Anesthesiologists, the Anesthesia Patient Safety Foundation (APSF) was launched. Using the concepts of HFE, including gas ratio protection to prevent accidental shut-off of oxygen flow, adoption of pulse oximetry and capnography, and development of guidelines for difficult airways, anesthesia-related mortality has dropped to less than 1 death in 200,000 cases (Stoelting, n.d.).

Psychology and Design
Much of HFE focuses on the psychological aspects of design. These are concepts that Donald A. Norman, professor emeritus of cognitive science at the University of California, San Diego, and professor of computer science at Northwestern University, and Kim Vicente, professor of mechanical and industrial engineering at the University of Toronto, have written about extensively. Norman (1990) and Vicente (2006) state that there are two basic forms of knowledge. The first is knowledge in the head, the information we hold in human memory. Healthcare has focused on this type of knowledge, expecting providers on the sharp end of healthcare to maintain and assimilate all the information they need to provide efficient, safe care every time. The second is knowledge in the world. This information is part of the environment, providing cues, such as color-coded gas lines in operating rooms, that help clinicians do the right thing.

Humans have an incredible ability to make sense of the world from simple cues. Much of this ability comes from our skill in matching things in our environment with previous experiences, helping us form mental models to use in similar situations in the future. We store rule-based scripts in our long-term memory of what to do when we encounter circumstances matching these mental models.

For example, when we see a child with a red, itchy skin rash appearing first on the abdomen and then spreading over the rest of the body, we use our rule-based scripts to identify chicken pox. This ability allows us to address complex situations and provide appropriate care. However, this process can also cause problems. Attempting to create mental models, we sometimes “cherry-pick” relevant data and rationalize away contradictory data. We all have representations in our memory of what a quarter looks like, but few of us could describe it in detail, such as which way George Washington is facing or where the word “Liberty” is located on the coin. The representation in our memory is only partial. Remember when the U.S. Mint introduced the Susan B. Anthony dollar? Its characteristics were too close to those of the quarter, so when people looked at it quickly, they saw a quarter, discounting contradictory features.

In the clinical environment, imagine a 10-cc vial of concentrated potassium chloride and a similar-looking 10-cc vial of sterile water. Now imagine a clinician picking up the potassium, thinking it is the sterile water. This mistake may involve multiple contributing factors, ubiquitous in our clinical environments, such as insufficient lighting, distracting noise, interruptions, insufficient time, and poorly situated medications, all of which may add to the likelihood of error.

As we develop our mental models, we can only conceptualize what we can see. This leads to an important rule of human factors engineering: make things visible. How many of us have become frustrated trying to operate organizational voicemail systems? The same button may control multiple functions depending on the sequence in which the buttons are pressed, the functions are not visible, and the user is expected to keep those sequences in memory. Add to that a cellular voicemail system and a home voicemail system, all requiring different sequences to activate the same functions. Why can’t the functions be visible on the phone itself, providing cues for users? Why can’t the functions be standardized across all phone systems? Think about how many errors have occurred in healthcare because things were not visible: IV misconnections, incorrectly programmed IV pumps, an automated defibrillator that won’t work because the shock button was covered and not visible.

Making things visible in a complex healthcare environment can make all the difference in chaotic situations. As our patient populations have become more and more complex over the years, maintaining multiple IV infusions has become the norm rather than the exception. Several years ago, when healthcare providers began having difficulty keeping track of which drip was attached to which tubing, some healthcare organizations began color-coding each IV drip and its corresponding tubing. A nurse could see a green sticker on the IV bag and follow the tubing, which was also tagged with a green sticker, to the patient’s IV site. This simple visual cue has prevented many medication errors.

Take a look around at the environment in which you work. There are probably more visual cues helping you do the right thing than you realize. Then review your organization’s event reports from the last 6 months: where could a visual cue have prevented an error or near miss?

This is the first in a series of columns about human factors engineering and its application to healthcare. In the next article, we will introduce the importance of good affordances and the appropriate use of constraints.

Brian Fillipo is the vice president for medical affairs at Bon Secours St. Mary’s Hospital in Richmond, Virginia. He may be contacted at Brian_Fillipo@bshsi.org.
Sherri Barnhill is the safety and quality coordinator for patient services at Yale-New Haven Hospital in New Haven, Connecticut. She may be contacted at sherri.barnhill@ynhh.org.

References
Institute of Medicine (IOM). (2000). To err is human: Building a safer health system. L. T. Kohn, J. M. Corrigan, & M. S. Donaldson, (Eds.). Washington, DC: National Academy Press.
Norman, D. A. (1990). The design of everyday things. New York: Doubleday.
Reason, J. (1997). Managing the risks of organizational accidents. England: Ashgate.
Roscoe, S. N. (1992). From the roots to the branches of cockpit design: Problems, principles, products. Human Factors Society Bulletin, 35(12), 1-2.
Stoelting, R. K. (n.d.). A brief history of the APSF. Available at http://www.apsf.org/about/brief_history.mspx
Vicente, K. (2006). The human factor. New York: Routledge.