November/December 2010


Evidence-Based Medicine Doesn’t Preclude Common Sense

If you went skydiving, would you first ask for scientific evidence from a randomized trial that a properly functioning parachute prevents injury before you’d consider using one during your freefall? Hardly. In fact, no such study exists (Smith & Pell, 2003). Of course, some people without a parachute have survived a freefall from extraordinary heights without injury, and others have sustained injuries even when using a parachute. But it’s clear that you’d use a parachute when skydiving, even without a single randomized trial proving its effectiveness. Yet, when it comes to medicine, clinicians may be reluctant to employ any intervention absent rigorous scientific evidence regarding its efficacy.

Evidence-based medicine. This need for rigorous scientific evidence evolved from a history of medicine that’s littered with practices later abandoned after scientific scrutiny showed that they were ineffective, perhaps even harmful (Leape, Berwick & Bates, 2002). As such, we are among the many who endorse evidence-based medicine. However, when it comes to patient safety, there are significant obstacles to this approach.

Limited research on patient safety. Error prevention is still a new field that has attracted just a fraction of the funding devoted to medical research today. Thus, you’re likely to find rigorous scientific evidence for clinical interventions, drugs, and devices used to prevent complications of care that are not associated with errors, while many obvious error-reduction strategies are noticeably absent from the available research. Consequently, if you applied only evidence-based safety interventions, you could end up with an ineffectual safety program that focuses on safety issues of lesser importance than those that are actually problematic in your organization.

Feasibility issues. Obvious ethical and recruitment difficulties preclude a randomized trial of parachute effectiveness; similar problems exist for some patient safety interventions. After all, who would allow themselves or a family member to be randomized into a control group, whether that meant freefalling without a parachute or receiving a prescription that uses an abbreviation like “U” for units, when each carries anecdotal evidence of causing harm? Moreover, an institutional review board would never approve either study. The enormous scope of a study that could prove efficacy might also be a limiting factor. Take the safety practice of requiring a leading zero for doses less than one (Leape, Berwick & Bates, 2002). Perhaps only 1 in 100 clinicians will misread the dose as a whole number if the leading zero is omitted. Of those errors, maybe 1 in 5 reach the patient, and 1 in 10 of those cause significant harm; compounded, that is roughly one significant injury for every 5,000 omitted zeros. It would be incredibly difficult to carry out a controlled study large enough to prove that patient harm is reduced by using leading zeros. More to the point, is such a large and costly study needed when experience tells us that leading zeros reduce the risk of errors, some of which have caused significant patient harm?

A more balanced approach. In the end, a traditional evidence-based approach cannot be your only source for advancing patient safety. Anesthesia safety is a prime example (Leape, Berwick & Bates, 2002). Mortality during elective anesthesia has declined 10-fold in the past few decades. But this achievement was not driven by rigorous scientific evidence that certain practices reduced mortality. It wasn’t attributable to any single practice, new medication, or technology. Instead, it required a broad array of changes in processes, equipment, organizational leadership, education, and teamwork, not one of which has been singled out and proven to have a clear-cut impact on mortality. Rather, safety was achieved by applying a whole host of changes that:

  • were based on an understanding of human factors principles;
  • were based on a clear linkage between certain processes and observed adverse events;
  • were learned from the safety practices in other industries;
  • made sense, considering the potential risks and benefits of the interventions (Leape, Berwick & Bates, 2002).

These criteria, then (common sense, human factors principles, linkage between processes and adverse events, and safety practices in other industries), should not be given short shrift in favor of evidence-based interventions alone. In fact, it would be tragic to abandon safety initiatives like pharmacy IV admixture systems and computer-generated medication administration records simply because they’re not backed by rigorous scientific evidence. And to await irrefutable proof of effectiveness is simply not an option. We must make informed decisions based on the best available information and common sense.

This column was prepared by the Institute for Safe Medication Practices (ISMP), an independent, nonprofit charitable organization dedicated entirely to medication error prevention and safe medication use. Any reports described in this column were received through the ISMP Medication Errors Reporting Program. Errors, close calls, or hazardous conditions may be reported online at or by calling 800-FAIL-SAFE (800-324-5723). ISMP is a federally certified patient safety organization (PSO), providing legal protection and confidentiality for patient safety data and error reports it receives. Visit for more information on ISMP’s medication safety newsletters and other risk reduction tools.

Leape, L. L., Berwick, D. M., & Bates, D. W. (2002). What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA, 288, 501-507.

Smith, G. C. S., & Pell, J. P. (2003). Parachute use to prevent death and major trauma related to gravitational challenge: Systematic review of randomized controlled trials. BMJ, 327, 1459-1461.