Patient Safety and Health IT

Classen predicts IOM report will be ‘To Err Is Human’ for technology vendors.

Patient safety was one of many topics on the agenda at HIMSS12—the health information technology (IT) conference that drew more than 37,000 professionals to Las Vegas Feb. 21–24. One session in particular highlighted how far we still have to go in providing reliable, safe care and in understanding the role of health IT in safety.

In an hour-long review of the Institute of Medicine (IOM) report, Health IT and Patient Safety: Building Safer Systems for Better Care (2011), David Classen, MD, acknowledged that health IT has the potential both to improve safety and to cause harm. He illustrated how little we know about the incidence of harm in general—now 12 years after the IOM’s report To Err Is Human—and especially how little we know about the effect of IT systems on safety after they have been implemented in hospitals. Put into practice, the committee’s recommendations would greatly improve transparency and understanding and would significantly alter the process of health IT procurement and implementation for all, including vendors.

Classen, a member of the committee that authored the report, described the broad three-part charge the committee received from the IOM: 1) to summarize the current state of both patient safety and health IT in the United States, 2) to summarize the impact of health IT on patient safety, and 3) to make recommendations for public- and private-sector actions to use health IT to maximize safety improvement.

The committee surveyed the literature for studies that have measured the number of injuries and deaths resulting from preventable adverse events since To Err Is Human and found only three (Levinson, 2010; Landrigan et al., 2010; Classen et al., 2011). The results of those three indicate that our patient safety problem is growing and may have been underestimated in the past. The higher incidence of harm found in recent studies also suggests that our understanding of these problems is deeper than it was when they were first measured. Classen also commented that measuring harm due to preventable error is still imperfect and represents a significant challenge of its own. In these studies, even top-performing hospitals using electronic records and voluntary reporting systems supported by a non-punitive “just culture” are capturing only a very small percentage of the harm experienced by patients. That said, a 2010 report from the Department of Health and Human Services’ Office of Inspector General estimates 180,000 deaths per year due to adverse events among Medicare patients alone—much higher than the 98,000 deaths reported as the high end of the IOM’s estimate in 2000.

Not surprisingly, the committee also found it very difficult to evaluate the extent of harm that can be traced back to health IT. As the more than 1,100 companies exhibiting at HIMSS12 demonstrated, there is a multitude of health IT products on the market, with a diversity of impacts on workflow. Very little evidence has been published about the effect of technology on safety (most studies relate to medication safety), and vendor contracts often include legal barriers to studying performance.

Following its research, the committee recognized the importance of viewing health IT as part of an ecosystem of interconnected effects. Patient safety efforts often include breaking down “silos” of activity and influence, and health information technology should be no exception. Trying to improve safety by focusing on one area, such as health IT, in isolation is not likely to be successful. Classen said, “If we think we can oversee health IT alone, without considering the larger system, we’re probably not going to lead to significant improvements in safety.”

Looking directly at the impact of health IT on safety, the committee recognized the importance of workflow and usability and the challenges posed by customization and interoperability. One of the committee’s most provocative observations concerns the limitations of certification as a way to ensure the safety of health IT products. Certifying vendor products off the shelf (as in the system currently authorized by the Office of the National Coordinator for Health Information Technology) may have very little to do with how those products behave in real life. Classen said the committee recognized that, “Once you’ve seen one implementation of a vendor product, you’ve seen one implementation of a vendor product.” The committee recommends thinking about the “life cycle” of implemented technology and devising a method for ongoing safety testing of these systems after they have been installed.

The Leapfrog Group offers an evaluative test for installed computerized provider order entry (CPOE) systems, which the committee reviewed as the only known example of post-deployment testing. Classen was co-author of a study of results from the Leapfrog CPOE test, and those results were sobering. Hospitals use the Leapfrog test to check their installed EMR systems for critical medication safety functionality such as high-risk drug-drug interactions, allergies, and dosing problems. More than 400 hospitals have used it in the past four years to test their deployed systems. The 2010 study looked at the 62 hospitals that had taken the test at that point and found that the CPOE systems caught problems between 10% and 82% of the time. These are real-world systems deployed in sophisticated hospitals. Even more concerning, a subset of orders included in the test and known to reliably kill patients were stopped by the CPOE systems only 53% of the time. These hospitals—not to mention their patients—assume that the systems are 100% reliable. After asking, “How could that be?”, Classen explained that “these are highly complex and dynamic systems, and you don’t just push a button and turn them on. All sorts of things are going on all the time in these systems, and yet we don’t have any routine post-deployment system for testing them.”

Classen then offered a graphical display of the Leapfrog test results, which showed that the safety performance of each deployed system was tied only in part (about 27%) to which vendor product was installed, illustrating the limited relevance of certification as an indicator of safety. There was tremendous variation in results within each vendor’s group of hospitals. In fact, the best performance was demonstrated by Partners HealthCare in Boston, which has refined a home-grown system over years of use and evaluation. Classen pointed out that “It’s all in the implementation. … All the vendor groups should be tightly aligned, and they aren’t.”

Overall, the committee made a number of strong recommendations for increased transparency to address the variability in performance and safety:

  • vendors should be prohibited from including “gag” or “hold harmless” clauses in their contracts, or any other provisions that limit hospitals’ ability to share information about safety and performance problems (this recommendation drew applause from the audience),
  • a federal agency should play a major role in sharing performance measures and best practices, and
  • vendors should make information about their products publicly available, starting with electronic health record (EHR) products and modules and eventually extending to all health IT products, from EHRs to health information exchange and patient engagement offerings.

This post covers only some of the points Classen made in his presentation and only a portion of the topics covered in the report. The report is available for free download and certainly deserves further discussion.