Key Questions for Safety Projects

January / February 2013

Online First

Safety-related projects may arise from root cause analyses of actual incidents, from other structured risk identification efforts (e.g., failure modes and effects analysis), or from external reports of adverse events that occurred elsewhere (e.g., Joint Commission Sentinel Events). An appropriate response to such information may be to undertake an effort to review and, where necessary, revise processes and technology so that the identified event does not recur, or, when the impetus is external information, does not occur at all.

To properly define such projects, keep them on track, and assess their effectiveness, I suggest the following key questions:

  • What, exactly, is the problem?
  • What, exactly, is the proposed solution?
  • How, exactly, will the proposed solution solve/mitigate the problem identified?
    • What new issues/hazards will the proposed solution raise?
    • How will the project be implemented, over what time frame, by whom?
    • How will the solution be verified and validated?
    • What is the plan for transitioning from old to new?
  • How will we know/measure that the problem has been solved/mitigated?
    • Will backsliding be possible? If so, how will compliance be monitored and sustained?
    • What new issues/hazards has the solution raised?

In the list above, the original, core questions appear at the top level; the indented secondary questions fill in some of the operational details for producing an effective solution to a problem.

What, exactly, is the problem?
A clear definition of the issue being addressed is an essential starting point, both in defining appropriate mitigations and in the subsequent assessment of whether or not the project has been successful. Good project definition can also help control “scope creep,” in which the goals of the project morph, drift, and grow while the project is underway. While redefinition and expansion may be appropriate at times, at a minimum any change must be understood and positively acted on, without the original objective being lost. Furthermore, it may at times be necessary to assign new ideas to a subsequent or follow-up project rather than have a project endlessly expand, change, and never reach completion.

What, exactly, is the proposed solution?
The proposed solution to a safety issue should at some point clearly address the specific problem that has been identified; that is, how does this solution address the actual issue? This can prevent development of a project that, while possibly interesting, doesn’t really address the original problem. Of course it may take some time and effort to develop suitable solutions, but those solutions must always be directly linked to the problem. Further, both when candidate solutions are being weighed and when implementation begins, it should be clear exactly what the proposed or selected solution is.

How, exactly, will the proposed solution solve/mitigate the problem identified?
This question requires an explicit statement of how the project undertaken (the solution) actually addresses the original problem statement. Too often, projects develop that don’t actually or clearly address the original issue. In that case, we may get a new system or procedure, and it may be a fine system or procedure, but the original issue will remain unaddressed.

What new issues/hazards will the proposed solution raise?
An important philosophy in designing devices or systems is that safety is not an automatic by-product of earnest effort, but a task unto itself. It is therefore necessary to be pessimistic and ask “What can go wrong?” rather than simply assume that everything will go right. The Pollyannaish among us should not serve on this committee. In this regard, any new system, be it technology or task design, can introduce new challenges and hazards that the old system did not have. A simple example: implementing a wireless solution to a communication problem may make that solution dependent on an already over-utilized Wi-Fi network. Not only may the added applications fail, but older applications may suffer interference and fail as well.

Consideration of “What can go wrong?” should be an earnest, structured, and serious effort, with possibilities actively addressed before they are dismissed. This must not be just an exercise to go through, but a real part of the design process.

How will the project be implemented, over what time frame, and by whom?
Once a project approach is selected, there should be an early effort to understand its requirements in terms of human and financial resources. Many projects cannot be accomplished on the vague assumption that already busy personnel will somehow find enough time to do the new tasks within a reasonable time frame. Similarly, financial resources may be required to obtain new physical systems or to hire dedicated personnel. Standard project management techniques can be used here, such as milestone charts that identify well-defined tasks, who will do them, how long each will take, progress to date, and how the tasks are linked, e.g., Task C cannot be done until Tasks A and B are completed, but perhaps Task D can be initiated in parallel, if there are personnel to do it.
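As a rough sketch of this kind of task linkage (the task names, owners, and durations below are hypothetical, not drawn from any particular project), the dependencies in a milestone chart can be represented and checked so that no task is started before its prerequisites are complete:

```python
# Minimal sketch of a milestone plan with task dependencies.
# Task names, owners, and durations are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    owner: str
    duration_weeks: int
    depends_on: list = field(default_factory=list)
    done: bool = False

tasks = {
    "A": Task("Define problem statement", "Safety committee", 1),
    "B": Task("Select solution", "Clinical engineering", 2, depends_on=["A"]),
    "C": Task("Implement solution", "Vendor", 6, depends_on=["A", "B"]),
    "D": Task("Draft training materials", "Education dept", 3, depends_on=["A"]),
}

def ready_to_start(task_id: str) -> bool:
    """A task may start only when all of its prerequisites are done."""
    return all(tasks[dep].done for dep in tasks[task_id].depends_on)

# Example: Task C cannot start until A and B are complete,
# but Task D can proceed in parallel once A is done.
tasks["A"].done = True
print("C ready:", ready_to_start("C"))  # False: B is not yet complete
print("D ready:", ready_to_start("D"))  # True: can run in parallel with B
```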

How will the solution be verified and validated?
Even before a new procedure is developed and put in place, it is appropriate to have a test plan, and then to actually exercise it. Such a plan should address well-defined, specific attributes of the new system that can be measured under realistic scenarios. While this is always important, it may be particularly important when contractors or vendors have been engaged to implement the project. It is important to ask, “How will the contractor’s or vendor’s work be assessed?” If assessment is not based on agreed-upon criteria, there will be ample opportunity for customer and vendor to disagree on whether the project is complete, and on whether additional necessary work is within the original scope or constitutes an “add-on” at additional time and cost. Perhaps worse is when the vendor has been paid and is gone, but the system turns out not to be satisfactory.

While verification and validation are sometimes used interchangeably (and many dictionaries cross-reference them), a useful distinction can be made between them, as is done in FDA-regulated medical device design. Verification asks, “Did you create what you meant to create?” while validation addresses whether or not the thing you created actually solves the original real-world problem, and does so in a way that users will find usable.
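As a rough illustration of the distinction (the system, criteria, and numbers below are hypothetical), verification can often be expressed as explicit pass/fail checks against the written requirements, whereas validation asks whether meeting those requirements actually solves the real-world problem for real users:

```python
# Hypothetical acceptance criteria for a new alarm-notification system.
# Verification: did we build what the requirements said we would build?
REQUIREMENTS = {
    "max_alert_delay_s": 10,   # requirement: alerts delivered within 10 seconds
    "min_battery_hours": 12,   # requirement: pager battery lasts a full shift
}

def verify_alert_delay(measured_delay_s: float) -> bool:
    """Pass if the measured delay meets the written requirement."""
    return measured_delay_s <= REQUIREMENTS["max_alert_delay_s"]

def verify_battery_life(measured_hours: float) -> bool:
    """Pass if the measured battery life meets the written requirement."""
    return measured_hours >= REQUIREMENTS["min_battery_hours"]

# Validation, by contrast, asks whether meeting these numbers actually solves
# the original problem in realistic use: do night-shift staff actually receive
# and act on the alerts? That is answered by observed use, not by a unit test.

print(verify_alert_delay(7.2))    # True: requirement met
print(verify_battery_life(9.5))   # False: requirement not met
```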

What is the plan for transitioning from old to new?
It is often the case that project development occurs in the background as systems are developed and at least some training takes place. For such projects there will come a time (hopefully) when the project is, or appears to be, done and it is time to stop doing the old thing and start doing the new thing. An old model is to plan this transition for the middle of the night, when system utilization is low. A newer model is to plan the transition around a daytime shift change so that there will be maximum availability of personnel to deal with the problems that will perhaps inevitably arise as the new system is brought up. A related control is to be able to easily revert to the old system if the new system proves to be less than functional. In the software world, this sometimes has the glorious name of a “rollback,” which means that the new programs don’t work, so we need to go back to the old ones.
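One simple way to keep reversion easy, sketched below with a hypothetical routing flag (a real cutover would also involve data, interfaces, and people), is to leave the old system in place and switch work between old and new behind a single, well-controlled setting:

```python
# Hypothetical cutover control: the old system stays installed and reachable,
# and a single flag determines which system handles live work.

ACTIVE_SYSTEM = "old"   # flipped to "new" at the planned go-live time

def route_order(order: dict) -> str:
    """Send work to whichever system is currently active."""
    if ACTIVE_SYSTEM == "new":
        return f"new system handles order {order['id']}"
    return f"old system handles order {order['id']}"

def rollback() -> None:
    """If the new system proves less than functional, revert in one step."""
    global ACTIVE_SYSTEM
    ACTIVE_SYSTEM = "old"

# Go-live at a daytime shift change, with support staff on hand:
ACTIVE_SYSTEM = "new"
print(route_order({"id": 101}))
# Problems found? Revert immediately rather than limping along:
rollback()
print(route_order({"id": 102}))
```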

The need for post go-live support is also a planning issue. How much support by technical personnel and/or trainers will be needed to resolve problems and to assure that users can operate the new system? And how long will that support have to remain in place? This is another good place for pessimism rather than optimism. In this regard, as in many, it is better to be over-prepared than under-prepared.

How will we know/measure that the problem has been solved/mitigated?
In both planning and implementing a new system, it is valuable to know in advance what will be measured in order to determine whether the solution as implemented actually addresses the original problem (see the first question above). Without this measurement step, there will be something new, which may be nice, but the old problem may still remain.

A particular challenge here is solutions that are put in place to prevent rare, or extremely rare, events. In this case the adverse event is unlikely to recur simply because it was unlikely in the first place, at least when measured over some limited time frame. Prevention, therefore, may be difficult to actually demonstrate. When this is the case, it may be that other measures of value can be identified and demonstrated. In some drug studies these are called surrogate end points: it cannot be proved that the cancer has been cured, but it can be shown that certain indicators of the cancer are reduced.

Another approach here can be simulation, in which the inputs potentially responsible for a bad outcome are artificially created (under strict control and observation) to demonstrate that, if those inputs were to occur at some future time, the bad outcome would not follow.
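As a sketch of the idea (the safeguard and the injected fault below are hypothetical), a simulation test deliberately feeds the system the hazardous input that caused, or could cause, the original event and confirms that the new control intercepts it:

```python
# Hypothetical safeguard added after a wrong-patient medication event:
# the system must refuse an order when the wristband ID and the order ID differ.

def medication_allowed(wristband_id: str, order_patient_id: str) -> bool:
    """New control: block administration on any patient-ID mismatch."""
    return wristband_id == order_patient_id

def simulate_mismatch_event() -> None:
    # Deliberately inject the kind of fault that led to the original incident.
    injected_wristband = "PT-1042"
    injected_order = "PT-1024"   # transposed digits: the simulated hazard
    assert not medication_allowed(injected_wristband, injected_order), \
        "Hazardous input was not blocked by the new control"
    print("Simulated mismatch was blocked as intended.")

simulate_mismatch_event()
```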

Will backsliding be possible? If so, how will compliance be monitored and sustained?
In some solutions the old technology will be gone, and the question of doing things the old way will be moot. In others, particularly human-mediated procedures, new behaviors may be short-lived, and people will be able to revert to their old methods. For example, a requirement for cross-checking patient identification may be instituted, but it may later be found that, without strong supervision, people stop doing it. Or a rigorous methodology may be created, but only a superficial version of it may survive after some time. The implementation of “time-outs” could fall into this category if conducting the time-out becomes a hurried, rote, low-effort behavior rather than the careful and earnest procedure that was intended.

What new issues/hazards has the solution raised?
Prediction and mitigation of unintended effects was addressed above in the planning stage. Once the new system is implemented, it is appropriate to be actively vigilant for side effects that were not designed out, and for others that had not been predicted. Alarm communications systems can be an example here if the expectation of external alarm monitoring leads to lower rather than higher local vigilance, while at the same time the central system is not as effective as intended.

Summary
Designing and implementing improved safety procedures can certainly be challenging. As with most endeavors, active and effective management can be helpful in assuring that a project is well-defined, well-implemented, and well-monitored for effect and consequences. Key questions such as those addressed here can be helpful in this regard if taken seriously. On the other hand, key questions can themselves become just another rote activity: e.g., “OK, let’s answer these stupid questions that they are making us answer so we can get back to work.” Management tools cannot substitute for seriousness of purpose.

William Hyman is professor emeritus of biomedical engineering at Texas A&M University. He now lives in New York where he is adjunct professor of biomedical engineering at The Cooper Union. Hyman may be contacted at w-hyman@tamu.edu.