
September / October 2007

From Punitive Action to Confidential Reporting
A Longitudinal Study of Organizational Learning from Incidents

Common sense and practical experience dictate that organizations with effective reporting systems are able to learn from smaller mishaps and incidents so as to forestall serious workplace accidents (Reason, 1997; Connell, 1998; Johnson, 2001; Sullivan, 2001). Confidentiality is clearly important in mediating the number of reports. Systems that have shifted to confidentiality all show a huge increase in willingness to report, as measured by the number of reports received (e.g. Madsen, 2001; Noerbjerg, 2004). The consensus is that fear of retribution, whether by immediate superiors, by others in the employing organization, or by another agency, hampers people’s willingness to report. Conversely, non-punitive systems generate more reports — and by extension, more learning — because people feel free to talk about their troubles (particularly if they see their line managers as involved in creating those troubles). Indeed, confidential systems whose contributors have been threatened with exposure through, for example, judicial proceedings, show a dramatic drop in reporting, or even a complete drying-up (Dekker, 2003). Reports that will be treated confidentially also differ in substance from other forms of occurrence reporting — they typically hold greater candor and higher psychosocial resolution (O’Leary & Pidgeon, 1995).

While the response to, treatment of, and countermeasures after an incident are crucial for the willingness of (other) employees to report (Madsen, 2001), there is less empirical basis for a connection between the amount of reporting and the kind and quality of learning that takes place. A large part of this gap could lie in the difficulty of defining organizational learning, and thus of tracing how interventions such as the encouragement of reporting (e.g. by making it confidential) influence it. For example, Salas et al. (in press) reviewed 58 studies of safety interventions in several industries and were forced to conclude that the effects on organizational learning were so confounded as to be virtually impossible to demonstrate. Moreover, in very safe systems the statistical baseline of serious incidents (let alone accidents) is so low that numerical demonstrations of learning (by counting fewer serious events) are impossible too (Reason, 1997; Amalberti, 2001). In fact, as failure rates fall, the ability to learn may fall as well (AMA, 1998).

Confidential reporting systems are thought to help in organizational learning because they can reveal safety problems encountered by individual reporters that would otherwise never have become known to the rest of the organization (O’Leary & Chappell, 1996). Stories of individual encounters with risk, if distributed back into the operational community, represent a powerful vehicle for the kind of vicarious learning that contributes to the learning cultures of high-reliability organizations (Rochlin, 1999). But this is not the only mechanism, and on its own it underestimates the analysis and intelligence necessary to make sense of reported data. Reporting systems also help organizations because they allow the mining and assemblage of a diversity of data into a bigger picture — data points that individually would not reveal systemic vulnerabilities or safety problems.

An example is the confidential NASA ASRS in the United States (the National Aeronautics and Space Administration’s Aviation Safety Reporting System), which is one of the largest safety reporting systems, receiving an annual average of 30,000 reports (Connell, 2002). A critical ingredient in ASRS’ success is its impartiality and independence from the regulator and enforcement agencies, as well as from reporters’ own employing organizations (Reynard et al., 1986). Reports are analyzed by domain experts and then shunted along various routes toward learning, from targeted alerts to manufacturers, operators, or other industry stakeholders, to widely distributed newsletters that cover a recent set of reports along with commentary.

A reporting system such as ASRS recognizes that individuals’ ability to make progress on safety is limited. People and organizations may miss or misperceive vulnerabilities and how they might come together to create pathways to failure (AMA, 1998). Learning, then, starts with pooling the diverse data, connecting the dots through expert insight, and recognizing systemic vulnerabilities. Yet analytic resources at ASRS are limited too, so learning is often reduced to sending stories of individual risk encounters back to others who might end up in the same situation. It could of course be argued that learning is not even complete until the insight engenders some type of change in the industry, but following up on this is difficult, costly, and largely outside the stated scope of ASRS.

While entertaining no illusions about our ability to illuminate an area as huge as organizational learning, we wanted to find out more about the connection between confidential reporting and learning. Particularly, we wanted to examine whether it is chiefly the lack of retribution that makes people report more (and report more useful information, which promotes organizational learning), or whether there are additional intervening variables at work.

Method
In the research reported here, we were able to trace a safety-critical organization over a period of 2 years as it attempted to convert from line-management-driven punitive incident responses to a confidential reporting system run by the safety staff. The organization, which itself wishes to remain anonymous, employed a total of 1,400 people, of whom 400 were front-line operators — those in direct operational contact with the safety-critical process. It had run up against the limits of the so-called blame cycle (Reason, 1997). Incidents had been seen as the result of human error, triggering reprimands and extra training for individuals, which often resulted in a repetition of the incident (but by a different operator) as basic working conditions were left unchanged. While the organization thought it was doing what it could, the incident count did not go down.

As often happens, this opened a window for new approaches, and the organization was interested in learning about different ways of dealing with incidents and their reporters as a possible route to greater learning. With guidance, a safety staff was set up and given a broad mandate for devising an incident report collection and analysis process. The basic transition was as follows (and happened about 6 months into the 2-year project reported here):

 

  • Before, the employee involved in an incident had to report to his or her line manager, who would then devise corrective actions (mostly a reminder to watch out, some extra coaching, or retraining for the individual involved). Reporting was hardly voluntary; employees were compelled to report on their own safety performance problems because they knew that others who interacted with their safety-critical process would otherwise discover and report them — something that could lead to even harsher consequences.
  • After the transition, the employee could bypass line management and report the incident (on paper or in person) to a newly revamped safety staff (consisting of operators), who would then try to extract broader learning leverage from the reported occurrence, often together with the person involved. This person could not be linked to the occurrence by anyone other than the safety staff.

 

During the 2 years of this project, we interviewed numerous participants at different levels in the organization, and were closely involved with the developing safety staff and its activities. Interviews were structured around the following 10 questions. Questions 1 to 7 were asked during interviews in the period before the transition; questions 4 to 10 were asked after it.

 

  1. Describe the process of filling in the reporting form and what happens afterwards. What feedback do you get? Did you get an interview with your line manager, and how did you experience that?
  2. Describe the process of operational incident reporting and incident management within the company. How and when do you fill in a report?
  3. Was there any focus on learning? If yes, how?
  4. How accessible were the reporting forms?
  5. Was it easy to fill in a report?
  6. What did you see as the purpose of submitting a report?
  7. Do you feel that your interview and the report subsequently written about it really captured the essence of the incident?
  8. Does the confidential process have an influence on your motivation to report incidents?
  9. Have you observed any shift away from using reminders and procedures as countermeasures to achieve change within the organization?
  10. Are behavior-directed programs still used as a means for making progress on safety?

 

Our main group of interview participants consisted of operators filing reports (both before and after the transition). We sought to answer how well operators liked the new reporting scheme; what, if anything, they learned from participating in it that they did not learn before; what changes in people’s job behavior occurred that could be linked to the new reporting scheme; and whether there were any other tangible results from it, particularly in terms of producing greater leverage for organizational learning. For the latter purpose we also reviewed considerable archival material, particularly incident reports written inside the organization, to learn more about the conceptualization of risk sources and proposed countermeasures before and after the transition.

Results and Discussion
Taken at face value, the findings confirm that fear of retribution hampers safety reporting. When the organization shifted from line-management-based evaluations of reports to a confidential safety staff dealing with reports, the number of reports went up. People’s reported willingness to send them in went up too, as did the relevance and resolution of their content.

Confidentiality Revisited
But more seemed at play. Before the transition, employees actually turned out to be quite ready to confess an “error” or “violation” to their line manager. It was almost seen as an act of honor. Reporting it to a line organization — which would see this as a satisfactory conclusion to its incident investigation — produced rapid closure for all involved. Management would not have to probe deeper, as the operator had seen the error of his or her ways and had been reprimanded and told or trained to watch out better next time. For the operator, simply and quickly admitting an error forestalled more or deeper questions from line managers, and could help avert career consequences, in part by preventing information from being passed up the line or on to other agencies (e.g. the industry’s regulator). Fear of retribution, in other words, did not necessarily discourage reporting. In fact, it encouraged a particular kind of reporting: a mea culpa with minimal disclosure that would get it over with quickly for everybody. “Human error” as cause seemed to benefit everyone — except organizational learning. Here is an example:

I didn’t tell the truth about what took place, and this was encouraged by the line manager. He had made an assumption that the incident was due to one factor, which was not the case. This helped me construct and maintain a version of the story, which was more favorable for us (the frontline employees).

First and Second Stories
In the few cases where reports of errors did go up the line into the organization before the transition, directives typically came back exhorting frontline staff to watch out more carefully for that particular problem or to adhere more stringently to a rule or procedure that already existed. What was lacking was the notion that organizational learning through reporting happens by identifying systemic vulnerabilities to which all operators could be exposed, not by telling everybody to pay more attention because somebody, on one occasion, did not. Only by constantly seeking out its vulnerabilities can an organization develop and test more robust practices to enhance safety (AMA, 1998; Cook, 1998). But this puts a particular premium on what kind of reports — and what kind of reporter treatment — would be useful. If learning hinges on the ability to dig out systemic vulnerabilities, then reports and organizational encounters with reporters need to go beyond the phenotypical “errors” or “violations” that may have served as the report’s trigger. They need instead to engage the so-called “second stories” (Woods et al., 1994) (see Table 1).

Table 1. The Contrast Between First and Second Stories of Failure

First stories:
  • Human error (by any other name: violation, complacency) is seen as a cause of failure.
  • Saying what people should have done is a satisfying way to describe failure.
  • Telling people to be more careful will make the problem go away.

Second stories:
  • Human error is seen as the effect of systemic vulnerabilities deeper inside the organization.
  • Saying what people should have done does not explain why it made sense for them to do what they did.
  • Only by constantly seeking out its vulnerabilities can organizations enhance safety.

The distinction between first and second stories of failure has been useful in driving change across several domains (e.g., AMA, 1998; Cook, 1998; Dekker, 2002), and it provided a good hinge in our domain, too. First stories suggest that an outcome could simply have been avoided if the people involved had invested a little more effort, or had been more careful. They fall back on “human error” as an explanation and stop there, leaving people and organizations to wonder how they can possibly cope with the unreliability of the human element in their midst. Here is an example of a first story — a de-identified organizational memo documenting the countermeasures after a particular incident:

The incident has been discussed with the concerned operator, pointing out that priorities have to be set according to their urgency. The operator should not be distracted by one single problem and neglecting the rest of his working environment. He has been reminded of applicable rules and allowable exceptions to them. The investigation report has been made available to other operators by posting it on the internal safety board.

Here is another:

Head of operations interviewed the operators after the incident. They were reminded about correct and safe planning as well as good monitoring of their process in case of a slightly tight situation.

Personal attributions would be made to help explain why things went wrong (for example, a line manager blaming an operator’s “aggressive attitude”). Second stories, in contrast, make different attributions to find out why things go wrong. They reveal the multiple conflicting goals, pressures, and systemic vulnerabilities beneath the “error” that everybody in the system is exposed to. Second stories use human error as a starting point, not as a conclusion. Digging for second stories is crucial to learning as it promotes the discovery of systemic vulnerabilities. Recognizing these is a precondition for making organizational investments to cope with the real sources of risk: the genotypical contributors to failure.

In some cases before the transition, safety improvements were thought to result from getting rid of “bad apples” who contaminated or undermined an otherwise safe system. Individuals were seen as sole sources of failures and problems. As per one memo:

The involved trainee has been terminated, he is not working as an operator any more. His incident will cause further investigation about roles and responsibilities and may lead to disciplinary sanctions.

After the transition, such individually oriented countermeasures became rare. Incident reports and investigations came up with deeper sets of contributory factors that could not be ignored and that took line management into different areas than before. Learning became possible because systemic vulnerabilities had been identified, reported, studied, contextualized, and checked against operational expertise.

Safety Reports and Levers for Learning
After the conversion to a confidential system run by the safety staff, the safety investigation reports written on the basis of operator interviews and other data typically began to contain a larger set of contributory factors. They also shed language such as “the operator should have…” or “if only the operator had…”, instead trying to probe the reasons why it made sense for operators to do what they did. This automatically offered an entry point into second stories, as investigators were forced to dig deeper into the organization for systemic reasons behind operators’ performance. Simple causal statements gradually gave way to more complex etiologies that could take an entire paragraph. Operators felt that levers for organizational learning were being identified, in sharp contrast to the previous regime. Here is a spontaneous reaction:

I congratulate you with this report. I only hope that your suggestions will be heard and actions will be taken at higher echelons. This way we can all profit from one incident.

Getting to second stories is clearly a precondition for finding these leverage points and making systemic changes to working circumstances (see also Woods & Cook, 2002). But this requires that incident reporters are met not only in a non-jeopardizing setting, but also by somebody who understands their work, who can ask the right questions and ask them legitimately, and who can enter into a meaningful dialogue to jointly discover more. Of course, identifying systemic leverage points does not guarantee organizational learning, but it is a precondition for it.

Employee Empowerment
The shifting nature of interviews with the recipients of incident reports (first, line managers, who may have entertained a distant view of real practice; then safety staff, consisting of operators closely connected to the actual nature of the work) introduced an element considered key in the creation of a safety culture: employee empowerment (see Wiegmann et al., 2002). Offering operators the opportunity to actively contribute to the conceptualization of risk and the search for the systemic vulnerabilities underlying it appears to be motivating. In fact, interviews revealed that the chief reason why operators’ willingness to report went up was not the lack of retribution, but rather the realization that they could “make a difference.” Giving operators the leverage and initiative to help achieve safety gains turned out to be a strong motivator to report. It gave them part ownership in the organization’s safety record.

An important factor in making this work turned out to be the legitimization of questions about operator performance and the context in which it occurred. In the organization studied here, that was achieved by having the safety staff consist of operational employees:

It is very good that a colleague, who understands the job, performs the interviews. They asked me very good questions and pointed in directions that I hadn’t noticed. It was very positive compared to before. Earlier you never had the chance to understand what went wrong. You only got a conclusion to the incident. Now it is very good that the report is not published before we have had the chance to give our feedback. You are very involved in the process now and you have time to go through the occurrence. Before you were placed in the hot seat and you felt guilty. Now, during interviews with the safety staff, I never had the feeling that I was accused of anything.

Raising Awareness
Before the transition, organizational learning was thought to be accomplished through reminders and reprimands, and through the top-down dispensing of awareness about a problem that a particular operator had been exposed to. While raising awareness of safety problems is generally not thought to have any sustained effect (Reason, 1997; Johnson, 2001), results here indicate that it can have such an effect, but only under near-perfect circumstances. In particular, awareness should be raised by a peer, somebody who has the legitimacy and knowledge to speak about the issue. It should be specific enough to target recognizable situations. Discussions work much better than posters, and one-on-one instruction works better still. A sustained effect also demands follow-up and appropriate repetition.

Table 2. Results of the Conversion From Punitive Response to Confidential Reporting

 

Conclusion
While our results do not contradict the basic wisdom of confidential reporting, they suggest that employees’ willingness to report hinges on more than a lack of fear of retribution. The results point to a more complex relationship between the probability of retribution and reporting. In the old, punitive system studied here, employees were actually eager to report (a particular version of events!) precisely so that they could get off the hook. Our results show that willingness to report could be mediated less by a fear of retribution and more by a feeling of empowerment: of being able to cooperate in creating organizational safety, and of feeling ownership, a stake, or co-responsibility for the safety record. The transition reported here gave employees precisely that, something that not only triggered congratulatory comments from operators, but actually provided the organization with new leverage points for learning.

Offering people the ability to construct second, deeper stories of an incident appears to be a basic precondition for helping organizations learn and improve. This must be done together with, and facilitated by, a safety staff that demands no prestige and consists of experts who can legitimately engage employees in discourse around operational matters. Once such relationships are in place, the creation of employee awareness cannot be dismissed as an unsustainable mechanism of organizational learning. Again, it turns out to be more complex. The creation of awareness is possible and sustainable, but only where it is dispensed by experts on the subject, preferably in one-on-one instruction rather than through broad announcements (e.g. posters), and targeted to specific instances of practice.

Organizational learning becomes a separable activity only if we use the most mechanical of metaphors for an organization: that of a machine with parts and interconnections, where learning is a matter of polishing or replacing parts and adjusting interconnections. But learning is ongoing. Learning is part of an organization’s normal adaptive life. A better analogy may be organizations as living systems (see Capra, 1996; Hollnagel, Woods & Leveson, 2006). Learning then shifts to a consideration of the organization’s ability to recognize, adapt to, and absorb perturbations that may take it outside its design base. Learning is about constantly monitoring whether that ability, that organizational resilience, is still present. This involves calibrating the organization’s models of risk — are they still up to date? Our research here followed an organization that was learning how to learn. It had concluded that its model of risk (unreliable people in an otherwise safe system) was obsolete, or at least no longer returning any valuable lessons. The logical endpoint of that journey — for the organization studied here as well as others — should be the realization that learning is never complete; that the knowledge base from which the organization derives its assumptions, its questions, and its examinations of its own operations is forever incomplete and provisional. Learning how to learn involves a second-order commitment: a relentless monitoring of how the organization is learning from failure, what models of risk those learning practices are based on, and whether they still apply.


Sidney Dekker is professor of human factors and system safety and director of research at Lund University School of Aviation in Sweden. Author of Ten Questions About Human Error (Erlbaum, 2005) and The Field Guide to Understanding Human Error (Ashgate, 2006), he has been appointed as scientific advisor on healthcare system safety to the Winnipeg Regional Health Authority in Canada, and will be visiting professor in the Centre for Research Excellence on Patient Safety, Department of Epidemiology and Preventive Medicine at Monash University in Melbourne, Australia. Dekker holds a PhD from Ohio State University. He may be contacted at Sidney.Dekker@tfhs.lu.se.

Tom Laursen is an air traffic controller with professional experience in Denmark, Bahrain, and Switzerland. He has worked in safety management and has been head of incident investigation, implementing a new confidential reporting system that involved a wholesale change of investigation methods and the creation of a new safety organization.

References

Amalberti, R. (2001). The paradoxes of almost totally safe transportation systems. Safety Science, 37, 109-126.

American Medical Association. (1998). A tale of two stories: Contrasting views of patient safety. Report from a workshop on assembling the scientific basis for progress on patient safety. Chicago, IL: National Patient Safety Foundation at the AMA.

Capra, F. (1996). The web of life: A new scientific understanding of living systems. New York, NY: Anchor Books.

Connell, J. C. (2002). Voluntary, confidential safety reporting in aviation: The NASA Aviation Safety Reporting System. Moffett Field, CA: NASA Ames Research Center.

Cook, R. I. (1998). Two years before the mast: Learning how to learn about patient safety. Proceedings of Enhancing Patient Safety and Reducing Errors in Health Care, Rancho Mirage, CA, November 8-10, 1998.

Dekker, S. W. A. (2002). The field guide to human error investigations. Aldershot, UK: Ashgate.

Dekker, S. W. A. (2003). When human error becomes a crime. Journal of Human Factors and Aerospace Safety, 3(1), 83-92.

Hollnagel, E., Woods, D. D., & Leveson, N. G. (2006). Resilience engineering: Concepts and Precepts. Aldershot, UK: Ashgate.

Johnson, C. W. (2001). The limitations of aviation incident reporting. Proceedings of the HCI Aero 2000: International Conference on Human-Computer Interfaces in Aeronautics (pp. 17-22).

Madsen, M. D. (2001). A study of incident reporting in air traffic control: Moral dilemmas and the prospects of a reporting culture based on professional ethics. Workshop on the Investigation and Reporting of Incidents and Accidents (IRIA 2002), Glasgow, UK.

Noerbjerg, P. M. (2004). The Danish non-punitive reporting system. Civil Air Navigation Services Organization (CANSO) News, April/May, 1, 4-5.

O’Leary, M. & Pidgeon, N. (1995). Too bad we have to have confidential reporting programmes. Flight Deck, 16, 11-16.

O’Leary, M. & Chappell, S. (1996, October). Confidential incident reporting systems create vital awareness of safety problems. ICAO Journal, 1, 11-13 & 27.

Reason, J. T. (1997). Managing the risks of organizational accidents. Aldershot, UK: Ashgate.

Reynard, W. D., Billings, C. E., Cheaney, E. S., & Hardy, R. (1986). The development of the NASA Aviation Safety Reporting System (NASA Reference Publication 1114). Palo Alto, CA: NASA Ames Research Center.

Rochlin, G.I. (1999). Safe operation as a social construct. Ergonomics, 42 (11), 1549-1560.

Sullivan, C. (2001). Who cares about CAIR? Annual Conference of the Australia and New Zealand Society of Air Safety Investigators, Cairns, Australia.

Salas, E., Wilson, K. A., Burke, C. S., & Wightman, D. C. (In press). Does CRM training work? An update, extension, and some critical needs. Human Factors.

Wiegmann, D. A., Zhang, H., von Thaden, T., Sharma, G., & Mitchell, A. (2002). A synthesis of safety culture and safety climate research (Technical report ARL-02-3/FAA-02-2). Urbana-Champaign, IL: Aviation Research Lab, University of Illinois.

Woods, D. D., Johannesen, L. J., Cook, R. I., & Sarter, N. B. (1994). Behind human error: Cognitive systems, computers and hindsight. Columbus, OH: CSERIAC.

Woods, D. D., & Cook, R. I. (2002). Nine steps to move forward from error. Cognition Technology & Work, 4, 137-144.