



 (Published 3/6/17 Re Safety Differently)


“Students of patient safety rely on a few foundational models to explain the iatrogenic [i.e. illness due to medical treatment or examination] causes of patient harm. Reason’s classic Swiss cheese model encapsulates the idea that although an organisation such as a hospital has many defences against error (the cheese), once in a while holes in the defences line up to allow an error through. Heinrich’s iceberg model reminds us that while some harm events are reported (the tip), most remain unrecorded because they are relatively minor or do not lead to harm...” (Coiera et al. BMJ 2013;347:f7273).




From the other discussions, the prevailing paradigm today (and for at least the past 25 years) is an expression of the latent condition (Reason 1990/1997/2004). Despite the rhetoric surrounding a ‘new safety paradigm’, an entrenched worldview persists, showing no sign of relinquishing the old.


The latent condition (i.e. the metaphor that compels the Swiss cheese model to find a so-called organisational accident) is the longest-standing representative of an epidemiological (i.e. multi-factor) approach that was first put forward by Gordon (1949) and later utilised by, for instance, Petersen (1971) and Bird (1974). Whilst Bird’s (1974) is probably the more widely known of the organisational models of human error (Wiegmann & Shappell 2003), it embodies the same causal philosophy as Petersen (1971) and Reason (2004). For certain, the prevailing paradigm has nothing to do with Heinrich or his triangle. Indeed, the cited professors’ (and others’) difficulties with prediction and linear cause and effect are of their own making. As with Petersen’s (1971) multiple causation theory and Bird’s (1974) ‘update’ or ‘management failure model’, the latent condition results in an exceptionless causal statement that a) cannot be met (Hart & Honore 2004), b) neither science nor common sense can accept, and c) purports to make the organisation causatively responsible for any accident it experiences. Similarly, the Swiss cheese model (which attempted to combine conflicting approaches to causation) creates the same problems for users as the others. For instance, Petersen’s (1971) multiple causation theory, Bird’s (1974) ‘updated sequence’ and Reason’s (1997) SCM tend to result in causal over-determination (i.e. they encourage, if not compel, multiple organisational failings to be found) which, in turn, creates problems both for the prioritisation of findings and for prediction. Furthermore, the subjectivity promoted and permitted during the ‘search’ for ‘multiple causes’ renders, highly problematically, all accidents dissimilar (Reason 1990; 2004).


In addition to ‘user’ or application problems, problems of philosophy and construction were evidenced by, for example, Reason (1997/2004) a) referencing Hart & Honore’s (2004) ‘mere conditions’, and b) changing the name of his resident pathogen metaphor from latent failure to latent condition (as it happened, a) and b) attempted to correct and excuse the earlier-noted causal statement that the SCM, by way of the latent condition, made and still makes). Other problems were also obvious, and it was Reason (1990) himself who, from the outset, declared his terms to be unacceptably vague and the resident pathogen metaphor to be far from a workable theory. Nonetheless, Reason, Hollnagel & Paries (2006) attempted to defend the Swiss cheese model in the wake of the Überlingen air disaster.


When a paradigm is defended by its supporters or, given John Green’s talk of a “new gospel” in the safety differently article, its ‘disciples’, the tendency is not to see, or to ignore, the contrary data (Maag 2001); i.e. a ‘blinding’ can occur (Wilkinson 2006). Alternatively, the contrary data are seen but dismissed as irrelevant because they do not conform to the existing paradigm (Weingand 1998). This problem, known as the paradigm effect, results in ‘paradigm paralysis’ (Barker 1992), resistance to change (Maag 2016) and, hence, the status quo (or, perhaps, business as usual). In some cases, it has been associated with a lack of resilience (Wilkinson 2006), inflexibility (Maag 1999) and breaches in ethical behaviour (Robinson-Easley 2017). Unfortunately, the effect is ‘assisted’ by the fact that the prevailing paradigm tends to determine not only the questions that are asked but also the kinds of data that are considered relevant and the ways in which they will be analysed and interpreted (Barber 2013). Nonetheless, the historic problem of recurrent and seemingly unpredictable accidents and disasters signifies a paradigm in true crisis (indeed, given Reason’s declarations above, the models emerging from the general philosophy have been in drift and crisis (Kuhn 1962) from the outset).


Whilst a move away from all things multi-factorial or latent would be a revolution in the Kuhnian (paradigm) sense, the only change being advocated by the ‘prevailing school’ is for others to look at the problem from a different perspective (Hollnagel et al. 2016), which is odd since a paradigm shift presumes that the ‘scientist’, him or herself, has adopted a different perception or view of the world or some part of it (Johnson 2010). True, an important ingredient in revolution is a change in perspective, but the sort of change being advocated does not constitute one (Weinert 2009). Weinert and many others (e.g. Deutsch 1991; Summers 1992) also see ‘explanatory gain’ as essential, but aspects (e.g. complexity, emergence) of the ‘new’ vocabulary allow nothing to be provided there either. On the contrary, all that has been offered is an alternative causal description (not an explanation) of the ‘problem’. Indeed (and noting that Dekker and Hollnagel make it clear that the old should remain), not only is explanatory gain absent, there is no explanatory loss either.


Despite Kuhn (1962) having used the word paradigm in more than 20 different ways (Masterman, in Lakatos & Musgrave 1970a), supporters of his work might say that a new paradigm cannot be built on the old, nor pronounced by virtue of a small change in a science or a modified theory. They might also offer that the new must supplant the old because a) the two are incommensurable, and b) the fundamental assumptions of the old are rejected. Indeed, they might offer that to do things differently would be to “reject science itself”. Alternatively, could Kuhn’s most militant objectors accept safety differently and its underlying philosophy as a new paradigm? Perhaps Masterman (in Lakatos & Musgrave 1970a), a critic of Kuhn with whom Kuhn (1970a) later agreed, can assist where she says that a fundamental paradigm...


“must be a construct, an artefact, a system, a tool; together with the manual of instructions for using it successfully and a method of interpretation of what it does.” 



On the other hand, Kuhn (2000) also recounts Masterman as saying...


“a paradigm is what you use when the theory isn’t there”.




Interim Comment:

With nothing changing, and with neither explanatory loss nor gain, is safety differently closer to a synonym than a paradigm? If so, have disciples of “the new gospel” succumbed to what Taubes (2008) has called a pseudo-scientific enterprise, i.e. an enterprise “that purports to be a science and yet functions like a religion”? As it stands, safety differently might be just another fad or ‘fashion’, as opposed to a fundamentally new approach (Potthast 2009) to which its supporters are committed. Indeed, having criticised linear cause and effect, Hollnagel (2014, emphasis added) says “this leaves emergence as the only alternative principle of explanation that is possible, at the moment at least”.



Ahead of the Main Summary

Whilst the last line above concludes that discussion for now, the following paragraphs begin to recap and further highlight certain aspects of our various discussions ahead of the main summary.


Whilst Professor Hollnagel’s last line above might seem less than fully committed to some, it might also seem a little odd given that emergence has been part of the ‘old’ philosophy since the mid-1980s (Hollnagel 2004; 2016).


According to the prevailing paradigm, accidents are emergent properties of systems (Dekker, Hollnagel, Woods & Cook 2008) and so too is “safety” (Cook 1998). Those sentiments were also expressed as “outcomes emerge from human performance variability, which is the source of both acceptable and adverse outcomes” (Hollnagel, Wears & Braithwaite 2015); and, “safety, as well as failure, is an emergent property of a system trying to succeed” (Dekker 2001).


Elsewhere, our discussions saw how the above views on causation were put in the 1970s. For instance, Petersen (1971) offered that unsafe acts/conditions and accidents were all “symptoms of something wrong in the management system”. Later, regarding the same, Bird (1974) would offer that “there is one important thing common to all. Each and every one is only a symptom of the basic cause that permitted the practices or conditions to exist”.



Also from the various discussions, we saw how the cited professors have held a ‘Petersen/Bird type’ philosophy since well back in the last century (the opening paragraph by Coiera et al. adds further weight to that suggestion). However, Dekker (2006) offers that the “new view” is that “human error is a symptom of trouble deeper inside a system” (note: Dekker’s (2006) later focus on Amalberti, already referenced elsewhere in our discussions, makes it clear that he (i.e. Dekker) and Petersen are talking about one and the same thing as regards ‘a system’ and ‘the management system’; also see Reason (2013)).


Elsewhere in our discussions, we saw how Petersen’s (1971) rejection of Heinrich’s common cause hypothesis was erroneous (Davies et al. 2003). Similarly, Dekker and Hollnagel’s view (the professors are cited purely for reference) that the common cause hypothesis is wrong was also rejected (that said, it is noted that Dekker (2015) now believes the common cause hypothesis to be only “probably wrong” at, or beyond, Amalberti’s (2001) 10⁻⁷). However, those rejections, and the elsewhere-highlighted discrepancies in the cited professors’ views on the CCH, should be further considered alongside, not least, the penultimate paragraph here.


Elsewhere, these discussions have suggested that notions such as emergence, resilience and complexity are one and the same, and that an end result is the retention of the old philosophy and certain aspects of its vocabulary (e.g. symptom, latent condition, organisational accident). Reason (2013) makes it clear that resilience refers to (as he puts it) the “safety health” of complex technological systems, and Dekker et al. (2008) offer that an “important concept in resilience is that of emergence”. Dekker et al. then discuss how, according to them, things such as interaction, feedback and cross-adaptation result in “far more complex behaviour” of a collective and, hence, its individuals. They then offer that such effects (i.e. “simple things” generating “complex outcomes”) are “impossible” to describe by way of linear cause and effect and can only be understood by way of “complexity theory”. For them, an organisation is seen as a “living system” from a “systems thinking” or “systems perspective”. However, Reason (2004) believes that it is “now recognised” that the “reasons” for errors and violations in “complex systems” are “latent conditions”. Furthermore, Reason (2008) says that whilst “unsafe acts themselves are frequently unpredictable”, the “latent conditions that give rise to them are evident before the event”.


With the above in mind, and recalling that Hollnagel is one of the authors of Dekker et al. (2008), Hollnagel (2014) believes that “Emergence can be found in all types of events, not only in the serious ones”. He continues: the “reason that it is more easily noted in serious ones is that they are too complicated for linear explanations to be possible... Emergence is nevertheless also present in many events that are less serious, but is usually missed - or avoided - because we only really put an effort into analysing the serious ones. Emergence thus forces its way to the front, so to speak, when it is impossible to find an acceptable single (or root) cause” (Hollnagel 2014). Of course, the strategy employed by many (e.g. Hollnagel, Dekker and Reason) ensures that a single cause is never found (Davies et al. 2003). Indeed, it has been said that there is a “dogmatic insistence” on finding latent conditions (Young et al. 2004) even though “a condition is not a cause” (Reason, Hollnagel & Paries 2006).


Finally here, the opening paragraph by Coiera et al. (2013) above makes a clear, uncomplicated statement regarding Heinrich’s (1931) triangle (noting, of course, that they refer to it as an ‘iceberg model’, as per Hollnagel and Reason). Also, it should be clear that an obviously linear and sequential description of an accident emerges from Coiera et al.’s discussion of Reason’s (1990/1997) Swiss cheese model. Recall also that they refer to Reason’s Swiss cheese model as a “foundational” model.



The Institute of Industrial Accident Investigators. All rights reserved.