Written evidence submitted by Dr Michael Lauder MBE and Professor Nigel Lightfoot CBE (CLL0051)

 

Summary

Identifying lessons to be learned is comparatively easy; making sure those lessons are actually learned is much more difficult, and this step is often forgotten.

We submit our evidence on a systematic approach to learning lessons in the belief that it offers an alternative and more reliable way of learning from the past. We submit our evidence as two individuals who have a long-term professional interest in studying organisational failure and public health. In this submission we would wish to draw the committee’s attention to flaws in the way society learns from the past. We provide examples of these flaws and suggest that anyone conducting an inquiry might like to learn from the past by taking note of them. We suggest that most inquiries deal with complex rather than merely complicated matters and this has consequences for the way recommendations are drawn up. We demonstrate how we use Disaster Incubation Theory as a way of handling this complexity. Finally, we look at how we might use the Pandemic Strategy as a vehicle around which we can collate and disseminate past learning in the hope that we, as a society, will do better next time.

 

Introduction

The premise behind the call for evidence is that we, as a society, should learn lessons from our experience of the Coronavirus. More explicitly, we hope that we can learn from our experience of the COVID19 crisis so that we can ensure that our preparedness for the next pandemic will be better. The question we would ask is, as we seem to have failed to learn from our past experience of pandemics, why should it be different this time?

We submit our evidence in the belief that it offers an alternative and more reliable way of learning from the past. The Inquiry has stated that it wishes to examine “the UK’s prior preparedness for a pandemic”; our expectation is that the intent is to do better next time. We believe that our work has something to add to this debate.

I am Dr Michael Lauder MBE and I work as a private individual researching and writing on organisational failures (and why we do not learn from them). After over 20 years as a military engineer and 10 years of consultancy, in 2008 I undertook doctoral studies at Cranfield where my focus was on the question “why organisations fail”. These studies made me examine how organisations get themselves into trouble, how they try to avoid such situations and what we can do to learn from the past. On the completion of my studies my first book, “It Should Never Happen Again”, published in 2013, examined why we, as a society, fail to learn from inquiries. The book examined the nature and structure of inquiries and the types of recommendations they make, and suggested reasons why they fail to stop organisations repeating the same mistakes time and again. Over the last five years, working alongside a Belgian crisis management consultancy, my research has examined ways in which the barriers to learning from our experience may be overcome. In January 2020 I wrote a critique of the Grenfell Inquiry Part 1 for the Antwerp Fire Brigade outlining why I considered there to be dangerous and false lessons contained within that report; my conclusion was that the basic paradigm used by the inquiry team (which I have labelled the Perfect World paradigm) was not fit for purpose when it comes to learning from the past. My research continues to look for and promote an alternative.

As the COVID19 crisis developed, I considered it to offer a valuable case study as it was developing in real time. This enabled the roles of foresight and hindsight within the learning process to be examined. Aware of the limits of my own expertise, I approached Professor Nigel Lightfoot CBE for his expertise in public health matters. Nigel is a former Director of Emergency Response at the Health Protection Agency and now has his own private company where he continues to focus on emergency preparedness, crisis management and the CBRN terrorism threat.

Our interest in this subject was sparked by a stark inconsistency. In 2019 the Global Health Security Index, produced by The Economist Intelligence Unit in conjunction with the Johns Hopkins Center for Health Security, ranked the UK 2nd out of 195 countries for its pandemic preparedness. However, from the start, the UK ranked high among the countries suffering the most COVID19-related deaths. While there may be some debate about the way these numbers were collected, it is still clear that the UK's response to COVID19 was not as effective as had been hoped. As part of our examination of the reasons behind this discrepancy, we have looked at the preparations made by the UK for handling pandemics. In this case we will be looking at the Department of Health's Pandemic Strategy published in 2011.

Over the course of the summer and as part of our ongoing research, we have watched the development of the crisis through the academic lens of Disaster Incubation Theory (DIT). This theory was first developed by Professor Barry Turner in 1976. While many alternatives have been offered, we feel that DIT still offers the clearest way to segment this complex debate. For a fuller description of how this tool may be used see my 2015 book “In Pursuit of Foresight”. In the simplest terms, DIT divides crises and learning into six stages: (I) a notionally normal starting point; (II) an incubation period, in which unnoticed events accumulate; (III) a precipitating event; (IV) the onset of the crisis; (V) rescue and salvage; and (VI) full cultural readjustment.

For the purpose of this submission, our interest is in Stages I and II: the resetting of the system to a new notionally normal state, and the incubation period that follows.

Why do we not learn?

To make the best of learning opportunities, we need to recognise why we fail to learn from the past. While there is a field of study that looks at organisational learning, its main focus is on how learning is delivered rather than how learning is extracted from experience. In contrast, inquiries may choose to learn from air accident investigation practice, where the focus is on why an accident happened so that issues can be identified and resolved. Their focus is on the future, not the past. Our research into the conduct of public inquiries suggests six issues that should be considered when we try to learn from the past. These are the role of blame, the mixing of hindsight and foresight, the need to see the whole, the structuring of recommendations, the non-equidistancing of recommendations, and looking at the problem the wrong way.

Greater desire to blame than to learn

In general, inquiries have a twofold function. The first is to learn from the past. This is normally the espoused purpose of an inquiry but, in practice, this function is often superseded by the second. The second is to determine who might be at fault for any unwanted events that befell the organisation. This function is easier to accomplish and it is a more natural role of those tasked to conduct such inquiries. Research into practice would suggest the need to separate efforts to learn from the need to allocate responsibility.

Hindsight versus foresight

A frequent mistake made by inquiry teams is to confuse the role and application of hindsight and foresight. This can be clearly seen in my analysis of the Grenfell report. When judging the viability of a course of action it is necessary to determine what was known at the time and to exclude what could only have been known with hindsight. To confuse the two often leads to unrealistic recommendations that do not enhance our learning from the past.

No sense of the whole

Inquiries need to recognise that they are often dealing with complex (non-linear) rather than complicated (linear) systems. In complex systems, changes that may seem minor can have significant effects. It is therefore necessary to try to understand how the whole system interacts before suggesting how it may be changed. There is clear evidence that a recommendation from one inquiry has been the source of a subsequent unwanted event. Whether you think of it as a hypothesis, an academic or lay theory, a systems map or just a Balanced Scorecard “success map”, it is necessary to have an overview of the complete system that you are trying to improve. This enables those making recommendations to have a clearer idea of how the whole comes together and to foresee how even the simplest recommendation may have substantial unintended consequences.

Structuring of Recommendations

The way recommendations are structured often obscures their intent and the remedy they propose. From the evidence gathered for “It Should Never Happen Again”, the most common fault is for the recommendation to present some aspirational statement that does little more than state what everyone recognises as being the intent of the system. Such recommendations do little more than identify a problem that is all too apparent. Many others merely suggest that a problem be examined. Comparatively few recommendations identify a solution to a problem and therefore qualify as learning. If inquiries are to claim to offer learning, they must offer learning that is specific. While much work is still needed on improving the structure of recommendations, a simple test of their utility is to apply the same test as one would for performance indicators.

Non-equidistancing of recommendations

When inquiries list their recommendations they are often presented as if they were standalone actions. This is clearly not the case, as set out above. They are in fact a new input to an already complex system. If the desire to learn is driven by the hope that “it should never happen again”, then the question must be asked: by how much does actioning the recommendation reduce the probability of the event happening again? Not all recommendations are of the same value if learnt. Each recommendation will affect the probability of the unwanted event recurring differently; they may be seen to be at different distances from the solution. There is therefore a fundamental hierarchy of priority in every set of recommendations that is never recognised. This needs to be addressed. However, there is another, even more fundamental, flaw in the recommendations offered. Recommendations are often produced as the “tweaks” necessary to perfect the existing system. This approach is flawed, as discussed next.

Wrong way of looking at the problem (Wrong Paradigm)

Recommendations are often produced as the “tweaks” necessary to perfect the existing system. The fundamental assumption here is that the system or process can be perfected. We would question this assumption. Our work suggests that two distinct world views exist. The first based around “what would be ideal” has been called the Perfect World paradigm. The second based around “the world as it really is” has been labelled Normal Chaos. Let us look at each in turn.

The Perfect World paradigm was first labelled as such in 2013 in the book “It Should Never Happen Again”. The basic proposition of this paradigm is that if we recruit the perfect people, produce perfect plans, train them perfectly, supply them with exactly the right resources (including perfect, unambiguous information) and execute the plan flawlessly (eliminating all slips and lapses), then the desired outcome will be delivered. It is seen as being up to the organisation to create these perfect conditions. Within this paradigm is the belief that individuals should be able to learn, retain and use the knowledge they require perfectly. All of this perfection is supported by having perfect foresight, and individuals should be blamed and punished where they fail to achieve these standards. Embedded within this construct is the desire to remove uncertainty and to control the world around us through the use of logic and the reduction of all problems, no matter how complicated, to a linear format where cause and effect are seen to be directly linked. The label Perfect World paradigm is used to reflect the phrase often heard when discussing failure, that is, “but in a perfect world …”. However, no system can be perfect; all systems will fail in some way.

The term Normal Chaos is an homage to Charles Perrow’s Normal Accident theory which points towards complexity (sometimes referred to as wicked-messy problems) as being a key source of failure. Complex systems are, by their very nature, non-linear. That is, within such systems, everything affects everything else to a greater or lesser extent. This complexity therefore leads to emergence. Emergence is when the complex interactions lead to unexpected results while the system is seen to be operating normally. Another feature of complex systems is disproportionality; this is when small inputs can dramatically change the outcome or when large inputs make little difference. With such systems it is therefore not possible to accurately predict change judged just on the size of the input measure. Finally, complex systems are open; such systems cannot be seen in isolation of the environment within which they operate.

COVID19 provides a good example of normal chaos. The way the disease spread around the world was not linear. The pattern was not a clear progression and could only be established in hindsight. Different unpredictable features of the crisis have appeared over time. It has grown disproportionately: what appeared to be a small local outbreak of the disease has had worldwide ramifications. And finally, we can clearly see that the UK economy is open to influence from around the world and cannot be managed in isolation.  

Within the Normal Chaos paradigm it is accepted that things will go wrong, things will be unknown and we will be forced to learn through trial and error.  The measure of systems is therefore not their ability to operate error free but their ability to achieve their goals despite their imperfections and any missteps taken. This idea is at the heart of thinking on robust and resilient systems.

There are many ways to cut a cake, and there are many ways to see and make sense of the world around us. In both cases, however, it is better to stick with a single approach if you are not to end up with a mess. Few would argue against the perfect being ideal; the question is whether it is possible to achieve. In trying to manage a crisis such as COVID19, should we be driven by the impossible ideal or by a more realistic understanding of the problem? The question for each person to answer is whether they should focus on the ideal or try to see and understand the world as it actually is.

The public debate on the COVID19 crisis is being hampered by the mixing of these two paradigms. Every day we see this happening at the daily briefing. It is common to see journalists basing their questions within the Perfect World paradigm and the respondents answering from the perspective of Normal Chaos. While this ritual dance may suit each party’s purpose, it does little to clarify the real issues. At the least, the watching public should recognise this dance for what it is.

The question for this joint committee is twofold: does it wish to learn from the past in the way it conducts its inquiry, so as not to make the same mistakes as others have before, and how does it see the world?

 Let us now look at the implications for learning about preparedness.  

Preparedness

In our examination to date of the state of pandemic preparedness, we have focused on the UK’s pandemic preparedness strategy (dated 2011) and the lessons learnt from previous pandemics. We see strategy documents as key vehicles for capturing and distributing learning from the past. More precisely, we have focused on the recommendations of the Hine report (2010) and those produced by the Swine Flu Critical Care Clinical Group as examples of learning from the past. We have looked at whether the 2011 strategy refers to each recommendation, we have looked at the nature of those recommendations, and finally, we are looking at the effectiveness of these recommendations. In my previous work I have questioned how inquiries come to their conclusions and formulate their recommendations (see my analysis of the Grenfell report for an example). Our goal for this research is to determine why recommendations may not achieve their intended goals and what might be done to improve the likelihood of success.

In terms of DIT, we see the Hine Report and the Swine Flu Group report as an opportunity to reset the pandemic clock (Stage I) with the production of the 2011 Pandemic Strategy. Stage II would encompass the period between 2011 and the present. In this time the model recognises two forces at work. The first comprises the efforts taken to maintain the strategy as produced and then to enhance it where necessary. The second set of forces are those that undermine the strategy.  This second set of forces may include changes to the environment that make the strategy as written irrelevant or they may be the organisation’s failure to take the action necessary to fulfil the strategy. Our research has shown that greater learning takes place if we focus on these types of ideas and ask what happened and why.

In the Perfect World sought by most inquiries, the organisation would have been expected to take the opportunity to capture perfect knowledge from the past, produce a perfect plan and then implement it perfectly. The fact that these House of Commons Committees are holding these hearings is evidence that this did not happen in this case. The question for the committee is therefore whether this is a fruitful line of inquiry, or whether we should instead be seeking ways to make our imperfect system more robust and resilient.

To this end, we have examined the 2011 strategy in order to identify weaknesses. In 1996 Lee Clarke and Charles Perrow wrote about “prosaic organizational failure”. In this paper they identified what they called “fantasy documents”: organisational plans that are drawn from a quite unrealistic view or model of the organisation and are rarely tested against reality. The fantasy they identify is that everything will work right first time and that every contingency is known and has been prepared for. Our question was whether the 2011 pandemic strategy was a fantasy document.

The first step must be to establish what is expected of a strategy. While this subject is widely debated, we use the definition posited by Hoverstadt and Loh. They say it is "the way forces manoeuvre for advantage against an enemy (competition)." In the case of COVID19, the enemy is clearly the virus. They go on to say "strategy is about using the resources (including time) at your disposal to change your position relative to your environment (changing which structural couplings you have, or the nature of them, or both), so that you can thrive there on your own terms."  Therefore, in terms of COVID19, we would expect any pandemic strategy to describe how the organisation intended to use its resources to cope with a future pandemic.

From our examination of the 2011 pandemic strategy we identified the following weaknesses.

All the issues raised above are to be found within our previous (societal) experience. This must therefore raise the question of why the Department failed to incorporate them into its strategy construction process. From this and other evidence collected, we would suggest that this points to a failure much larger than one limited to this Department and this document. If this committee wishes to learn from the past, it must reconsider how it extracts and disseminates lessons learnt.

Conclusions

In this submission we would wish to draw the committee’s attention to flaws in the way society learns from the past. We provide examples of these flaws and suggest that anyone conducting an inquiry might like to learn from the past by taking note of them. We suggest that most inquiries deal with complex rather than merely complicated matters. We demonstrate how we use Disaster Incubation Theory as a way of handling this complexity. Finally, we look at how we might use the Pandemic Strategy as a vehicle around which we can collate and disseminate past learning in the hope that we, as a society, will do better next time.

To close, we must note that a major factor in why lessons are not learnt is politics. We need only go back to 2005, when the Inquiries Act was going through the Houses of Parliament, and note the comments of Lord Heseltine. He suggested that if you were to have an inquiry, you should first reach your conclusion and then choose your chairman before setting up the inquiry. This was to ensure that the right people are blamed. While we accept that politics is a necessary consideration, we must also be aware of, and guard against, its more malign effects if we truly wish to learn from this resource-intensive process.

 

Nov 2020
