Douglas W. Hubbard
February 2020
Many people helped me with this book in many ways. Some I have interviewed for this book, some have provided their own research (even some prior to publication), and others have spent time reviewing my manuscript and offering many suggestions for improvement. In particular, I would like to thank Dr. Sam Savage of Stanford University, who has been extraordinarily helpful on all these counts.
Reed Augliere
Jim Dyer
Harry Markowitz
David Bearden
Jim Franklin
Jason Mewis
Christopher “Kip” Bohn
Andrew Freeman
Bill Panning
Andrew Braden
Vic Fricas
Sam Savage
David Budescu
Dan Garrow
John Schuyler
Bob Clemen
John Hester
Yook Seng Kong
Ray Covert
Steve Hoye
Thompson Terry
Dennis William Cox
David Hubbard
David Vose
Tony Cox
Karen Jenni
Stephen Wolfram
Diana Del Bel Belluz
Rick Julien
Peter Alan Smith
Jim DeLoach
Daniel Kahneman
Jack Jones
Robin Dillon-Merrill
Allen Kubitz
Steve Roemerman
Rob Donat
Fiona MacMillan
PART ONE An Introduction to the Crisis
CHAPTER 1 Healthy Skepticism for Risk Management
It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.
—CARL SAGAN
Everything's fine today, that is our illusion.
—VOLTAIRE
What is your single biggest risk? How do you know? These are critical questions for any organization regardless of industry, size, structure, environment, political pressures, or changes in technology. Any attempt to manage risk in these organizations should involve answering these questions.
We need to ask hard questions about new and rapidly growing trends in management methods, especially when those methods are meant to help direct and protect major investments and inform key public policy. The application of healthy skepticism to risk management methods was long past due when I wrote the first edition of this book more than a decade ago.
The first edition of this book came out on the tail end of the Great Recession in 2008 and 2009. Since then, several major events have resulted in extraordinary losses both financially and in terms of human health and safety. Here are just a few:
Deepwater Horizon offshore oil spill (2010)
Fukushima Daiichi nuclear disaster (2011)
Flint, Michigan, water system contamination (starting 2014)
Samsung Galaxy Note 7 battery failures (2016)
Multiple large data breaches (Equifax, Anthem, Target, etc.)
Amtrak derailments/collisions (2018)
Events such as these and other natural, geopolitical, technological, and financial disasters in the beginning of the twenty-first century periodically accelerate (maybe only temporarily) interest in risk management among the public, businesses, and lawmakers. This continues to spur the development of several risk management methods.
The methods to determine risks vary greatly among organizations. Some of these methods—used to assess and mitigate risks of all sorts and sizes—are recent additions in the history of risk management and are growing in popularity. Some are well-established and highly regarded. Some take a very soft, qualitative approach and others are rigorously quantitative. If some of these are better, if some are fundamentally flawed, then we should want to know.
Actually, there is very convincing evidence about the effectiveness of different methods and this evidence is not just anecdotal. As we will see in this book, this evidence is based on detailed measurements in large controlled experiments. Some points about what works are even based on mathematical proofs. This will all be reviewed in much detail but, for now, I will skip ahead to the conclusion. Unfortunately, it is not good news.
I will make the case that most of the widely used methods are not based on any proven theories of risk analysis, and there is no real, scientific evidence that they result in a measurable improvement in decisions to manage risks. Where scientific data do exist, they show that many of these methods fail to account for known sources of error in the analysis of risk or, worse yet, add error of their own.
Most managers would not know what they need to look for to evaluate a risk management method and, more likely than not, can be fooled by a kind of “analysis placebo effect” (more to come on that).1 Even under the best circumstances, where the effectiveness of the risk management method itself was tracked closely and measured objectively, adequate evidence may not be available for some time.
A more typical circumstance, however, is that the risk management method itself has no performance measures at all, even in the most diligent, metrics-oriented organizations. This widespread inability to make the sometimes-difficult differentiation between methods that work and methods that don't work means that ineffectual methods are likely to spread. Once certain methods are adopted, institutional inertia cements them in place with the assistance of standards and vendors that refer to them as “best practices.” Sometimes they are even codified into law. Like a dangerous virus with a long incubation period, methods are passed from company to company with no early indicators of ill effects until it's too late.
The consequences of flawed but widely adopted methods are inevitably severe for organizations making critical decisions. Decisions regarding not only the financial security of a business but also the entire economy and even human lives are supported in large part by our assessment and management of risks. The reader may already start to see the answer to the first question at the beginning of this chapter, “What is your biggest risk?”
The year 2017 was remarkable for safety in commercial air travel. There was not a single fatality worldwide from an accident. Air travel had already been the safest form of travel for decades. Even so, luck had some part to play in the 2017 record, and that luck would not last. That same year, a new variation of the Boeing 737 MAX series passenger aircraft was introduced: the 737 MAX 8. Within twelve months of the initial rollout, well over one hundred MAX 8s were in service.
In 2018 and 2019, two crashes of the MAX 8, totaling 339 fatalities, showed that a particular category of failure was still very possible in air travel. Although the details of the two 737 crashes were still emerging as this book was written, they appear to be an example of a common mode failure. In other words, the two crashes may be linked to the same cause. The term is familiar in systems risk analysis in some areas of engineering, where several failures can share a single underlying cause. This would be like a weak link in a chain, except that the same weak link is part of multiple chains.
I had an indirect connection to another common mode failure in air travel thirty years before this book came out. In July 1989, I was the commander of the Army Reserve unit in Sioux City, Iowa. It was the first day of our two-week annual training, and I had already left for Fort McCoy, Wisconsin, with a small group of support staff. The convoy with the rest of the unit was going to leave that afternoon, about five hours behind us. But just before the main body was ready to depart for annual training, the rest of my unit was deployed for a major local emergency.