The Australians had actually built a symbol of the epistemic arrogance of the human race. The story is as follows. The Sydney Opera House was supposed to open in early 1963 at a cost of AU$ 7 million. It finally opened its doors more than ten years later, and, although it was a less ambitious version than initially envisioned, it ended up costing around AU$ 104 million. While there are far worse cases of planning failures (namely the Soviet Union), or failures to forecast (all important historical events), the Sydney Opera House provides an aesthetic (at least in principle) illustration of the difficulties. This opera-house story is the mildest of all the distortions we will discuss in this section (it was only money, and it did not cause the spilling of innocent blood). But it is nevertheless emblematic.
This chapter has two topics. First, we are demonstrably arrogant about what we think we know. We certainly know a lot, but we have a built-in tendency to think that we know a little bit more than we actually do, enough of that little bit to occasionally get into serious trouble. We shall see how you can verify, even measure, such arrogance in your own living room.
Second, we will look at the implications of this arrogance for all the activities involving prediction.
Why on earth do we predict so much? Worse, even, and more interesting: Why don’t we talk about our record in predicting? Why don’t we see how we (almost) always miss the big events? I call this the scandal of prediction.
ON THE VAGUENESS OF CATHERINE’S LOVER COUNT
Let us examine what I call epistemic arrogance , literally, our hubris concerning the limits of our knowledge. Epistēmē is a Greek word that refers to knowledge; giving a Greek name to an abstract concept makes it sound important. True, our knowledge does grow, but it is threatened by greater increases in confidence, which make our increase in knowledge at the same time an increase in confusion, ignorance, and conceit.
Take a room full of people. Randomly pick a number. The number could correspond to anything: the proportion of psychopathic stockbrokers in western Ukraine, the sales of this book during the months with r in them, the average IQ of business-book editors (or business writers), the number of lovers of Catherine II of Russia, et cetera. Ask each person in the room to independently estimate a range of possible values for that number, set in such a way that they believe they have a 98 percent chance of being right and less than a 2 percent chance of being wrong. In other words, whatever they are guessing has about a 2 percent chance of falling outside their range. For example:
“I am 98 percent confident that the population of Rajasthan is between 15 and 23 million.”
“I am 98 percent confident that Catherine II of Russia had between 34 and 63 lovers.”
You can make inferences about human nature by counting how many people in your sample guessed wrong; that number is not expected to be much higher than two out of a hundred participants. Note that the subjects (your victims) are free to set their range as wide as they want: you are not trying to gauge their knowledge but rather their evaluation of their own knowledge.
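If you care to score such a living-room experiment, the bookkeeping is simple: count how many of the stated ranges fail to contain the true value. Here is a minimal sketch of that tally, under assumed inputs; the guests' ranges and the helper name miss_rate are hypothetical, and the only figure taken from the text is the 30,000 volumes of Umberto Eco's library.

```python
# Minimal sketch of scoring the 98 percent calibration game described above.
# The ranges below are invented for illustration; only the true value
# (30,000 books in Umberto Eco's library) comes from the text.

def miss_rate(ranges, true_value):
    """Fraction of stated 98%-confidence ranges that miss the true value."""
    misses = sum(1 for low, high in ranges if not (low <= true_value <= high))
    return misses / len(ranges)

# Hypothetical answers from ten guests; some guess far too low, others far too high.
guesses = [
    (2_000, 4_000), (300_000, 600_000), (5_000, 20_000), (1_000, 8_000),
    (100_000, 250_000), (500, 3_000), (40_000, 80_000), (10_000, 25_000),
    (200_000, 400_000), (4_000, 9_000),
]

print(f"Miss rate: {miss_rate(guesses, 30_000):.0%}")
# Well-calibrated respondents would miss about 2 percent of the time;
# real audiences, as the chapter reports, miss far more often.
```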
Now, the results. Like many things in life, the discovery was unplanned, serendipitous, surprising, and took a while to digest. Legend has it that Alpert and Raiffa, the researchers who noticed it, were actually looking for something quite different, and more boring: how humans figure out probabilities in their decision making when uncertainty is involved (what the learned call calibrating). The researchers came out befuddled. The 2 percent error rate turned out to be close to 45 percent in the population being tested! It is quite telling that the first sample consisted of Harvard Business School students, a breed not particularly renowned for their humility or introspective orientation. MBAs are particularly nasty in this regard, which might explain their business success. Later studies document more humility, or rather a smaller degree of arrogance, in other populations. Janitors and cabdrivers are rather humble. Politicians and corporate executives, alas … I’ll leave them for later.
Are we twenty-two times too comfortable with what we know? It seems so.
This experiment has been replicated dozens of times, across populations, professions, and cultures, and just about every empirical psychologist and decision theorist has tried it on his class to show his students the big problem of humankind: we are simply not wise enough to be trusted with knowledge. The intended 2 percent error rate usually turns out to be between 15 percent and 30 percent, depending on the population and the subject matter.
I have tested myself and, sure enough, failed, even while consciously trying to be humble by carefully setting a wide range—and yet such underestimation happens to be, as we will see, the core of my professional activities. This bias seems present in all cultures, even those that favor humility—there may be no consequential difference between downtown Kuala Lumpur and the ancient settlement of Amioun, (currently) Lebanon. Yesterday afternoon, I gave a workshop in London, and had been mentally writing on my way to the venue because the cabdriver had an above-average ability to “find traffic.” I decided to make a quick experiment during my talk.
I asked the participants to take a stab at a range for the number of books in Umberto Eco’s library, which, as we know from the introduction to Part One, contains 30,000 volumes. Of the sixty attendees, not a single one made the range wide enough to include the actual number (the 2 percent error rate became 100 percent). This case may be an aberration, but the distortion is exacerbated with quantities that are out of the ordinary. Interestingly, the crowd erred on the very high and the very low sides: some set their ranges at 2,000 to 4,000; others at 300,000 to 600,000.
True, someone warned about the nature of the test can play it safe and set the range between zero and infinity; but this would no longer be “calibrating”—that person would not be conveying any information, and could not produce an informed decision in such a manner. In this case it is more honorable to just say, “I don’t want to play the game; I have no clue.”
It is not uncommon to find counterexamples, people who overshoot in the opposite direction and actually overestimate their error rate: you may have a cousin particularly careful in what he says, or you may remember that college biology professor who exhibited pathological humility; the tendency that I am discussing here applies to the average of the population, not to every single individual. There are sufficient variations around the average to warrant occasional counterexamples. Such people are in the minority—and, sadly, since they do not easily achieve prominence, they do not seem to play too influential a role in society.
Epistemic arrogance bears a double effect: we overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states (i.e., by reducing the space of the unknown).
The applications of this distortion extend beyond the mere pursuit of knowledge: just look into the lives of the people around you. Literally any decision pertaining to the future is likely to be infected by it. Our human race is affected by a chronic underestimation of the possibility of the future straying from the course initially envisioned (in addition to other biases that sometimes exert a compounding effect). To take an obvious example, think about how many people divorce. Almost all of them are acquainted with the statistic that between one-third and one-half of all marriages fail, something the parties involved did not forecast while tying the knot. Of course, “not us,” because “we get along so well” (as if others tying the knot got along poorly).