“There are some people who, if they don’t already know, you can’t tell ’em,” as the great philosopher of uncertainty Yogi Berra once said. Do not waste your time trying to fight forecasters, stock analysts, economists, and social scientists, except to play pranks on them. They are remarkably easy to make fun of, and many get angry quite readily. It is ineffective to moan about unpredictability: people will continue to predict foolishly, especially if they are paid for it, and you cannot put an end to institutionalized frauds. If you ever do have to heed a forecast, keep in mind that its accuracy degrades rapidly as you extend it through time.
If you hear a “prominent” economist using the word equilibrium, or normal distribution, do not argue with him; just ignore him, or try to put a rat down his shirt.
The Great Asymmetry
All these recommendations have one point in common: asymmetry. Put yourself in situations where favorable consequences are much larger than unfavorable ones.
Indeed, the notion of asymmetric outcomes is the central idea of this book: I will never get to know the unknown since, by definition, it is unknown. However, I can always guess how it might affect me, and I should base my decisions around that.
This idea is often erroneously called Pascal’s wager, after the philosopher and (thinking) mathematician Blaise Pascal. He presented it something like this: I do not know whether God exists, but I know that I have nothing to gain from being an atheist if he does not exist, whereas I have plenty to lose if he does. Hence, this justifies my belief in God.
Pascal’s argument is severely flawed theologically: one has to be naïve enough to believe that God would not penalize us for false belief. Unless, of course, one is taking the quite restrictive view of a naïve God. (Bertrand Russell was reported to have claimed that God would need to have created fools for Pascal’s argument to work.)
But the idea behind Pascal’s wager has fundamental applications outside of theology. It stands the entire notion of knowledge on its head. It eliminates the need for us to understand the probabilities of a rare event (there are fundamental limits to our knowledge of these); rather, we can focus on the payoff and benefits of an event if it takes place. The probabilities of very rare events are not computable; the effect of an event on us is considerably easier to ascertain (the rarer the event, the fuzzier the odds). We can have a clear idea of the consequences of an event, even if we do not know how likely it is to occur. I don’t know the odds of an earthquake, but I can imagine how San Francisco might be affected by one. This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty. Much of my life is based on it.
You can build an overall theory of decision making on this idea. All you have to do is mitigate the consequences. As I said, if my portfolio is exposed to a market crash, the odds of which I can’t compute, all I have to do is buy insurance, or get out and invest the amounts I am not willing to ever lose in less risky securities.
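The consequence-first rule above can be put in a few lines of code. A minimal sketch, with invented portfolio figures and an assumed total-loss crash scenario (none of these numbers come from the text):

```python
def worst_case(portfolio_value, risky_fraction, crash_loss=1.0):
    """Worst outcome if a crash wipes out crash_loss of the risky portion.

    Note what is absent: the crash *probability*. Only the consequence
    enters the calculation -- the point of the consequence-first rule.
    """
    safe = portfolio_value * (1 - risky_fraction)
    risky_after = portfolio_value * risky_fraction * (1 - crash_loss)
    return safe + risky_after

# Fully exposed: a crash can take everything.
print(worst_case(100_000, risky_fraction=1.0))   # 0.0
# Keep 90% in near-riskless holdings: the floor is 90,000, whatever
# the (incomputable) odds of the crash turn out to be.
print(worst_case(100_000, risky_fraction=0.1))   # 90000.0
```

The design choice is the whole argument: the unknown probability never appears as an input, so the downside is bounded for any value it might take.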
Effectively, if free markets have been successful, it is precisely because they allow the trial-and-error process I call “stochastic tinkering” on the part of competing individual operators who fall for the narrative fallacy—but are effectively collectively partaking of a grand project. We are increasingly learning to practice stochastic tinkering without knowing it—thanks to overconfident entrepreneurs, naïve investors, greedy investment bankers, and aggressive venture capitalists brought together by the free-market system. The next chapter shows why I am optimistic that the academy is losing its power and ability to put knowledge in straitjackets and that more out-of-the-box knowledge will be generated Wiki-style.
In the end we are being driven by history, all the while thinking that we are doing the driving.
I’ll sum up this long section on prediction by stating that we can easily narrow down the reasons we can’t figure out what’s going on. They are: a) epistemic arrogance and our corresponding future blindness; b) the Platonic notion of categories, or how people are fooled by reductions, particularly if they have an academic degree in an expert-free discipline; and, finally, c) flawed tools of inference, particularly the Black Swan–free tools from Mediocristan.
In the next section we will go deeper, much deeper, into these tools from Mediocristan, into the “plumbing,” so to speak. Some readers may see it as an appendix; others may consider it the heart of the book.
* This chapter provides a general conclusion for those who by now say, “Taleb, I get the point, but what should I do?” My answer is that if you got the point, you are pretty much there. But here is a nudge.
* Dan Gilbert showed in a famous paper, “How Mental Systems Believe,” that we are not natural skeptics and that not believing requires an expenditure of mental effort.
* Make sure that you have plenty of these small bets; avoid being blinded by the vividness of one single Black Swan. Have as many of these small bets as you can conceivably have. Even venture capital firms fall for the narrative fallacy with a few stories that “make sense” to them; they do not have as many bets as they should. If venture capital firms are profitable, it is not because of the stories they have in their heads, but because they are exposed to unplanned rare events.
* There is a finer epistemological point. Remember that in a virtuous Black Swan business, what the past did not reveal is almost certainly going to be good for you. When you look at past biotech revenues, you do not see the superblockbuster in them, and owing to the potential for a cure for cancer (or headaches, or baldness, or bad sense of humor, etc.), there is a small probability that the sales in that industry may turn out to be monstrous, far larger than might be expected. On the other hand, consider negative Black Swan businesses. The track record you see is likely to overestimate the properties. Recall the 1982 blowup of banks: they appeared to the naïve observer to be more profitable than they actually were. Insurance companies are of two kinds: the regular diversifiable kind that belongs to Mediocristan (say, life insurance) and the more critical and explosive Black Swan–prone risks that are usually sold to reinsurers. According to the data, reinsurers have lost money on underwriting over the past couple of decades, but, unlike bankers, they are introspective enough to know that it actually could have been far worse, because the past twenty years did not have a big catastrophe, and all you need is one of those per century to kiss the business good-bye. Many finance academics doing “valuation” on insurance seem to have missed the point.

It’s time to deal in some depth with four final items that bear on our Black Swan.
Primo, I have said earlier that the world is moving deeper into Extremistan, that it is less and less governed by Mediocristan—in fact, this idea is more subtle than that. I will show how and present the various ideas we have about the formation of inequality. Secondo, I have been describing the Gaussian bell curve as a contagious and severe delusion, and it is time to get into that point in some depth. Terso, I will present what I call Mandelbrotian, or fractal, randomness. Remember that for an event to be a Black Swan, it does not just have to be rare, or just wild; it has to be unexpected, has to lie outside our tunnel of possibilities. You must be a sucker for it. As it happens, many rare events can yield their structure to us: it is not easy to compute their probability, but it is easy to get a general idea about the possibility of their occurrence. We can turn these Black Swans into Gray Swans, so to speak, reducing their surprise effect. A person aware of the possibility of such events can come to belong to the non-sucker variety.