This point—that things have a bias to appear more stable and less risky in the past, leading us into surprises—needs to be taken seriously, particularly in the medical field. The history of epidemics, narrowly studied, does not suggest the risks of the great plague to come that will dominate the planet. I am also convinced that in doing what we are doing to the environment, we greatly underestimate the potential instability we will experience somewhere from the cumulative damage we have done to nature.
One illustration of this point is playing out just now. At the time of writing, the stock market has proved much, much riskier than innocent retirees were led to believe from historical discourses showing a hundred years of data. It is down close to 23 percent for the decade ending in 2010, while the retirees were told by finance charlatans that it was expected to rise by around 75 percent over that time span. This has bankrupted many pension plans (and the largest car company in the world), for they truly bought into that “empirical” story—and of course it has caused many disappointed people to delay their retirement. Consider that we are suckers and will gravitate toward those variables that are unstable but that appear stable.
Preasymptotics. Let us return to Platonicity with a discussion of preasymptotics, what happens in the short term. Theories are, of course, a bad thing to start with, but they can be worse when they are derived in idealized conditions, the asymptote, and then used outside the asymptote (its limit, say infinity or the infinitesimal). Mandelbrot and I showed how some asymptotic properties do work well preasymptotically in Mediocristan, which is why casinos do well; matters are different in Extremistan.
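The preasymptotic gap can be sketched numerically. The Monte Carlo below is an illustrative assumption of mine, not from the text: its parameters (thirty draws per sample, a Pareto tail exponent of 2.5, the seed) are arbitrary choices. It asks how often the standardized mean of thirty fat-tailed draws lands beyond three standard errors, where the Gaussian asymptote promises roughly 0.135 percent:

```python
import random

def tail_fraction(sampler, n, trials, mu, sigma, z=3.0):
    """Fraction of standardized sample means exceeding z standard errors.
    The Gaussian asymptote predicts about 0.00135 for z = 3."""
    hits = 0
    for _ in range(trials):
        mean = sum(sampler() for _ in range(n)) / n
        if (mean - mu) / (sigma / n ** 0.5) > z:
            hits += 1
    return hits / trials

random.seed(1)
alpha = 2.5                                   # fat tail, but variance still finite
mu = alpha / (alpha - 1)                      # Pareto mean, scale 1
sigma = (alpha / ((alpha - 1) ** 2 * (alpha - 2))) ** 0.5  # Pareto standard deviation

frac = tail_fraction(lambda: random.paretovariate(alpha),
                     n=30, trials=20_000, mu=mu, sigma=sigma)
print(f"P(standardized mean of 30 draws > 3) ≈ {frac:.2%}, vs 0.13% asymptotically")
```

With only thirty draws, the fat right tail beats the asymptotic figure by a wide margin; the Gaussian limit is real, but we do not live at the limit.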
Most statistical education is based on these asymptotic, Platonic properties, yet we live in the real world, which rarely resembles the asymptote. Statistical theorists know it, or claim to know it, but not your regular user of statistics who talks about “evidence” while writing papers. Furthermore, this compounds what I called the ludic fallacy: most of what students of mathematical statistics do is assume a structure similar to the closed structures of games, typically with a priori known probability. Yet the problem we have is not so much making computations once you know the probabilities, but finding the true distribution for the horizon concerned. Many of our knowledge problems come from this tension between a priori and a posteriori.
Proof in the Flesh
There is no reliable way to compute small probabilities. I argued philosophically the difficulty of computing the odds of rare events. Using almost all available economic data—and I used economic data because that’s where the clean data was—I showed the impossibility of computing from the data the measure of how far away from the Gaussian one was. There is a measure called kurtosis that the reader does not need to bother with, but that represents “how fat the tails are,” that is, how much rare events play a role. Well, often, with ten thousand pieces of data, forty years of daily observations, one single observation represents 90 percent of the kurtosis! Sampling error is too large for any statistical inference about how non-Gaussian something is, meaning that if you miss a single number, you miss the whole thing. The instability of the kurtosis implies that a certain class of statistical measures should be totally disallowed. This proves that everything relying on “standard deviation,” “variance,” “least square deviation,” etc., is bogus.
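The fragility of the kurtosis is easy to reproduce. The sketch below uses illustrative assumptions of mine (a Pareto tail exponent of 3, a Gaussian as the thin-tailed stand-in, a fixed seed) and measures the share of the sample fourth moment owned by the single largest observation in ten thousand points:

```python
import random

def max_kurtosis_share(xs):
    """Fraction of the sample fourth moment owed to the single largest observation."""
    m = sum(xs) / len(xs)
    fourth = [(x - m) ** 4 for x in xs]
    return max(fourth) / sum(fourth)

random.seed(7)
n = 10_000  # roughly forty years of daily observations

thin = [random.gauss(0.0, 1.0) for _ in range(n)]    # Mediocristan stand-in
fat = [random.paretovariate(3.0) for _ in range(n)]  # fat-tailed stand-in

print(f"thin tails: largest point owns {max_kurtosis_share(thin):.1%} of the kurtosis")
print(f"fat tails:  largest point owns {max_kurtosis_share(fat):.1%} of the kurtosis")
```

In the thin-tailed sample no single point matters; in the fat-tailed one, a lone observation carries a large chunk of the fourth moment, so the estimate swings wildly from sample to sample.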
Further, I also showed that it is impossible to use fractals to get acceptably precise probabilities—simply because a very small change in what I called the “tail exponent” in Chapter 16, coming from observation error, would make the probabilities change by a factor of 10, perhaps more.
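To see the sensitivity, take a pure power law with survival function P(X > x) = x^(−α). The specific numbers below are illustrative choices of mine (an exponent misestimated as 2.8 instead of 3.0, evaluated at a deviation 10^5 scale units out):

```python
def tail_prob(x, alpha):
    """Survival function of a pure power law with unit scale: P(X > x) = x**(-alpha)."""
    return x ** (-alpha)

x = 10.0 ** 5                       # a deviation 100,000 scale units out
p_est = tail_prob(x, alpha=2.8)     # exponent misestimated by 0.2
p_true = tail_prob(x, alpha=3.0)

print(f"probability ratio from a 0.2 error in the exponent: {p_est / p_true:.1f}")
```

The ratio is x^0.2 = 10 exactly: an observation error of 0.2 in the exponent, well within realistic estimation noise, moves the remote-tail probability by an order of magnitude.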
Implication: the need to avoid exposure to small probabilities in a certain domain. We simply cannot compute them.
FALLACY OF THE SINGLE EVENT PROBABILITY
Recall from Chapter 10, with the example of the behavior of life expectancy, that the conditional expectation of additional life drops as one advances in age (as you get older you are expected to live a smaller number of years; this comes from the fact that there is an asymptotic “soft” ceiling to how old a human can get). Expressing it in units of standard deviations, the conditional expectation of a Mediocristani Gaussian variable, conditional on it being higher than a threshold of 0, is .8 (standard deviations). Conditional on it being higher than a threshold of 1, it will be 1.52. Conditional on it being higher than 2, it will be 2.37. As you see, the two numbers should converge to each other as the deviations become large, so conditional on it being higher than 10 standard deviations, a random variable will be expected to be just 10.
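These conditional expectations follow from the inverse Mills ratio, E[X | X > a] = φ(a)/(1 − Φ(a)) for a standard Gaussian; a few lines reproduce the figures above (the function names are mine):

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def survival(x):
    """P(X > x) for a standard normal; erfc stays accurate far into the tail."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def cond_mean_above(a):
    """E[X | X > a] for a standard Gaussian (the inverse Mills ratio)."""
    return phi(a) / survival(a)

for a in (0, 1, 2, 10):
    print(f"beyond {a:>2} standard deviations, expect about {cond_mean_above(a):.2f}")
```

Note the use of erfc rather than 1 − erf: in double precision, erf saturates to 1 well before ten standard deviations, and the naive form would divide by zero.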
In Extremistan, things work differently. The conditional expectation of an increase in a random variable does not converge to the threshold as the variable gets larger. In the real world, say with stock returns (and all economic variables), conditional on a loss being worse than 5 units, using any unit of measure (it makes little difference), it will be around 8 units. Conditional on a move being more than 50 units, it should be around 80 units, and if we go all the way until the sample is depleted, the average move worse than 100 units is 250 units! This extends to all areas in which I found sufficient samples. This tells us that there is “no” typical failure and “no” typical success. You may be able to predict the occurrence of a war, but you will not be able to gauge its effect! Conditional on a war killing more than 5 million people, it should kill around 10 million (or more). Conditional on it killing more than 500 million, it would kill a billion (or more, we don’t know). You may correctly predict that a skilled person will get “rich,” but, conditional on his making it, his wealth can reach $1 million, $10 million, $1 billion, $10 billion—there is no typical number. We have data, for instance, for predictions of drug sales, conditional on getting things right. Sales estimates are totally uncorrelated to actual sales—some drugs that were correctly predicted to be successful had their sales underestimated by up to 22 times.
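The contrast has a closed form: for a Pareto tail with exponent α > 1, E[X | X > a] = aα/(α − 1), so the conditional expectation scales with the threshold instead of converging to it. The exponent below, 5/3, is an illustrative choice of mine, picked so the multiplier is 2.5 as in the "100 units implies 250" figure; the empirical series discussed in the text need not follow a single exponent:

```python
def pareto_cond_mean(a, alpha):
    """E[X | X > a] for a Pareto tail with exponent alpha > 1: a * alpha / (alpha - 1)."""
    return a * alpha / (alpha - 1.0)

alpha = 5.0 / 3.0   # hypothetical exponent giving a constant multiplier of 2.5
for a in (5, 50, 100, 500):
    print(f"conditional on exceeding {a}, expect about {pareto_cond_mean(a, alpha):.0f}")
```

However far out you condition, the expected overshoot stays proportional to the threshold—exactly the "no typical event" property, and the opposite of the Gaussian case, where the ratio falls to one.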
This absence of “typical” events in Extremistan is what makes something called prediction markets (in which people are assumed to make bets on events) ludicrous, as they consider events to be binary. “A war” is meaningless: you need to estimate its damage—and no damage is typical. Many predicted that the First World War would occur, but nobody really predicted its magnitude. One of the reasons economics does not work is that the literature is almost completely blind to this point.
Accordingly, Ferguson’s methodology (mentioned in Chapter 1) in looking at the prediction of events as expressed in the price of war bonds is sounder than simply counting predictions, because a bond, reflecting the costs to the governments involved in a war, is priced to cover the probability of an event times its consequences, not just the probability of an event. So we should not focus on whether someone “predicted” an event without his statement having consequences attached to it.
Associated with the previous fallacy is the mistake of thinking that my message is that these Black Swans are necessarily more probable than assumed by conventional methods. They are mostly less probable, but have bigger effects. Consider that, in a winner-take-all environment, such as the arts, the odds of success are low, since there are fewer successful people, but the payoff is disproportionately high. So, in a fat-tailed environment, rare events can be less frequent (their probability is lower), but they are so powerful that their contribution to the total pie is more substantial.
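The arithmetic behind "less frequent but more substantial" is the standard Pareto share formula: the top fraction p of a Pareto(α) population holds p^(1 − 1/α) of the total. The exponent 1.16 below is a hypothetical value of mine, chosen because it roughly reproduces the classic 80/20 rule:

```python
def top_share(p, alpha):
    """Fraction of the total held by the top fraction p of a Pareto(alpha)
    population; the closed form is p ** (1 - 1/alpha)."""
    return p ** (1.0 - 1.0 / alpha)

alpha = 1.16   # hypothetical exponent, close to the classic 80/20 rule
print(f"top 20% hold {top_share(0.20, alpha):.0%} of the total")
print(f"top  1% hold {top_share(0.01, alpha):.0%} of the total")
```

Under this exponent the top fifth holds about 80 percent and the top hundredth more than half: the rare winners are few, but their contribution to the pie dominates.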