The point is mathematically simple, but does not register easily. I’ve enjoyed giving graduate students in mathematics the following quiz (to be answered intuitively, on the spot). In a Gaussian world, the probability of exceeding one standard deviation is around 16 percent. What are the odds of exceeding it under a distribution of fatter tails (with the same mean and variance)? The right answer: lower, not higher—the number of deviations drops, but the few that take place matter more. It was puzzling to see that most graduate students got it wrong.
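A minimal sketch of the quiz’s arithmetic, assuming a Student-t with three degrees of freedom rescaled to unit variance as the stand-in for “fatter tails” (that particular distribution is my choice for illustration, not one specified here):

```python
# Same mean and variance, different tails: the fat-tailed distribution
# exceeds one standard deviation LESS often than the Gaussian, but its
# exceedances are larger on average.
from scipy import stats
import numpy as np

# Gaussian: mean 0, variance 1
p_gauss = stats.norm.sf(1.0)                      # P(X > 1 sd), about 0.16

# Student-t with nu = 3 has variance nu / (nu - 2) = 3;
# rescale it so its variance is also 1
nu = 3
scale = np.sqrt((nu - 2) / nu)
p_fat = stats.t.sf(1.0 / scale, df=nu)            # P(X > 1 sd), about 0.09

# Average size of an exceedance, estimated by simulation
rng = np.random.default_rng(0)
x_gauss = rng.standard_normal(1_000_000)
x_fat = stats.t.rvs(df=nu, scale=scale, size=1_000_000, random_state=rng)
exc_gauss = x_gauss[x_gauss > 1.0].mean()         # about 1.5
exc_fat = x_fat[x_fat > 1.0].mean()               # noticeably larger

print(f"P(>1 sd)   Gaussian {p_gauss:.3f}   fat-tailed {p_fat:.3f}")
print(f"E[X|X>1]   Gaussian {exc_gauss:.2f}    fat-tailed {exc_fat:.2f}")
```

The frequency of one-sigma exceedances falls while the average size of an exceedance rises, which is exactly the asymmetry the quiz probes.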
Back to stress testing again. At the time of writing, the U.S. government is having financial institutions stress-tested by assuming large deviations and checking the results against the capitalization of these firms. But the problem is, Where did they get the numbers? From the past? This is so flawed, since the past, as we saw, is no indication of future deviations in Extremistan. This comes from the atypicality of extreme deviations. My experience of stress testing is that it reveals little about the risks—but the risks can be used to assess the degree of model error.
Psychology of Perception of Deviations
Fragility of Intuitions About the Typicality of the Move. Dan Goldstein and I ran a series of experiments about the intuitions of agents concerning such conditional expectations. We posed questions of the following sort: What is the average height of humans who are taller than six feet? What is the average weight of people heavier than 250 pounds? We tried with a collection of variables from Mediocristan, including the above-mentioned height and weight, to which we added age, and we asked participants to guess variables from Extremistan, such as market capitalization (what is the average size of companies with capitalization in excess of $5 billion?) and stock performance. The results show that, clearly, we have good intuitions when it comes to Mediocristan, but horribly poor ones when it comes to Extremistan—yet economic life is almost all Extremistan. We do not have good intuition for that atypicality of large deviations. This explains both foolish risk taking and how people can underestimate opportunities.
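A hedged illustration of the same asymmetry in conditional expectations, with made-up distributions rather than the experiment’s data: a Gaussian for height and a Pareto with a tail exponent near 1 for market capitalization (both parameter choices are assumptions of the sketch):

```python
# E[X | X > k] behaves very differently in Mediocristan and Extremistan.
import numpy as np

rng = np.random.default_rng(1)

# Mediocristan: adult height, roughly Gaussian (inches)
height = rng.normal(loc=69.0, scale=3.0, size=1_000_000)
tall = height[height > 72.0]                           # taller than six feet
print(f"avg height given > 6 ft : {tall.mean():.1f} in")   # just above 72

# Extremistan: firm size, classical Pareto with tail exponent 1.1 ($ billions)
alpha = 1.1
cap = rng.pareto(alpha, size=1_000_000) + 1.0          # minimum of 1
big = cap[cap > 5.0]                                   # capitalization above $5 billion
print(f"avg cap given > $5 bn   : {big.mean():.0f} bn")     # a large multiple of 5
```

In the first case the conditional average sits barely above the cutoff; in the second it lands an order of magnitude beyond it, which is the intuition people fail to carry from Mediocristan into Extremistan.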
Framing the Risks. Mathematically equivalent statements, I showed earlier with my example of survival rates, are not psychologically so. Worse, even professionals are fooled and base their decisions on their perceptual errors. Our research shows that the way a risk is framed sharply influences people’s understanding of it. If you say that, on average, investors will lose all their money every thirty years, they are more likely to invest than if you tell them they have a 3.3 percent chance of losing a certain amount every year.
The same is true of airplane rides. We have asked experimental participants: “You are on vacation in a foreign country and are considering flying a local airline to see a special island. Safety statistics show that, if you fly once a year, there will be on average one crash every thousand years on this airline. If you don’t take the trip, it is unlikely you’ll visit this part of the world again. Would you take the flight?” All the respondents said they would. But when we changed the second sentence so it read, “Safety statistics show that, on average, one in a thousand flights on this airline have crashed,” only 70 percent said they would take the flight. In both cases, the chance of a crash is 1 in 1,000; the latter formulation simply sounds more risky.
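A small arithmetic check, under the stated numbers, that each pair of framings describes the same risk over a different horizon:

```python
# Airplane example: per-flight odds versus a once-a-year horizon
p_per_flight = 1 / 1000                    # "one in a thousand flights have crashed"
flights_per_year = 1
years_per_crash = 1 / (p_per_flight * flights_per_year)
print(years_per_crash)                     # 1000.0 -> "one crash every thousand years"

# Investment example: a 30-year horizon versus an annual probability
p_per_year = 1 / 30                        # "lose everything every thirty years", on average
print(round(p_per_year * 100, 1))          # 3.3 -> "a 3.3 percent chance every year"
```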
THE PROBLEM OF INDUCTION AND CAUSATION IN THE COMPLEX DOMAIN
What Is Complexity? I will simplify here with a functional definition of complexity—among many more complete ones. A complex domain is characterized by the following: there is a great degree of interdependence among its elements, temporal (a variable depends on its past changes), horizontal (variables depend on one another), and diagonal (variable A depends on the past history of variable B). As a result of this interdependence, mechanisms are subjected to positive, reinforcing feedback loops, which cause “fat tails.” That is, they prevent the working of the Central Limit Theorem that, as we saw in Chapter 15, establishes Mediocristan thin tails under summation and aggregation of elements and causes “convergence to the Gaussian.” In lay terms, moves are exacerbated over time instead of being dampened by counterbalancing forces. Finally, we have nonlinearities that accentuate the fat tails.
So, complexity implies Extremistan. (The opposite is not necessarily true.)
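A sketch of the mechanism using a toy model of my own (a GARCH-flavoured volatility feedback, not anything specified in the text): independent shocks aggregate toward the Gaussian, as the Central Limit Theorem promises, while shocks whose size feeds back on past moves keep their fat tails under aggregation.

```python
# Compare the tails of aggregated shocks with and without reinforcing feedback.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_steps, n_paths = 250, 20_000

# No interdependence: i.i.d. shocks, the CLT dampens the sum toward the Gaussian
iid = rng.standard_normal((n_paths, n_steps))
sums_iid = iid.sum(axis=1)

# Reinforcing feedback: today's volatility grows with yesterday's move
shocks = np.zeros((n_paths, n_steps))
vol = np.ones(n_paths)
for t in range(n_steps):
    shocks[:, t] = vol * rng.standard_normal(n_paths)
    vol = np.sqrt(0.1 + 0.2 * shocks[:, t] ** 2 + 0.75 * vol ** 2)
sums_fb = shocks.sum(axis=1)

# Excess kurtosis of the aggregates: near 0 for i.i.d., clearly positive with feedback
print(f"i.i.d. sums   : {stats.kurtosis(sums_iid):+.2f}")
print(f"feedback sums : {stats.kurtosis(sums_fb):+.2f}")
```

The aggregation that tames the first process does much less to the second, which is the sense in which reinforcing feedback prevents the Central Limit Theorem from doing its thinning work.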
As a researcher, I have focused only on the Extremistan element of complexity theory, ignoring the other elements except as a backup for my considerations of unpredictability. But complexity has other consequences for the conventional analyses, and for causation.
Induction
Let us look again, from a certain angle, at the problem of “induction.” It becomes one step beyond archaic in a modern environment, making the Black Swan problem even more severe. Simply, in a complex domain, the discussion of induction versus deduction becomes too marginal to the real problems (except for a limited subset of variables, even then); the entire Aristotelian distinction misses an important dimension (similar to the one discussed earlier concerning the atypicality of events in Extremistan). Even other notions such as “cause” take on a different meaning, particularly in the presence of circular causality and interdependence. The probabilistic equivalent is the move from a conventional random walk model (with a random variable moving in a fixed terrain and not interacting with other variables around it) to percolation models (where the terrain itself is stochastic, with different variables acting on one another).
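A toy contrast along the lines of that last remark, not a percolation model proper: in the first walk the terrain is fixed; in the second the step scale is itself driven by another random process, so the variable moves through a shifting environment (all parameters are arbitrary choices for illustration).

```python
# Fixed terrain versus stochastic terrain: the second produces far wilder moves.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Fixed terrain: ordinary random walk with a constant step scale
fixed_steps = rng.standard_normal(n)

# Stochastic terrain: the (log) step scale follows its own random walk,
# so the walker interacts with a moving environment
log_scale = np.cumsum(0.02 * rng.standard_normal(n))
stochastic_steps = np.exp(log_scale) * rng.standard_normal(n)

print("largest single move, fixed terrain     :", round(float(np.abs(fixed_steps).max()), 1))
print("largest single move, stochastic terrain:", round(float(np.abs(stochastic_steps).max()), 1))
```

The first walk’s largest move stays within a few step sizes; the second’s can be far larger, purely because the environment itself moves.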
Driving the School Bus Blindfolded
Alas, at the time of writing, the economics establishment is still ignorant of the presence of complexity, which degrades predictability. I will not get too involved in my outrage—instead of doing a second deserto, Mark Spitznagel and I are designing another risk management program to robustify portfolios against model error, mostly stemming from the government’s error in the projection of deficits, which leads to excessive borrowing and possible hyperinflation.
I was once at the World Economic Forum in Davos; at one of my sessions, I illustrated interdependence in a complex system and the degradation of forecasting with the following scheme: unemployment in New York triggered by Wall Street losses, percolating and generating unemployment in, say, China, then percolating back into unemployment in New York, is not analyzable analytically, because the feedback loops produce monstrous estimation errors. I used the notion of “convexity,” a disproportionate nonlinear response stemming from a variation in input (as the tools for measuring error rates go out of the window in the presence of convexity). Stanley Fischer, the head of the central bank of Israel, former IMF hotshot, co-author of a classic macroeconomics textbook, came to talk to me after the session to critique my point about such feedback loops causing unpredictability. He explained that we had input-output matrices that were good at calculating such feedbacks, and he cited work honored by the “Nobel” in economics. The economist in question was, I presume, Wassily Leontief. I looked at him with the look “He is arrogant, but does not know enough to understand that he is not even wrong” (needless to say, Fischer was one of those who did not see the crisis coming). It was hard to get the message across that, even if econometric methods could track the effects of feedback loops in normal times (natural, since errors are small), such models said nothing about large disturbances. And I will repeat, large disturbances are everything in Extremistan.
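A minimal sketch of the convexity point, with a toy convex response of my own choosing rather than any model discussed at Davos: under convexity, a symmetric error in the input becomes a one-sided, disproportionately growing error in the output, which is why error rates calibrated in normal times say little about large disturbances.

```python
# Jensen's inequality in miniature: convex response + input uncertainty
# = a one-sided output bias that grows much faster than the input error.
import numpy as np

def response(x):
    # any convex response will do for the illustration
    return x ** 2

rng = np.random.default_rng(4)
x_true = 10.0

for input_sd in (0.1, 1.0, 3.0):
    x_seen = x_true + rng.normal(0.0, input_sd, size=1_000_000)
    bias = response(x_seen).mean() - response(x_true)
    print(f"input sd {input_sd:>4}: average output error {bias:8.2f}")
```

Here the output bias grows with the square of the input error, so a thirtyfold increase in input uncertainty produces roughly a nine-hundredfold increase in the bias, the kind of blowup that linear, normal-times bookkeeping never registers.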