The problem is that if I am right, Fisher’s textbook, and his colleagues’ textbooks, should be dispensed with. As should almost every prediction method that uses mathematical equations.
I tried to explain the problems of errors in monetary policy under nonlinearities: you keep adding money with no result … until there is hyperinflation. Or nothing. Governments should not be given toys they do not understand.
* The “a priori” I am using here differs from the philosophical “a priori” belief, in the sense that it is a theoretical starting point, not a belief that is nondefeasible by experience.
* Interestingly, the famous paper by Reverend Bayes that led to what we call Bayesian inference did not give us “probability” but expectation (expected average). Statisticians had difficulties with the concept so extracted probability from payoff. Unfortunately, this reduction led to the reification of the concept of probability, its adherents forgetting that probability is not natural in real life.
* The intelligent reader who gets the idea that rare events are not computable can skip the remaining parts of this section, which will be extremely technical. It is meant to prove a point to those who studied too much to be able to see things with clarity.
* This is an extremely technical point (to skip). The problem of the unknown distribution resembles, in a way, Bertrand Russell’s central difficulty in logic with the “this sentence is true” issue—a sentence cannot contain its own truth predicate. We need to apply Tarski’s solution: for every language, a metalanguage will take care of predicates of true and false about that language. With probability, simply, a metaprobability assigns degrees of credence to every probability—or, more generally, a probability distribution needs to be subordinated to a metaprobability distribution giving, say, the probability of a probability distribution being the wrong one. But luckily I have been able to express this with the available mathematical tools. I have played with this metadistribution problem in the past, in my book Dynamic Hedging (1997). I started putting an error rate on the Gaussian (by having my true distribution draw from two or more Gaussians, each with different parameters), leading to nested distributions almost invariably producing some class of Extremistan. So, to me, the variance of the distribution is, epistemologically, a measure of lack of knowledge about the average; hence the variance of variance is, epistemologically, a measure of lack of knowledge about the lack of knowledge of the mean—and the variance of variance is analogous to the fourth moment of the distribution, and its kurtosis, which makes such uncertainty easy to express mathematically. This shows that: fat tails = lack of knowledge about lack of knowledge.
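The footnote’s claim can be checked with a few lines of arithmetic: a mixture of two zero-mean Gaussians with different variances (uncertainty about the variance itself) has kurtosis greater than 3, the signature of fat tails. The sketch below is my own illustration, not code from the book; the parameter values are arbitrary.

```python
# A p/(1-p) mixture of N(0, sigma1^2) and N(0, sigma2^2): when the two
# variances differ, the mixture's kurtosis exceeds the Gaussian value of 3,
# illustrating "fat tails = lack of knowledge about lack of knowledge."

def mixture_kurtosis(sigma1, sigma2, p=0.5):
    """Kurtosis (Pearson, i.e. normal = 3) of a two-Gaussian mixture."""
    var = p * sigma1**2 + (1 - p) * sigma2**2       # mixture variance
    m4 = 3 * (p * sigma1**4 + (1 - p) * sigma2**4)  # mixture fourth moment
    return m4 / var**2

print(mixture_kurtosis(1.0, 1.0))  # equal variances: plain Gaussian, 3.0
print(mixture_kurtosis(1.0, 3.0))  # uncertain variance: kurtosis 4.92 > 3
```

Any nonzero spread between the two variances pushes the kurtosis above 3 (by Jensen’s inequality on the squared variance), which is the sense in which an “error rate on the Gaussian” manufactures Extremistan.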
† A Gaussian distribution is parsimonious (with only two parameters to fit). But the problem of adding layers of possible jumps, each with a different probability, opens up endless possibilities of combinations of parameters.
‡ One of the most common (but useless) comments I hear is that some solutions can come from “robust statistics.” I wonder how using these techniques can create information where there is none.
* One consequence of the absence of “typicality” for an event on causality is as follows: Say an event can cause a “war.” As we saw, such war will still be undefined, since it may kill three people or a billion. So even in situations where we can identify cause and effect, we will know little, since the effect will remain atypical. I had severe problems explaining this to historians (except for Niall Ferguson) and political scientists (except for Jon Elster). Please explain this point (very politely) to your professor of Near and Middle Eastern studies.
VI
THE FOURTH QUADRANT, THE SOLUTION TO THAT MOST USEFUL OF PROBLEMS *
Did Aristotle walk slowly?—Will they follow the principles?—How to manufacture a Ponzi scheme and get credit for it

It is much more sound to take risks you can measure than to measure the risks you are taking.
There is a specific spot on the map, the Fourth Quadrant, in which the problem of induction and the pitfalls of empiricism come alive—the place where, I repeat, absence of evidence does not line up with evidence of absence. This section will allow us to base our decisions on sounder epistemological grounds.
David Freedman, RIP
First, I need to pay homage to someone to whom knowledge has a large debt. The late Berkeley statistician David Freedman, who perhaps better than anyone uncovered the defects of statistical knowledge, and the inapplicability of some of the methods, sent me a farewell gift. He was supposed to be present at the meeting of the American Statistical Association that I mentioned earlier, but canceled because of illness. But he prepared me for the meeting, with a message that changed the course of the Black Swan idea: be prepared; they will provide you with a certain set of self-serving arguments and you need to respond to them. The arguments were listed in his book in a section called “The Modelers’ Response.” I list most of them below.
The Modelers’ Response: We know all that. Nothing is perfect. The assumptions are reasonable. The assumptions don’t matter. The assumptions are conservative. You can’t prove the assumptions are wrong. We’re only doing what everybody else does. The decision-maker has to be better off with us than without us. The models aren’t totally useless. You have to do the best you can with the data. You have to make assumptions in order to make progress. You have to give the models the benefit of the doubt. Where’s the harm?
This gave me the idea of using the approach “This is where your tools work,” instead of the “This is wrong” approach I was using before. The change in style is what earned me the hugs and supply of Diet Coke and helped me get my message across. David’s comments also inspired me to focus more on iatrogenics, harm caused by the need to use quantitative models.
David Freedman passed away a few weeks after the meeting.* Thank you, David. You were there when the Black Swan needed you. May you and your memory rest in peace.
Which brings us to the solution. After all this undecidability, the situation is not dire at all. Why? We, simply, can build a map of where these errors are more severe, what to watch out for.
DECISIONS
When you look at the generator of events, you can tell a priori which environment can deliver large events (Extremistan) and which environment cannot deliver them (Mediocristan). This is the only a priori assumption we need to make. The only one.
So that’s that.
I. The first type of decision is simple, leading to a “binary” exposure: that is, you just care about whether something is true or false. Very true or very false does not bring you additional benefits or damage. Binary exposures do not depend on high-impact events as their payoff is limited. Someone is either pregnant or not pregnant, so if the person is “extremely pregnant” the payoff would be the same as if she were “slightly pregnant.” A statement is “true” or “false” with some confidence interval. (I call these M0 as, more technically, they depend on what is called the zeroth moment, namely on the probability of events, and not on their magnitude—you just care about “raw” probability.) A biological experiment in the laboratory and a bet with a friend about the outcome of a soccer game belong to this category.
Clearly, binary outcomes are not very prevalent in life; they mostly exist in laboratory experiments and in research papers. In life, payoffs are usually open-ended, or, at least, variable.