* Medieval medicine was also based on equilibrium ideas when it was top-down and similar to theology. Luckily its practitioners went out of business, as they could not compete with the bottom-up surgeons, ecologically driven former barbers who gained clinical experience, and after whom a-Platonic clinical science was born. If I am alive, today, it is because scholastic top-down medicine went out of business a few centuries ago.
Chapter Eighteen

THE UNCERTAINTY OF THE PHONY
Philosophers in the wrong places—Uncertainty about (mostly) lunch—What I don’t care about—Education and intelligence

This final chapter of Part Three focuses on a major ramification of the ludic fallacy: how those whose job it is to make us aware of uncertainty fail us and divert us into bogus certainties through the back door.
LUDIC FALLACY REDUX
I have explained the ludic fallacy with the casino story, and have insisted that the sterilized randomness of games does not resemble randomness in real life. Look again at Figure 7 in Chapter 15. The dice average out so quickly that I can say with certainty that the casino will beat me in the very near long run at, say, roulette, as the noise will cancel out, though not the skills (here, the casino’s advantage). The more you extend the period (or reduce the size of the bets) the more randomness, by virtue of averaging, drops out of these gambling constructs.
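To make the averaging argument concrete, here is a minimal simulation sketch (my own illustration, not part of the text). It assumes one-unit, even-money bets on red at European roulette, where 18 of 37 pockets win, so the house keeps an edge of about 2.7 percent per bet; the noise of individual spins washes out as the number of bets grows, while the edge stays.

```python
import random

# Sketch of the averaging argument (illustration only, not from the text).
# One-unit even-money bets at European roulette: 18 winning pockets of 37,
# so the expected result per bet is 18/37 - 19/37 = -1/37, about -2.7 percent.

def average_result_per_bet(n_bets, seed=0):
    """Average profit per one-unit bet over n_bets spins."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_bets):
        pocket = rng.randint(0, 36)             # 0 is the house's pocket
        wins = pocket != 0 and pocket % 2 == 1  # crude stand-in for "red": 18 of 37
        total += 1 if wins else -1
    return total / n_bets

for n in (10, 1_000, 100_000):
    print(n, round(average_result_per_bet(n), 4))
# Small samples are dominated by noise; as n grows, the average settles
# near the house edge of about -0.027.
```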
The ludic fallacy is present in the following chance setups: random walk, dice throwing, coin flipping, the infamous digital “heads or tails” expressed as 0 or 1, Brownian motion (which corresponds to the movement of pollen particles in water), and similar examples. These setups generate a quality of randomness that does not even qualify as randomness—protorandomness would be a more appropriate designation. At their core, all theories built around the ludic fallacy ignore a layer of uncertainty. Worse, their proponents do not know it!
One severe application of such focus on small, as opposed to large, uncertainty concerns the hackneyed greater uncertainty principle.
Find the Phony
The greater uncertainty principle states that in quantum physics, one cannot measure certain pairs of values (with arbitrary precision), such as the position and momentum of particles. You will hit a lower bound of measurement: what you gain in the precision of one, you lose in the other. So there is an incompressible uncertainty that, in theory, will defy science and forever remain an uncertainty. This minimum uncertainty was discovered by Werner Heisenberg in 1927. I find it ludicrous to present the uncertainty principle as having anything to do with uncertainty. Why? First, this uncertainty is Gaussian. On average, it will disappear—recall that no one person’s weight will significantly change the total weight of a thousand people. We may always remain uncertain about the future positions of small particles, but these uncertainties are very small and very numerous, and they average out—for Pluto’s sake, they average out! They obey the law of large numbers we discussed in Chapter 15. Most other types of randomness do not average out! If there is one thing on this planet that is not so uncertain, it is the behavior of a collection of subatomic particles! Why? Because, as I have said earlier, when you look at an object, composed of a collection of particles, the fluctuations of the particles tend to balance out.
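As an illustration of the “they average out” claim (again a sketch of my own, with arbitrary numbers for the mean and spread of each component): the relative fluctuation of a sum of N independent Gaussian components shrinks like one over the square root of N, which is the law-of-large-numbers behavior invoked here.

```python
import random
import statistics

# Sketch of "they average out" (illustration only, arbitrary parameters):
# sum N independent Gaussian components and measure how large the total's
# fluctuations are relative to its mean. The ratio shrinks like 1/sqrt(N).

def relative_fluctuation(n_components, trials=500, mean=1.0, sigma=0.1, seed=0):
    """Standard deviation of the total divided by the mean of the total."""
    rng = random.Random(seed)
    totals = [sum(rng.gauss(mean, sigma) for _ in range(n_components))
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (1, 100, 10_000):
    print(n, round(relative_fluctuation(n), 4))
# Roughly 0.1, 0.01, 0.001: the collection looks ever more stable as N grows,
# which is why the Gaussian jitter of many small particles washes out.
```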
But political, social, and weather events do not have this handy property, and we patently cannot predict them, so when you hear “experts” presenting the problems of uncertainty in terms of subatomic particles, odds are that the expert is a phony. As a matter of fact, this may be the best way to spot a phony.
I often hear people say, “Of course there are limits to our knowledge,” then invoke the greater uncertainty principle as they try to explain that “we cannot model everything”—I have heard such types as the economist Myron Scholes say this at conferences. But I am sitting here in New York, in August 2006, trying to go to my ancestral village of Amioun, Lebanon. Beirut’s airport is closed owing to the conflict between Israel and the Shiite militia Hezbollah. There is no published airline schedule that will inform me when the war will end, if it ends. I can’t figure out if my house will be standing, if Amioun will still be on the map—recall that the family house was destroyed once before. I can’t figure out whether the war is going to degenerate into something even more severe. Looking into the outcome of the war, with all my relatives, friends, and property exposed to it, I face true limits of knowledge. Can someone explain to me why I should care about subatomic particles that, anyway, converge to a Gaussian? People can’t predict how long they will be happy with recently acquired objects, how long their marriages will last, how their new jobs will turn out, yet it’s subatomic particles that they cite as “limits of prediction.” They’re ignoring a mammoth standing in front of them in favor of matter even a microscope would not allow them to see.
Can Philosophers Be Dangerous to Society?
I will go further: people who worry about pennies instead of dollars can be dangerous to society. They mean well, but, invoking my Bastiat argument of Chapter 8, they are a threat to us. They are wasting our studies of uncertainty by focusing on the insignificant. Our resources (both cognitive and scientific) are limited, perhaps too limited. Those who distract us increase the risk of Black Swans.
This commoditization of the notion of uncertainty as symptomatic of Black Swan blindness is worth discussing further here.
Given that people in finance and economics are steeped in the Gaussian to the point of choking on it, I looked for financial economists with philosophical bents to see how their critical thinking allows them to handle this problem. I found a few. One such person got a PhD in philosophy, then, four years later, another in finance; he published papers in both fields, as well as numerous textbooks in finance. But I was disheartened by him: he seemed to have compartmentalized his ideas on uncertainty so that he had two distinct professions: philosophy and quantitative finance. The problem of induction, Mediocristan, epistemic opacity, or the offensive assumption of the Gaussian—these did not hit him as true problems. His numerous textbooks drilled Gaussian methods into students’ heads, as though their author had forgotten that he was a philosopher. Then he promptly remembered that he was when writing philosophy texts on seemingly scholarly matters.
The same context specificity leads people to take the escalator to the StairMasters, but the philosopher’s case is far, far more dangerous since he uses up our storage for critical thinking in a sterile occupation. Philosophers like to practice philosophical thinking on me-too subjects that other philosophers call philosophy, and they leave their minds at the door when they are outside of these subjects.
The Problem of Practice
As much as I rail against the bell curve, Platonicity, and the ludic fallacy, my principal problem is not so much with statisticians—after all, these are computing people, not thinkers. We should be far less tolerant of philosophers, with their bureaucratic apparatchiks closing our minds. Philosophers, the watchdogs of critical thinking, have duties beyond those of other professions.