Of course, it is not so easy to “falsify,” i.e., to state that something is wrong with full certainty. Imperfections in your testing method may yield a mistaken “no.” The doctor discovering cancer cells might have faulty equipment causing optical illusions; or he could be a bell-curve-using economist disguised as a doctor. An eyewitness to a crime might be drunk. But it remains the case that you know what is wrong with a lot more confidence than you know what is right. All pieces of information are not equal in importance.
Popper introduced the mechanism of conjectures and refutations, which works as follows: you formulate a (bold) conjecture and you start looking for the observation that would prove you wrong. This is the alternative to our search for confirmatory instances. If you think the task is easy, you will be disappointed—few humans have a natural ability to do this. I confess that I am not one of them; it does not come naturally to me. *
Counting to Three
Cognitive scientists have studied our natural tendency to look only for corroboration; they call this vulnerability to the corroboration error the confirmation bias. There are some experiments showing that people focus only on the books read in Umberto Eco’s library. You can test a given rule either directly, by looking at instances where it works, or indirectly, by focusing on where it does not work. As we saw earlier, disconfirming instances are far more powerful in establishing truth. Yet we tend not to be aware of this property.
The first experiment I know of concerning this phenomenon was done by the psychologist P. C. Wason. He presented subjects with the three-number sequence 2, 4, 6, and asked them to try to guess the rule generating it. Their method of guessing was to produce other three-number sequences, to which the experimenter would respond “yes” or “no” depending on whether the new sequences were consistent with the rule. Once confident of their answers, the subjects would formulate the rule. (Note the similarity of this experiment to the discussion in Chapter 1 of the way history presents itself to us: assuming history is generated according to some logic, we see only the events, never the rules, but need to guess how it works.) The correct rule was “numbers in ascending order,” nothing more. Very few subjects discovered it because in order to do so they had to offer a series in descending order (that the experimenter would say “no” to). Wason noticed that the subjects had a rule in mind, but gave him examples aimed at confirming it instead of trying to supply series that were inconsistent with their hypothesis. Subjects tenaciously kept trying to confirm the rules that they had made up.
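To make the asymmetry concrete, here is a minimal sketch of the 2, 4, 6 task (my own illustration, not part of Wason’s experiment; the hypothesis and probe sequences are invented for the example). The hidden rule is simply “ascending order,” the subject’s narrower hypothesis is “numbers increasing by two,” and only the probes designed to break the hypothesis carry any information.

```python
# Illustrative sketch only: the hidden rule, the subject's hypothesis, and the
# probe sequences below are invented here to show why confirmatory testing fails.

def experimenter_says_yes(seq):
    """Wason's actual rule: any three numbers in ascending order."""
    return seq[0] < seq[1] < seq[2]

def subject_hypothesis(seq):
    """A guess a subject might hold: numbers increasing by two."""
    return seq[1] - seq[0] == 2 and seq[2] - seq[1] == 2

# Confirmatory probes: both the real rule and the wrong hypothesis say "yes",
# so the subject learns nothing yet grows more confident.
for probe in [(10, 12, 14), (1, 3, 5), (100, 102, 104)]:
    assert experimenter_says_yes(probe) and subject_hypothesis(probe)

# Disconfirming probes: sequences the hypothesis forbids but the real rule
# allows. A single "yes" from the experimenter refutes the hypothesis outright.
for probe in [(1, 2, 4), (3, 7, 20), (5, 50, 500)]:
    assert experimenter_says_yes(probe) and not subject_hypothesis(probe)

print("Only probes the hypothesis says should fail can tell you it is wrong.")
```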
This experiment inspired a collection of similar tests, of which here is another example: Subjects were asked which questions to ask to find out whether a person was extroverted or not, purportedly for another type of experiment. It was established that subjects supplied mostly questions for which a “yes” answer would support the hypothesis.
But there are exceptions. Among them figure chess grand masters, who, it has been shown, actually do focus on where a speculative move might be weak; rookies, by comparison, look for confirmatory instances instead of falsifying ones. But don’t play chess to practice skepticism. Scientists believe that it is the search for their own weaknesses that makes them good chess players, not the practice of chess that turns them into skeptics. Similarly, the speculator George Soros, when making a financial bet, keeps looking for instances that would prove his initial theory wrong. This, perhaps, is true self-confidence: the ability to look at the world without the need to find signs that stroke one’s ego. *
Sadly, the notion of corroboration is rooted in our intellectual habits and discourse. Consider this comment by the writer and critic John Updike: “When Julian Jaynes … speculates that until late in the second millennium B.C. men had no consciousness but were automatically obeying the voices of gods, we are astounded but compelled to follow this remarkable thesis through all the corroborative evidence.” Jaynes’s thesis may be right, but, Mr. Updike, the central problem of knowledge (and the point of this chapter) is that there is no such animal as corroborative evidence.
Saw Another Red Mini!
The following point further illustrates the absurdity of confirmation. If you believe that witnessing an additional white swan will bring confirmation that there are no black swans, then you should also accept the statement, on purely logical grounds, that the sighting of a red Mini Cooper should confirm that there are no black swans.
Why? Just consider that the statement “all swans are white” is equivalent to “all nonwhite objects are not swans.” What confirms the latter statement should confirm the former. Therefore, a mind with a confirmation bent would infer that the sighting of a nonwhite object that is not a swan should bring such confirmation. This argument, known as Hempel’s raven paradox, was rediscovered by my friend the (thinking) mathematician Bruno Dupire during one of our intense meditating walks in London—one of those intense walk-discussions, intense to the point of our not noticing the rain. He pointed to a red Mini and shouted, “Look, Nassim, look! No Black Swan!”
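For readers who want the logic spelled out (my notation, not the book’s), the equivalence is just the contrapositive:

$$\forall x\,\bigl(\mathrm{Swan}(x)\rightarrow \mathrm{White}(x)\bigr)\;\Longleftrightarrow\;\forall x\,\bigl(\lnot\mathrm{White}(x)\rightarrow \lnot\mathrm{Swan}(x)\bigr)$$

A red Mini is a nonwhite object that is not a swan, so it “confirms” the right-hand statement and, by the equivalence, the left-hand one as well; hence the absurdity.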
Not Everything
We are not naïve enough to believe that someone will be immortal because we have never seen him die, or that someone is innocent of murder because we have never seen him kill. The problem of naïve generalization does not plague us everywhere. But such smart pockets of inductive skepticism tend to involve events that we have encountered in our natural environment, matters from which we have learned to avoid foolish generalization.
For instance, when children are presented with the picture of a single member of a group and are asked to guess the properties of other unseen members, they are capable of selecting which attributes to generalize. Show a child a photograph of someone overweight, tell her that he is a member of a tribe, and ask her to describe the rest of the population: she will (most likely) not jump to the conclusion that all the members of the tribe are weight-challenged. But she would respond differently to generalizations involving skin color. If you show her people of dark complexion and ask her to describe their co-tribesmen, she will assume that they too have dark skin.
So it seems that we are endowed with specific and elaborate inductive instincts showing us the way. Contrary to the opinion held by the great David Hume, and that of the British empiricist tradition, that belief arises from custom, as they assumed that we learn generalizations solely from experience and empirical observations, it was shown from studies of infant behavior that we come equipped with mental machinery that causes us to selectively generalize from experience (i.e., to selectively acquire inductive learning in some domains but remain skeptical in others). By doing so, we are not learning from a mere thousand days, but benefiting, thanks to evolution, from the learning of our ancestors—which found its way into our biology.
Back to Mediocristan
And we may have learned things wrong from our ancestors. I speculate here that we probably inherited the instincts adequate for survival in the East African Great Lakes region where we presumably hail from, but these instincts are certainly not well adapted to the present, post-alphabet, intensely informational, and statistically complex environment.
Indeed our environment is a bit more complex than we (and our institutions) seem to realize. How? The modern world, being Extremistan, is dominated by rare—very rare—events. It can deliver a Black Swan after thousands and thousands of white ones, so we need to withhold judgment for longer than we are inclined to. As I said in Chapter 3, it is impossible—biologically impossible—to run into a human several hundred miles tall, so our intuitions rule these events out. But the sales of a book or the magnitude of social events do not follow such strictures. It takes a lot more than a thousand days to accept that a writer is ungifted, a market will not crash, a war will not happen, a project is hopeless, a country is “our ally,” a company will not go bust, a brokerage-house security analyst is not a charlatan, or a neighbor will not attack us. In the distant past, humans could make inferences far more accurately and quickly.