Let me give an example to help flesh out how easily we can slip into poor logical thinking. Baseball is beset by a scandal over performance-enhancing drugs. Suppose you know that the odds someone will test positive for steroids are 90 percent if they actually used steroids. Does that mean when someone tests positive we can be very confident that they used steroids? Journalists seem to think so. Congress seems to think so. But it just isn’t so. To formulate a policy we need an answer to the question, How likely is it that someone used steroids if they test positive? It is not enough to know how likely they are to test positive if they use steroids. Unfortunately, we cannot easily know the answer to the question we really care about. We can know whether someone tested positive, but that could be a terrible basis for deciding whether the person cheated. A logically consistent use of probabilities—working out the real risks—can help make that clear.
Imagine that out of every 100 baseball players (only) 10 cheat by taking steroids (game theory notwithstanding, I am an optimist) and that the tests are accurate enough that 9 out of every 10 cheaters test positive. To evaluate the likelihood of guilt or innocence we still need to know how many honest players test positive—that is, how often the tests give us a false positive answer. Tests are, after all, far from perfect. Just imagine that while 90 out of every 100 players do not cheat, 10 percent of the honest players nevertheless test (falsely) positive. Looking at these numbers it’s easy to think, well, hardly anyone gets a false positive (only 10 percent of the innocent) and almost every guilty party gets a true positive (90 percent of the guilty), so knowing whether a person tested positive must make us very confident of their guilt. Wrong!8
With the numbers just laid out, 9 out of 10 cheaters test positive and 9 of the 90 innocent ballplayers also test positive. So of the 18 positive test results, 9 come from cheaters and 9 come from absolutely innocent baseball players. In this example, the odds that a player testing positive actually uses steroids are fifty-fifty, just the flip of a coin. That is hardly enough to ruin a person’s career and reputation. Who would want to convict so many innocents just to get the guilty few? It is best to take seriously the dictum “innocent until proven guilty.”
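Writing the same arithmetic as a single equation (just a restatement of the counts above in probability notation, nothing new added):

```latex
P(\text{cheater} \mid \text{positive})
  = \frac{0.9 \times 10}{0.9 \times 10 \;+\; 0.1 \times 90}
  = \frac{9}{9 + 9}
  = \frac{1}{2}
```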
The calculation we just did is an example of Bayes’ Theorem.9 It provides a logically sound way to avoid inconsistencies between what we thought was true (a positive test means a player uses steroids) and new information that comes our way (half of all players testing positive do not use steroids). Bayes’ Theorem compels us to ask probing questions about what we observe. Instead of asking, “What are the odds that a baseball player uses performance-enhancing drugs?” we ask, “What are the odds that a baseball player uses performance-enhancing drugs given that we know he tested positive for such drugs and we know the odds of testing positive under different conditions?”
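In symbols (my shorthand, not the book’s: C for “the player cheats,” + for “tests positive”), the theorem behind that question reads:

```latex
P(C \mid +)
  = \frac{P(+ \mid C)\,P(C)}{P(+ \mid C)\,P(C) + P(+ \mid \lnot C)\,P(\lnot C)}
  = \frac{0.9 \times 0.1}{0.9 \times 0.1 \;+\; 0.1 \times 0.9}
  = 0.5
```

The prior P(C) = 0.1 is the assumed 10-in-100 cheating rate, and the posterior of 0.5 is the coin flip from the previous paragraph.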
Bayes’ Theorem provides a way to calculate how people digest new information. It assumes that everyone uses such information to check whether what they believe is consistent with their new knowledge. It highlights how our beliefs change—how they are updated, in game-theory jargon—in response to new information that reinforces or contradicts what we thought was true. In that way, the theorem, and the game theorists who rely on it, view beliefs as malleable rather than as unalterable biases lurking in a person’s head.
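As a minimal sketch of that updating process (mine, not the book’s), the following Python snippet applies the same test numbers twice. It assumes, purely for illustration, that a second test would be statistically independent of the first—something real retests may not satisfy:

```python
def update(prior, p_pos_if_cheat=0.9, p_pos_if_honest=0.1):
    """One Bayesian update: revise the probability that a player
    cheats after observing a positive test result."""
    evidence_if_cheat = p_pos_if_cheat * prior
    evidence_if_honest = p_pos_if_honest * (1 - prior)
    return evidence_if_cheat / (evidence_if_cheat + evidence_if_honest)

belief = 0.10                 # prior: 10 of every 100 players cheat
belief = update(belief)       # one positive test
print(belief)                 # 0.5 -- the coin flip from above
belief = update(belief)       # a second (assumed independent) positive test
print(belief)                 # 0.9 -- beliefs shift as evidence accumulates
```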
This idea of updating beliefs leads us to the next challenge. Suppose a baseball player who had a positive (guilty) test result is called to testify before Congress in the steroid scandal. Now suppose he knows the odds sketched above. Aware of these statistics, and knowing that any self-respecting congressperson is also aware of them, the baseball player knows that Congress, if it cites only a positive test result as evidence, in fact has little on him, no matter how much outrage it musters. The player, in other words, knows Congress is bluffing. But of course Congress knows this as well, so it has subpoenaed the player’s trainer, who is coming in to testify right after the player. Is this just another bluff by Congress, tightening the screws to elicit a confession with the threat of perjury looming? Whether the player is guilty or not, perhaps he shrugs off the move, in effect calling Congress’s raise. Now what? Does Congress actually have anything, or will it be embarrassed for going on a fishing expedition and dragging an apparently innocent man through the mud? Will the player adamantly profess innocence even though he’s guilty (but maybe he really isn’t), and should we dismiss declarations of innocence out of hand, as so many of us seem to do? Is Congress bluffing? Is the player bluffing? Is everyone bluffing? These are tough problems, and they are right up game theory’s alley!
In real life there are plenty of incentives for others (and for us) to lie. That is certainly true for athletes, corporate executives, national leaders, poker players, and all the rest of us. Therefore, to predict the future we have to reflect on when people are likely to lie and when they are most likely to tell the truth. In engineering the future, our task is to find the right incentives so that people tell the truth, or so that, when it helps our cause, they believe our lies.
One way of eliciting honest responses is to make repeated lying really costly. Bluffing at poker, for instance, can be costly exactly because other players sometimes don’t believe big bets, and don’t fold as a result. If their hand is better, the bluff comes straight out of the liar’s pocket. So the central feature of a game like five-card draw is not figuring out the probability of drawing an inside straight or three of a kind, although that’s certainly useful too; it is convincing others that your hand is stronger than it really is. Part of the key to accumulating bargaining chips, whether in poker or diplomacy, is engineering the future by exploiting leverage that really does not exist. Along with taking prudent risks, creating leverage is one of the most important tools for changing outcomes. Of course, that is just a polite way of saying that it’s good to know when and how to lie.
Betting, whether with chips, stockholders’ money, perjury charges, or soldiers, can lead others to wrong inferences that benefit the bettor; but gambling always suffers from two limitations. First, it can be expensive to bet more than a hand is worth. Second, everyone has an interest in trying to figure out who is bluffing and who is being honest. Raising the stakes helps flush out the bluffers. The bigger the cumulative bet, the costlier it is to pretend to have the resolve to see a dispute through when the cards really are lousy. How much pain anyone is willing to risk on a bluff, and how similar their wagering is when they are bluffing and when they are really holding good cards, are crucial to the prospects of winning or of being unmasked. That, of course, is why diplomats, lawyers, and poker players need a good poker face, and it is why, for example, you take your broker’s advice more seriously if she invests a lot of her own money in a stock she’s recommending.
Getting the best results comes down to matching actions to beliefs. Gradually, under the right circumstances, exploiting information leads to consistency between what people see, what they think, and what they do, just as it does in Mastermind. Convergence in thinking facilitates deals, bargains, and the resolution of disputes.
With that, we’ve just completed the introductory course in game theory. Nicely done! Now we’re ready to go on to the more advanced course. In the next chapter we look in more depth at how the very fact of our being strategic changes everything going on around us. That will set the stage for working out how we can use strategy to change things to be better for ourselves and those we care about and, if we are altruistic enough, maybe even for almost everyone.