Each country’s salience score was assigned a one-in-four chance of randomly changing each year. That seemed high enough to me to capture the pace at which a government’s attention might move markedly in one direction or another, yet not so high as to introduce more volatility than was likely within or across countries over relatively short intervals. Naturally, this could have been done with a higher or lower probability, so there is nothing more than a personal judgment behind the choice of a one-in-four chance of a “shock.”
Any changes in salience reflected hypothetical shifts in the degree to which security concerns dominated policy formation or the degree to which other issues, such as domestic matters, surfaced to shape decision making for this or that country. Thus, the salience data were “shocked” to capture the range and magnitude of possible political “earthquakes” that could have arisen after 1948. This was the innovation to my model that resulted from the combination of my visit to Ohio and my failed predictions regarding health care. Since then, I have incorporated ways to randomly alter not only salience but also the indicators for potential clout and for positions, and even for whether a stakeholder stays in the game or drops out, in a new model I am developing.
Neither the alliance-portfolio data used to measure the degree of shared foreign interests nor the influence data were updated to take real events after 1948 into account. The alliance-portfolio measure only changed in response to the model’s logic and its dynamics, given randomly shocked salience. Changes in the alliance correlations for all of the countries were the indicator of whether the Soviets or the Americans would prevail or whether they would remain locked in an ongoing struggle for supremacy in the world.
So here was an analysis designed to predict the unpredictable—that is, the ebb and flow of attentiveness to security policy as the premier issue in the politics of each state in my study. With enough repetitions (at the time, I did just a hundred, because computation took a very long time; today I would probably do a thousand or more) with randomly distributed shocks, we should have been able to see the range of possible developments on the security front. That, in turn, should have made it possible to predict the relative likelihood of three possible evolutions of the cold war: (a) it would end with a clear victory by the United States within the fifty-year period I simulated; (b) it would end with a clear victory by the Soviet Union in that same time period; or (c) it would continue, with neither the Soviet Union nor the United States in a position to declare victory.
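To make the mechanics concrete, here is a minimal sketch, in Python, of the Monte Carlo structure just described: repeated runs, a one-in-four chance each year that a country’s salience is randomly re-drawn, and a tally of the three possible cold war outcomes. Every name and number in the sketch is a hypothetical placeholder; the actual model’s game-theoretic logic for updating positions and alliance correlations is far more elaborate and is not reproduced here.

```python
import random

SHOCK_PROBABILITY = 0.25   # one-in-four chance of a salience "shock" each year
YEARS = 50                 # roughly the fifty-year horizon simulated from 1948
RUNS = 100                 # the original analysis used about a hundred repetitions

def run_one_simulation(initial_salience):
    """Placeholder for a single run of a (hypothetical) cold-war model."""
    salience = dict(initial_salience)
    for _year in range(YEARS):
        for country in salience:
            if random.random() < SHOCK_PROBABILITY:
                salience[country] = random.random()  # salience randomly re-drawn
        # The real model's logic would update positions and alliance
        # correlations here; that is what actually determines the outcome.
    return random.choice(["US wins", "USSR wins", "ongoing"])  # stand-in result

def monte_carlo(initial_salience, runs=RUNS):
    """Tally the share of runs ending in each of the three outcomes."""
    tally = {"US wins": 0, "USSR wins": 0, "ongoing": 0}
    for _ in range(runs):
        tally[run_one_simulation(initial_salience)] += 1
    return {outcome: count / runs for outcome, count in tally.items()}

if __name__ == "__main__":
    example = {"USA": 0.9, "USSR": 0.9, "UK": 0.7, "France": 0.6}  # made-up values
    print(monte_carlo(example))
```

With enough repetitions, the distribution of outcomes across runs is what stands in for a forecast; the random shocks are the stand-in for the political “earthquakes” no one can predict individually.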
What did I find? The model indicated that in 78 percent of the scenarios in which salience scores were randomly shocked, the United States won the cold war peacefully, sometimes by the early to mid-1950s, more often in periods corresponding to the late 1980s or early 1990s. In 11 percent of the simulations, the Soviets won the cold war, and in the remaining 11 percent, the cold war persisted beyond the time frame covered by my investigation. What I found, in short, was that the configuration of policy interests in 1948 already presaged an American victory over the Soviet Union. It was, as Gaddis put it, an emergent property. This was true even though the starting date, 1948, predated the formation of either NATO or the Warsaw Pact, each of which emerged in almost every simulation as the nations’ positions shifted from round to round according to the model’s logic. 4
The selection of 1948 as the starting date was particularly challenging in that this was a time when there was concern that many countries in Western Europe would become socialist. This was a time, too, when many thought that a victory of communism over capitalism and of authoritarianism over democracy was a historical inevitability. On the engineering front it was, of course, too late to change the course of events. Still, the model was quite provocative on this dimension, as it suggested opportunities that were passed up to win the cold war earlier. One of those opportunities, at the time of Stalin’s death (which, of course, was not a piece of information incorporated into the data that went into the model), was, as it turns out, contemplated by real decision makers at the time. They thought there might be a chance to pry the Soviet Union’s Eastern European allies away from Moscow and into the Western European fold. My model agreed. American decision makers did not pursue this possibility, because they feared it would lead to a war with the Soviet Union. My model disagreed, predicting that the Soviets in this period would be too preoccupied with domestic issues and would, undoubtedly with much regret, watch more or less helplessly as their Eastern European empire drifted away. We will, of course, never know who was right. We do know that watching helplessly is exactly what the Soviets did a few decades later, between 1989 and 1991.
So with the help of Dan Rostenkowski and John Gaddis’s students I was able to show how strongly the odds favored an American cold war victory. The account of the cold war, like the earlier examination of fraud, reminds us that prediction can look backward almost as fruitfully as it can look forward. Not everyone was as generous as John Gaddis in acknowledging that game-theory modeling might help sort out important issues, and not everyone should be (not that it isn’t nice when people are that generous). There should be and always will be critics.
There are plenty of good reasons for rejecting modeling efforts, or at least being skeptical of them, and plenty of bad reasons too. Along with technical failures within my models, or any models for that matter, there is the obvious limitation that they are simply models, which are, of course, not reality. They are a simplified glimpse of reality. They can only be evaluated by a careful examination of what general propositions follow from their logic and an assessment of how well reality corresponds with those propositions. Unfortunately, sometimes people look at lots of equations and think, “Real people cannot possibly make these complicated calculations, so obviously real people do not think this way.” I hear this argument just about every semester in one or another course that I teach. I always respond by saying that the opposite is true. Real people may not be able to do the cumbersome math that goes into a model, but that doesn’t mean they aren’t making much more complicated calculations in their heads, even if they don’t know how to represent their analytic thought processes mathematically.
Try showing a tennis pro the equations that represent hitting a ball with topspin to the far corner of the opponent’s side of the court, making sure that the ball lands just barely inside the line and that it travels, say, at 90 miles an hour. Surely the tennis pro will look at the equations in utter bewilderment. Yet professional tennis players act as if they make these very calculations whenever they try to make the shot I just described. If the pro is a ranked player, then most of the time the shot is made successfully even though the decisions about arm speed, foot position, angle of the racket’s head, and so forth must be made in a fraction of a second and must be made while also working out the velocity, angle, and spin of the ball coming his or her way from across the court.
Since models are simplified representations of reality, they always have room for improvement. There is always a trade-off between adding complexity and keeping things manageable. Adding complexity is only warranted when the improvement in accuracy and reliability is greater than the cost of adding assumptions. This is, of course, the well-known principle of parsimony. I’ve made small and big improvements in my game-theory modeling over the years. My original forecasting model was static. It reported what would happen in one exchange of information on an issue. As such, it was a good forecaster but not much good at engineering. While I was tweaking that static model to improve its estimation of people’s willingness to take risks and to estimate their probability of prevailing or losing in head-to-head contests, I was also thinking about how to make the process dynamic. Real people, after all, are dynamic. They change their minds, they switch positions on questions, they make deals, and, of course, they bluff and renege on promises.