And I also like the Monte Carlo run as a metaphor for living with nuclear weapons. Every day is a new Monte Carlo run. So far we’ve been lucky, but what are the chances that when we roll the dice today we won’t be any longer?
When I first heard the term “Monte Carlo run,” I conjured up a misinterpretation of the significance of “Monte Carlo.” I thought first of the road race through the streets of the town, not the casinos. I had an image of all of us in the nuclear age as a jaywalking pedestrian blithely crossing the road while the burning speedsters whiz by on the macadam of Monte Carlo’s streets. We act like there’s no chance of getting run over. We act like there’s little chance of a crisis getting out of hand. It’s a little like the old joke about the guy who jumped from the top of the Empire State Building. As he passes the 64th floor someone calls out, “How does it feel?” And the guy whizzing by says, “Okay so far.”
In terms of avoiding a nuclear war we’ve been okay so far for sixty-five years. Are the odds of that continuing calculable? Statistics can perhaps throw some light on how many daily Monte Carlo runs of deterrence we have left before we get smashed.
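That question can be made concrete with a toy simulation. Purely for illustration (the daily risk figure below is an arbitrary assumption, not a number anyone in this chapter endorses), a Monte Carlo run counts the days until the dice come up wrong:

```python
import random

def days_until_failure(daily_risk, rng):
    """One Monte Carlo run: count the days until the dice come up wrong."""
    days = 0
    while True:
        days += 1
        if rng.random() < daily_risk:
            return days

# Arbitrary illustrative figure: a 1-in-10,000 chance per day.
rng = random.Random(42)
runs = [days_until_failure(1e-4, rng) for _ in range(100)]
avg = sum(runs) / len(runs)
# A geometric distribution predicts an average of about
# 1 / daily_risk = 10,000 days (roughly 27 years) until failure.
print(f"average days until failure over 100 runs: {avg:,.0f}")
```

The spread across runs is the point of the metaphor: most runs survive for decades, while a few fail almost immediately, and nothing about the early days of a lucky run tells you which kind you are in.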
Trying to decide what to make of “100 Nuclear Wars” led me into the arcane field of statistical nuclear war prediction, to look at other statistical approaches to nuclear risk. It’s another way of asking the “how close are we” question. Statistics are sometimes a field where the mechanics and morals of nuclear war converge.
For instance, if there were a negligible chance that nuclear deterrence would ever fail, a negligible chance that we’d ever be faced with the genocidal choice, it wouldn’t make much of a moral difference if we were indifferent to the existence of that negligible chance. Would it permit us to threaten genocide to prevent genocide if we could be almost sure nothing could go wrong? Threaten evil to prevent evil. A negligible chance of non-negligible moral consequences to trouble the soul. But how sure is “almost sure”?
In fact, we would have to cede the moral high ground to those who say that—at virtually negligible risk—nuclear deterrence has saved the lives of as many as 60 million people who would otherwise have died in the conventional wars supposedly deterred by nukes. (Recall: pre-Hiroshima twentieth-century war deaths: 100 million. Post-Nagasaki: 40 million.) But how do we calculate the risk now? Is it something we can live with or something we will die of? How do we know we’re not merely passing the 64th floor on the way down?
It is on this issue that I found the work of Dr. Martin Hellman of Stanford provocative. Hellman set out in 2008 to research something akin to what Yarynich sought: nuclear war risk quantification. He sought to quantify to an order of magnitude (usually meaning a power of ten) the probability of a nuclear war occurring—and how soon it is likely to occur—based on what we’ve learned in the past half-century from nuclear crises.
Hellman is an interesting fellow. As an electrical engineering professor specializing in cryptography and information theory at Stanford, after stints at MIT and IBM’s advanced research facility, he became famous in tech circles for being one of the inventors of the “public-key” and “trap-door” methods of encryption, which made (mostly) secure Internet and Web communication possible. [187]
But for the last quarter-century he’s been pursuing the question of “the inevitability and prevention of nuclear war” as he called it in an award-winning peer-reviewed 1985 paper. He constructed his mathematical model around the sharpened and limited question: what is the risk nuclear deterrence will fail to deter attack? It is a limited question in the sense that it excludes the very real chance that nuclear war will break out by accident. [188]
He compares his work to estimating the failure rate of a nuclear reactor design that has not yet failed: “In addition to estimating the failure rate, such a study also identifies the most likely event sequences that result in a catastrophic failure. Such a failure is composed of a cascade of small failures and reasonable numbers are often available for many of the variables (e.g., the failure rate of a cooling pump in a reactor).”
He doesn’t claim to have solved the problem of catastrophe estimation beyond a reasonable doubt, but his study of failure rate bears a relationship to the Monte Carlo runs in estimating the probability of success of a nuclear attack. Here’s how he does it.
He starts with a baseline unavoidable but acceptable risk. He defines an acceptably small risk as the chance of extinction by an asteroid. “Such NEO (near earth object) extinction events [such as the one that probably created a fatal global winter for the dinosaurs] have a failure rate on the order of 10 to the minus 8th power per year,” astronomers tell us, where “failure rate” means the expected number of hits, not misses. That’s very small. Ten to the minus 8th power is one in 100,000,000 in any given year. But what about over prolonged time?
“During the next century that failure rate,” Hellman wrote in an Engineering Honors Society professional journal in 2008, “corresponds to one chance in a million of humanity being destroyed.” [189]
What does that mean? “While ten to the minus 8th power [nuclear war as an asteroid-like danger] is a small probability, the associated cost is so high—infinite from our perspective—that some might argue that a century is too long a delay before working to reduce the threat.”
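The one-in-a-million figure is just a small annual risk compounded over a century; a minimal sketch of the arithmetic:

```python
annual_rate = 1e-8  # NEO extinction "failure rate" per year, per Hellman
years = 100

# Chance of at least one hit in a century = 1 minus the chance of
# surviving every single year.
cumulative = 1 - (1 - annual_rate) ** years
print(f"chance over a century: about 1 in {1 / cumulative:,.0f}")
# For rates this small, the cumulative risk is close to
# annual_rate * years, i.e. 1e-6: one chance in a million.
```
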
And while no serious efforts have been made to plan for an NEO event—other than a Michael Bay movie about astronauts landing on the asteroid and blowing it up with, you guessed it, nuclear weapons—at those odds we don’t really have to hurry. We can wait a decade until the technology gets better: space lasers, maybe, like those in a James Cameron movie.
He goes down the orders of magnitude: “If the failure rate is 10 to the minus 5th power… then it’s difficult to tolerate even a decade’s delay… If the failure rate is 10 to the minus 4th power, the probability of humanity destroying itself during a decade-long effort [to prevent it] would be one in a thousand, which is much too large. If the failure rate is 10 to the minus 3rd power per year, the probability increases to approximately one percent over a decade and ten percent over a century, and delay is clearly unacceptable.” By “delay” he means a delay in nuclear abolition.
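Hellman’s ladder of tolerable delays follows from the same compounding formula; the failure rates below are his, though the code is only an illustrative sketch:

```python
def risk_over(annual_rate, years):
    """Chance of at least one deterrence failure within a span of years."""
    return 1 - (1 - annual_rate) ** years

# Hellman's orders of magnitude, compounded over a decade and a century.
for label, rate in (("10^-5", 1e-5), ("10^-4", 1e-4), ("10^-3", 1e-3)):
    print(f"{label}/yr: decade ~{risk_over(rate, 10):.3%}, "
          f"century ~{risk_over(rate, 100):.2%}")
```

At 10 to the minus 3rd power per year, the decade figure comes out near one percent and the century figure near ten percent, matching the numbers he quotes.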
If the failure rate is higher even than that, “anything short of an all-out effort to change course would be criminally negligent: Each year we delay in reducing the risk brings with it a one percent chance of disaster and a decade’s delay entails roughly a ten percent chance.”
So how desperate a situation are we in?
In attempting to quantify the risk, Hellman takes a conservative approach, not only excluding accidents but excluding many well-known near misses: the Berlin crisis of 1961, Nixon’s newly disclosed nuclear alert of 1969, the shadowy but well-known nuclear threats during the 1973 Yom Kippur War, the Able Archer crisis of 1983, the Norwegian rocket mistake, the assorted “flock of geese” dual-phenomenology failures, the training-tapes incident Zbigniew Brzezinski relates, and the chilling episode in which Colonel Petrov saved the day. Nor does he include the possibility of a nuclear war that starts regionally and escalates into a global one.
Using what statisticians call the “time-invariant modeling” procedure, Hellman factors in only one full-blown nuclear crisis in the first fifty years of deterrence—the Cuban Missile Crisis—and two other potential crises: President Reagan’s threat to reimpose a naval blockade of Cuba in the 1980s and the planned deployment of an American missile defense system in Eastern Europe. At the end of his study he concludes, even with this narrow list of crises, that “the projected failure rate of deterrence from all sources is on the order of one percent per year and even the lower level is well above the level that any engineering design review would find acceptable.” [190] Here, for the record, is the equation he uses to arrive at his disturbing conclusion.