Unlike the hot line frequently depicted in Hollywood films, the new system didn’t provide a special telephone for the president to use in an emergency. It relied on Teletype machines that could send text quickly and securely. Written statements were considered easier to translate, more deliberate, and less subject to misinterpretation than verbal ones. Once an hour, a test message was sent, alternately from Moscow, in Russian, and from Washington, in English. The system would not survive nuclear attacks on either city. But it was installed with the hope of preventing them.
• • •
DURING THE CUBAN MISSILE CRISIS, the Strategic Air Command conducted 2,088 airborne alert missions, involving almost fifty thousand hours of flying time, without a single accident. The standard operating procedures, the relentless training, and the checklists introduced by LeMay and Power helped to achieve a remarkable safety record when it was needed most. Nevertheless, in the aftermath of the crisis, public anxieties about nuclear war soon focused on the dangers of SAC’s airborne alert. The great risk — as depicted in the 1964 films Fail-Safe and Dr. Strangelove — wasn’t that a hydrogen bomb might accidentally explode during the crash of a B-52. It was that an order to attack the Soviet Union could be sent without the president’s authorization, either through a mechanical glitch (Fail-Safe) or the scheming of a madman (Dr. Strangelove).
The plot of both films strongly resembled that of the novel Red Alert. Its author, Peter George, cowrote the screenplay of Dr. Strangelove and sued the producers of Fail-Safe for copyright infringement. The case was settled out of court. The threat of accidental nuclear war was the central theme of the films — and Strangelove, although a black comedy, was by far the more authentic of the two. It astutely parodied the strategic theories pushed by RAND analysts, members of the Kennedy administration, and the Joint Chiefs. It captured the absurdity of debating how many million civilian deaths would constitute a military victory. And it ended with an apocalyptic metaphor for the arms race, conjuring a Soviet doomsday machine that’s supposed to deter an American attack by threatening to launch a nuclear retaliation, automatically, through the guidance of a computer, without need of any human oversight. The failure of the Soviets to tell the United States about the contraption defeats its purpose, inadvertently bringing the end of the world. “The whole point of the doomsday machine is lost,” Dr. Strangelove, the president’s eccentric science adviser, explains to the Soviet ambassador, “IF YOU KEEP IT A SECRET!”
The growing public anxiety about accidental war prompted a spirited defense of America’s command-and-control system. Sidney Hook, a prominent conservative intellectual, wrote a short book dismissing the fears spread by Cold War fiction. “The probability of a mechanical failure in the defense system,” Hook wrote in The Fail-Safe Fallacy, “is now being held at so low a level that no accurate quantitative estimate of the probability… can be made.” Senator Paul H. Douglas, a Democrat from Illinois, praised the book and condemned the misconception that America’s nuclear deterrent, rather than “the Communist determination to dominate the world,” was a grave danger to mankind. And Roswell L. Gilpatric, one of McNamara’s closest advisers, assured readers of the New York Times that any malfunction in the command-and-control system would make it “‘fail safe,’ not unsafe.” Gilpatric also suggested that permissive action links would thwart the sort of unauthorized attack depicted in Dr. Strangelove.
In fact, there was nothing to stop the crew of a B-52 from dropping its hydrogen bombs on Moscow — except, perhaps, Soviet air defenses. The Go code was simply an order from SAC headquarters to launch an attack; bombers on airborne alert didn’t have any technological means to stop a renegade crew. General Power had waged a successful bureaucratic battle against the installation of permissive action links in SAC’s weapons. All of its bombs and warheads were still unlocked, as were those of the Navy. The effort to prevent the unauthorized use of nuclear weapons remained largely administrative. In 1962, SAC had created a Human Reliability Program to screen airmen and officers for psychological problems, drug use, and alcohol abuse. And a version of the two-man rule was introduced in its bombers. A second arming switch was added to the cockpit. In order to use a nuclear weapon, both the ready/safe switch and the new “war/peace switch” had to be activated by two different crew members. Despite these measures, an unauthorized attack on the Soviet Union was still possible. But the discipline, training, and esprit de corps of SAC’s bomber crews made it unlikely.
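The logic of that two-man rule can be stated in a few lines. What follows is a deliberately simple, purely illustrative Python sketch; the function name and crew positions are invented for illustration and stand in for the cockpit procedure described above, not for any actual B-52 arming circuit.

```python
def arm_weapon(ready_safe_set_by: str, war_peace_set_by: str) -> bool:
    """Arming succeeds only when both switches are thrown,
    each by a different crew member -- the two-man rule."""
    if ready_safe_set_by == war_peace_set_by:
        raise PermissionError("two-man rule: one person cannot throw both switches")
    return True

arm_weapon("pilot", "radar_navigator")  # two people concur: the weapon arms
# arm_weapon("pilot", "pilot")          # a lone actor is blocked
```

The safeguard is procedural, not cryptographic: it raises the number of people who must conspire, but it cannot stop two crew members who agree to act together.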
As a plot device in novels and films, an airborne alert gone wrong could provide suspense. A stray bomber would need at least an hour to reach its target, enough time to tell a good story. But one of the real advantages of SAC’s bombers was that their crews could be contacted by radio and told to abort their missions, if the Go code had somehow been sent by mistake. Ballistic missiles posed a far greater risk of unauthorized or accidental use. Once they were launched, there was no calling them back. Missiles being flight-tested usually had a command destruct mechanism — explosives attached to the airframe that could be set off by remote control, destroying the missile if it flew off course. SAC refused to add that capability to operational missiles, out of a concern that the Soviets might find a way to detonate them all, midflight. And for similar reasons, SAC opposed any system that required a code to enable the launch of Minuteman missiles. “The very existence of the lock capability,” General Power argued, “would create a fail-disable potential for knowledgeable agents to ‘dud’ the entire Minuteman force.”
After examining the launch procedures proposed for the Minuteman, John H. Rubel — who supervised strategic weapon research and development at the Pentagon — didn’t worry about the missiles being duds. He worried about an entire squadron of them being launched by a pair of rogue officers. A Minuteman squadron consisted of fifty missiles, overseen by five crews housed underground at separate locations. Only two of the crews were necessary to launch the missiles — making it more difficult for the Soviet Union to disable a squadron by attacking its control centers. When both officers in each of two different control centers turned their keys and “voted” for a launch, all of the squadron’s missiles would lift off. There was no way to fire just a few of them: it was all or nothing. And a launch order couldn’t be rescinded. After the keys were turned, fifty missiles would leave their silos, either simultaneously or in a “ripple order,” one after another.
By requiring a launch vote from at least two crews, SAC hoped to prevent the launch of Minuteman missiles without proper authorization. But Rubel was surprised to learn that SAC had also installed a timer in every Minuteman control center. The timer had been added as a backup — an automated vote to launch — in case four of the five crews were killed during a surprise attack. When the officers in a control center turned their launch keys, the timer started. And when the timer ran out, if no message had been received from the other control centers, approving or opposing the order to launch, all the missiles lifted off. The problem with the timer, Rubel soon realized, was that a crew could set it to six hours, six minutes — or zero. In the wrong hands, it gave a couple of SAC officers the ability to wipe out fifty cities in the Soviet Union. An unauthorized attack on that scale, a classified history of the Minuteman program noted, would be “an accident for which a later apology might be inadequate.”
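The flaw Rubel spotted is easier to see in code. Here is a minimal, purely illustrative Python sketch of the vote-and-timer logic described in the last two paragraphs; the class and method names are invented, and the timer is modeled only as the passage characterizes it (an expired countdown supplies the missing second vote), not as any real Minuteman launch system worked in detail.

```python
from dataclasses import dataclass, field

VOTES_REQUIRED = 2   # a launch needs concurring "votes" from two of the five crews
SQUADRON_SIZE = 50   # all fifty missiles go at once; there is no partial launch

@dataclass
class Squadron:
    votes: set = field(default_factory=set)   # centers that turned their keys
    vetoes: set = field(default_factory=set)  # centers that opposed a launch

    def turn_keys(self, center: str, timer_setting: float) -> str:
        """Both officers in one control center turn their keys,
        which also starts that center's backup timer."""
        self.votes.add(center)
        if len(self.votes) >= VOTES_REQUIRED:
            return self._launch()
        # Backup behavior described in the text: if the timer runs out
        # with no approval or veto from the other centers, it supplies
        # the missing vote automatically.
        if timer_setting == 0 and not self.vetoes:
            return self._launch()  # the loophole: one crew, timer dialed to zero
        return "holding: one vote recorded, awaiting a second center"

    def _launch(self) -> str:
        return f"LAUNCH: all {SQUADRON_SIZE} missiles, no recall possible"

# The scenario Rubel feared: a single rogue crew sets the timer to zero.
print(Squadron().turn_keys("center_A", timer_setting=0))
```

Set the timer to zero and the two-crew requirement quietly collapses to one: the backup meant to guarantee retaliation after a surprise attack doubles as a bypass of the very safeguard it accompanies.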