After Secretary of the Air Force Quarles expressed concern about the safety of sealed-pit weapons, the Armed Forces Special Weapons Project began its own research on acceptable probabilities. The Army had assumed that the American people would regard a nuclear accident no differently from an act of God. An AFSWP study questioned the assumption, warning that the “psychological impact of a nuclear detonation might well be disastrous” and that “there will likely be a tendency to blame the ‘irresponsible’ military and scientists.” Moreover, the study pointed out that the safety of nuclear weapons already in the American stockpile had been measured solely by the risk of a technical malfunction. Human error had been excluded as a possible cause of accidents; it was thought too complex to quantify. The AFSWP study criticized that omission: “The unpredictable behavior of human beings is a grave problem when dealing with nuclear weapons.”
In 1957 the Armed Forces Special Weapons Project offered a new set of acceptable probabilities. For example, it proposed that the odds of a hydrogen bomb exploding accidentally — from all causes, while in storage, during the entire life of the weapon — should be one in ten million. And the lifespan of a typical weapon was assumed to be ten years. At first glance, those odds made the possibility of a nuclear disaster seem remote. But if the United States kept ten thousand hydrogen bombs in storage for ten years, the odds of an accidental detonation became much higher — one in a thousand. And if those weapons were removed from storage and loaded onto airplanes, the AFSWP study proposed some acceptable probabilities that the American public, had it been informed, might not have found so acceptable. The odds of a hydrogen bomb detonating by accident, every decade, would be one in five. And during that same period, the odds of an atomic bomb detonating by accident in the United States would be about 100 percent.
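The jump from "one in ten million" per weapon to "one in a thousand" for the stockpile follows from simple aggregation. The sketch below illustrates that arithmetic under the simplifying assumption that each weapon is an independent trial; the function name and the independence assumption are mine, not the AFSWP study's.

```python
# Illustrative arithmetic only: treats each weapon as an independent
# trial, an assumption of this sketch rather than the AFSWP study.

def fleet_accident_odds(per_weapon_odds: float, num_weapons: int) -> float:
    """Probability of at least one accidental detonation across a fleet,
    given the odds of an accident for a single weapon over its lifetime."""
    return 1.0 - (1.0 - per_weapon_odds) ** num_weapons

# One in ten million per hydrogen bomb, over its ten-year life in storage:
per_weapon = 1e-7

# Ten thousand such weapons kept in storage for that decade:
fleet = fleet_accident_odds(per_weapon, 10_000)
print(fleet)  # approximately 0.001, i.e., one in a thousand
```

For small per-weapon odds, the result is close to simply multiplying the odds by the number of weapons, which is why ten thousand weapons at one in ten million each yields roughly one in a thousand.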
All of those probabilities, acceptable or unacceptable, were merely design goals. They were based on educated guesses, not hard evidence, especially when human behavior was involved. The one-point safety of a nuclear weapon seemed like a more straightforward issue. It would be determined by phenomena that were quantifiable: the velocity of high explosives, the mass and geometry of a nuclear core, the number of fissions that could occur during an asymmetrical implosion. But even those things were haunted by mathematical uncertainty. The one-point safety tests at the Nevada Test Site had provided encouraging results, and yet the behavior of a nuclear weapon in an “abnormal environment” — like that of a fuel fire ignited by a plane crash — was still poorly understood. During a fire, the high explosives of a weapon might burn; they might detonate; or they might burn and then detonate. And different weapons might respond differently to the same fire, based on the type, weight, and configuration of their high explosives. For firefighting purposes, each weapon was assigned a “time factor” — the amount of time you had, once a weapon was engulfed in flames, either to put out the fire or to get at least a thousand feet away from it. The time factor for the Genie was three minutes.
Even if a weapon could be made fully one-point safe, it might still detonate by accident. A glitch in the electrical system could potentially arm a bomb and trigger all its detonators. Carl Carlson, a young physicist at Sandia, came to believe that the design of a nuclear weapon’s electrical system was the “real key” to preventing accidental detonations. The heat of a fire might start the thermal batteries, release high-voltage electricity into the X-unit, and then set off the bomb. To eliminate that risk, heat-sensitive fuses were added to every sealed-pit weapon. At a temperature of 300 degrees Fahrenheit, the fuses would blow, melting the connections between the batteries and the arming system. It was a straightforward, time-honored way to interrupt an electrical circuit, and it promised to ensure that a high temperature wouldn’t trigger the detonators. But Carlson worried that in other situations a firing signal could still be sent to a nuclear weapon by accident or by mistake.
A strong believer in systems analysis and the use of multiple disciplines to solve complex questions, Carlson thought that adding heat-sensitive fuses to nuclear weapons wasn’t enough. The real safety problem was more easily stated than solved: bombs were dumb. They responded to simple electrical inputs, and they had no means of knowing whether a signal had been sent deliberately. In the cockpit of a SAC bomber, the T-249 control box made it easy to arm a weapon. First you flicked a toggle switch to ON, allowing power to flow from the aircraft to the bomb. Then you turned a knob from the SAFE position either to GROUND or to AIR, setting the height at which the bomb would detonate. That was all it took — and if somebody forgot to return the knob to SAFE, the bomb would remain armed, even after the power switch was turned off. Writing on behalf of Sandia and the other weapon labs, Carlson warned that an overly simplistic electrical system increased the risk of a full-scale detonation during an accident: “a weapon which requires only the receipt of intelligence from the delivery system for arming will accept and respond to such intelligence whether the signals are intentional or not.”
The need for a nuclear weapon to be safe and the need for it to be reliable were often in conflict. A safety mechanism that made a bomb less likely to explode during an accident could also, during wartime, render it more likely to be a dud. The contradiction between these two design goals was succinctly expressed by the words “always/never.” Ideally, a nuclear weapon would always detonate when it was supposed to — and never detonate when it wasn’t supposed to. The Strategic Air Command wanted bombs that were safe and reliable. But most of all, it wanted bombs that worked. A willingness to take personal risks was deeply embedded in SAC’s institutional culture. Bomber crews risked their lives every time they flew a peacetime mission, and the emergency war plan missions for which they trained would be extremely dangerous. The crews would have to elude Soviet fighter planes and antiaircraft missiles en route to their targets, survive the blast effects and radiation after dropping their bombs, and then somehow find a friendly air base that hadn’t been destroyed. They would not be pleased, amid the chaos of thermonuclear warfare, to learn that the bombs they dropped didn’t detonate because of a safety device.
Civilian weapon designers, on the other hand, were bound to have a different perspective — to think about the peacetime risk of an accident and err on the side of never. Secretary of the Air Force Quarles understood the arguments on both sides. He worried constantly about the Soviet threat. And he had pushed the Atomic Energy Commission to find methods of achieving “a higher degree of nuclear safing.” But if compromises had to be made between always and never, he made clear which side would have to bend. “Such safing,” Quarles instructed, “should, of course, cause minimum interference with readiness and reliability.”
“A super long-distance intercontinental multistage ballistic rocket was launched a few days ago,” the Soviet Union announced during the last week of August 1957. The news didn’t come as a surprise to Pentagon officials, who’d secretly monitored the test flight with help from a radar station in Iran. But the announcement six weeks later that the Soviets had placed the first manmade satellite into orbit caught the United States off guard — and created a sense of panic among the American people. Sputnik 1 was a metallic sphere, about the size of a beach ball, that could do little more than circle the earth and transmit a radio signal of “beep-beep.” Nevertheless, it gave the Soviet Union a huge propaganda victory. It created the impression that “the first socialist society” had surpassed the United States in missile technology and scientific expertise. The successful launch of Sputnik 2, on November 3, 1957, seemed even more ominous. The new satellite weighed about half a ton; rocket engines with enough thrust to lift that sort of payload could be used to deliver a nuclear warhead. Sputnik 2 also carried the first animal to orbit the earth, a small dog named Laika — evidence that the Soviet Union was planning to put a man in space. Although the Soviets boasted that Laika lived for a week in orbit, wearing a little space suit, housed in a pressurized compartment with an ample supply of food and water, she actually died within a few hours of liftoff.