The pervasiveness of bugs in smart cities is disconcerting. We don’t yet have a clear grasp of where the biggest risks lie, when and how they will cause systems to fail, or what the chain-reaction consequences will be. Who is responsible when a smart city crashes? And how will citizens help debug the city? Today, we routinely send anonymous bug reports to software companies when our desktop crashes. Is this a model that’s portable to the world of embedded and ubiquitous computing?
Counterintuitively, buggy smart cities might strengthen and increase pressure for democracy. Wade Roush, who studied the way citizens respond to large-scale technological disasters like blackouts and nuclear accidents, concluded that “control breakdowns in large technological systems have educated and radicalized many lay citizens, enabling them to challenge both existing technological plans and the expertise and authority of the people who carry them out.” This public reaction to disasters of our own making, he argues, has spurred the development of “a new cultural undercurrent of ‘technological citizenship’ characterized by greater knowledge of, and skepticism toward, the complex systems that permeate modern societies.”14 If the first generation of smart cities does truly prove fatally flawed, from their ashes may grow the seeds of more resilient, democratic designs.
In a smart city filled with bugs, will our new heroes be the adventurous few who can dive into the ductwork and flush them out? Leaving the Broad Institute’s Blue Screen of Death behind, I headed back in the rain to my hotel, reminded of Brazil, the 1985 film by Monty Python troupe member Terry Gilliam, which foretold an autocratic smart city gone haywire. Arriving at my room, I opened my laptop and started up a Netflix stream of the film. As the scene opens, the protagonist, Sam Lowry, squats sweating by an open refrigerator. Suddenly the phone rings, and Harry Tuttle, played by Robert De Niro, enters. “Are you from Central Services?” asks Lowry, referring to the uncaring bureaucracy that runs the city’s infrastructure. “They’re a little overworked these days,” Tuttle replies. “Luckily I intercepted your call.” Tuttle is a guerrilla repairman, a smart-city hacker valiantly trying to keep residents’ basic utilities up and running. “This whole system of yours could be on fire, and I couldn’t even turn on a kitchen tap without filling out a twenty-seven-B-stroke-six.”
Let’s hope that’s just a story. Some days, it doesn’t feel so far-fetched.
Brittle
Creation myths rely on faith as much as fact. The Internet’s is no different. Today, netizens everywhere believe that the Internet began as a military effort to design a communications network that could survive a nuclear attack.
The fable begins in the early 1960s with the publication of “On Distributed Communications” by Paul Baran, a researcher at the RAND think tank. At the time, Baran had been tasked with developing a scheme for an indestructible telecommunications network for the US Air Force. Cold War planners feared that the hub-and-spoke structure of the telephone system was vulnerable to a preemptive Soviet first strike. Without a working communications network, the United States would not be able to coordinate a counterattack, and the strategic balance of “mutually assured destruction” between the superpowers would be upset. What Baran proposed, according to Harvard University science historian Peter Galison, “was a plan to remove, completely, critical nodes from the telephone system.”15 In “On Distributed Communications” and a series of pamphlets that followed, he demonstrated mathematically how a less centralized latticework of network hubs, interconnected by redundant links, could sustain heavy damage without becoming split into isolated sections.16 The idea was picked up by the Pentagon’s Advanced Research Projects Agency (ARPA), a group set up to fast-track R&D after the embarrassment of the Soviet space program’s Sputnik launch in 1957. ARPANET, the Internet’s predecessor, was rolled out in the early 1970s.
So legend has it.
The real story is more prosaic. There were indeed real concerns about the survivability of military communications networks. But RAND was just one of several research groups that were broadly rethinking communications networks at the time—parallel efforts on distributed communications were being led by Lawrence Roberts at MIT and by Donald Davies and Roger Scantlebury at the United Kingdom’s National Physical Laboratory. The three efforts remained unaware of one another until a 1967 conference organized by the Association for Computing Machinery in Gatlinburg, Tennessee, where Roberts met Scantlebury, who by then had learned of Baran’s earlier work. And ARPANET wasn’t a military command network for America’s nuclear arsenal, or any arsenal for that matter. It wasn’t even classified. It was a research network. As Robert Taylor, who oversaw the ARPANET project for the Pentagon, explained in 2004 in a widely forwarded e-mail, “The creation of the ARPAnet was not motivated by considerations of war. The ARPAnet was created to enable folks with common interests to connect to one another through interactive computing even when widely separated by geography.”
We also like to think that the Internet is still widely distributed as Baran envisioned, when in fact it’s perhaps the most centralized communications network ever built. In the beginning, ARPANET did indeed hew closely to that distributed ideal. A 1977 map of the growing network shows at least four redundant transcontinental routes, run over phone lines leased from AT&T, linking up the major computing clusters in Boston, Washington, Silicon Valley, and Los Angeles. Metropolitan loops created redundancy within those regions as well.19 If the link to your neighbor went down, you could still reach them by sending packets around in the other direction. This approach is still commonly used today.
By 1987, the Pentagon was ready to pull the plug on what it had always considered an experiment. But the research community was hooked, so plans were made to hand over control to the National Science Foundation, which merged the civilian portion of the ARPANET with its own research network, NSFNET, launched a year earlier. In July 1988, NSFNET turned on a new national backbone network that dropped the redundant and distributed grid of ARPANET in favor of a more efficient and economical hub-and-spoke arrangement. Much like the air-transportation network today, consortia of universities pooled their resources to deploy their own regional feeder networks (often with significant NSF funding), which linked up into the backbone at several hubs scattered strategically around the country.
Just seven years later, in April 1995, the National Science Foundation handed over management of the backbone to the private sector. The move would lead to even greater centralization by designating just four major interconnection points through which bits would flow across the country. Located outside San Francisco, Washington, Philadelphia, and Chicago, these hubs were the center not just of America’s Internet but the world’s. At the time, an e-mail from Europe to Asia would almost certainly transit through Virginia and California. Since then, things have centralized even more. One of those hubs, in Ashburn, Virginia, is home to what is arguably the world’s largest concentration of data centers, some forty buildings boasting the collective footprint of twenty-two Walmart Supercenters. Elsewhere, Internet infrastructure has coalesced around preexisting hubs of commerce. Today, you could knock out a handful of buildings in Manhattan where the world’s big network providers connect to each other—60 Hudson Street, 111 Eighth Avenue, 25 Broadway—and cut off a good chunk of transatlantic Internet capacity. (Fiber isn’t the first technology to link 25 Broadway to Europe. The elegant 1921 edifice served as headquarters and main ticket office for the great ocean-crossing steamships of the Cunard Line until the 1960s.)