Despite the existence of many chokepoints, the Internet’s nuke-proof design creation myth has only been strengthened by the fact that the few times it has actually been bombed, it has proven surprisingly resilient. During the spring 1999 aerial bombardment of Serbia by NATO, which explicitly targeted telecommunications facilities along with the power grid, many of the country’s Internet Protocol networks were able to stay connected to the outside world. And the Internet survived 9/11 largely unscathed. Some 3 million telephone lines were knocked out in lower Manhattan alone—a grid the size of Switzerland’s—from damage to a single phone-company building near the World Trade Center. Broadcast radio and TV stations were crippled by the destruction of the north tower, whose rooftop bristled with antennas of every size, shape, and purpose. Panic-dialing across the nation brought the phone system to a standstill. But the Internet hardly blinked.
But while the Internet manages to maintain its messy integrity, the infrastructure of smart cities is far more brittle. As we layer ever more fragile networks and single points of failure on top of the Internet’s still-resilient core, major disruptions in service are likely to be common. And with an increasing array of critical economic, social, and government services running over these channels, the risks are compounded.
The greatest cause for concern is our growing dependence on untethered networks, which puts us at the mercy of a fragile last wireless hop between our devices and the tower. Cellular networks have none of the resilience of the Internet. They are the fainting ladies of the network world—when the heat is on, they’re the first to go down and make the biggest fuss as they do so.
Cellular networks fail in all kinds of ugly ways during crises: damage to towers (fifteen were destroyed around the World Trade Center on 9/11 alone), destruction of the “backhaul” fiber-optic line that links the tower into the grid (many more), and power loss (most towers have just four hours of battery backup). In 2012, flooding caused by Hurricane Sandy cut backhaul to over two thousand cell sites in eight counties in and around New York City and its upstate suburbs (not including New Jersey and Connecticut), and power to nearly fifteen hundred others.24 Hurricane Katrina downed over a thousand cell towers in Louisiana and Mississippi in August 2005, severely hindering relief efforts because the public telephone network was the only communications system shared by the many responding government agencies. In the areas of Japan north of Tokyo annihilated by the 2011 tsunami, the widespread destruction of mobile-phone towers literally rolled the clock back on history, forcing people to resort to radios, newspapers, and even human messengers to communicate. “When cellphones went down, there was paralysis and panic,” the head of emergency communications in the city of Miyako told the New York Times.
The biggest threat to cellular networks in cities, however, is population density. Because wireless carriers try to maximize the profit-making potential of their expensive spectrum licenses, they typically only build out enough infrastructure to connect a fraction of their customers in a given place at the same time. “Oversubscribing,” as this carefully calibrated scheme is known in the business, works fine under normal conditions, when even the heaviest users rarely chat for more than a few hours a day. But during a disaster, when everyone starts to panic, call volumes surge and the capacity is quickly exhausted. On the morning of September 11, for instance, fewer than one in twenty mobile calls were connected in New York City. A decade later, little has changed. During a scary but not very destructive earthquake on the US East Coast in the summer of 2011, cell networks were again overwhelmed. Yet media reports barely noted it. Cellular outages during crises have become so commonplace in modern urban life that we no longer question why they happen or how the problem can be fixed.
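To see why oversubscription collapses so quickly under panic-dialing, consider a rough sketch using the Erlang B model that capacity planners commonly use to size a cell site. The channel count and traffic figures below are purely illustrative assumptions, not carrier data, but they show how a site that comfortably absorbs an ordinary afternoon can block the vast majority of calls during a surge.

```python
def erlang_b(offered_load: float, channels: int) -> float:
    """Blocking probability for `offered_load` erlangs of traffic on `channels` circuits (Erlang B)."""
    blocking = 1.0
    for k in range(1, channels + 1):
        blocking = (offered_load * blocking) / (k + offered_load * blocking)
    return blocking

# Hypothetical cell sector sized for everyday traffic, not a citywide emergency.
CHANNELS = 60          # simultaneous calls the sector can carry
NORMAL_LOAD = 45.0     # erlangs of traffic on an ordinary afternoon
DISASTER_LOAD = 900.0  # roughly twenty times normal, when everyone panic-dials at once

print(f"normal day: {erlang_b(NORMAL_LOAD, CHANNELS):.1%} of call attempts blocked")
print(f"disaster:   {erlang_b(DISASTER_LOAD, CHANNELS):.1%} of call attempts blocked")
```

On an ordinary day the sketch blocks only a fraction of a percent of call attempts; at twenty times normal load it blocks more than nine in ten, roughly in line with the connection rate New Yorkers experienced on the morning of September 11.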
Disruptions in public cloud-computing infrastructure highlight the vulnerabilities of dependence on network apps. Amazon Web Services, the eight-hundred-pound gorilla of public clouds that powers thousands of popular websites, experienced a major disruption in April 2011, lasting three days. According to a detailed report on the incident posted to the company’s website, the outage appears to have been a normal accident, to use Perrow’s term. A botched configuration change in the data center’s internal network, which had been intended to upgrade its capacity, shunted the entire facility’s traffic onto a lower-capacity backup network. Under the severe stress, “a previously unencountered bug” reared its head, preventing operators from restoring the system without risk of data loss. Later, in July 2012, a massive electrical storm cut power to the company’s Ashburn data center, shutting down two of the most popular Internet services—Netflix and Instagram. “Amazon Cloud Hit By Real Cloud,” quipped a PC World headline.29
The cloud is far less reliable than most of us realize, and its fallibility may be starting to take a real economic toll. Google, which prides itself on high-quality data-center engineering, suffered a half-dozen outages in 2008 lasting up to thirty hours. Amazon promises its cloud customers 99.5 percent annual uptime, while Google pledges 99.9 percent for its premium apps service. That sounds impressive until you realize that even after years of increasing outages, the much-maligned American electric power industry averages 99.96 percent uptime, even in its most blackout-prone region, the Northeast. Yet even that tiny gap between reality and perfection carries a huge cost. According to Massoud Amin of the University of Minnesota, power outages and power quality disturbances cost the US economy between $80 billion and $188 billion a year. A back-of-the-envelope calculation published by the International Working Group on Cloud Computing Resiliency tagged the economic cost of cloud outages between 2007 and mid-2012 at just $70 million (not including the July 2012 Amazon outage).33 But as more and more of the vital functions of smart cities migrate to a handful of big, vulnerable data centers, this number is sure to swell in coming years.
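As a quick reality check on those percentages, the arithmetic below converts each uptime figure quoted above into hours of downtime per year. This is a back-of-the-envelope sketch only, since real service-level agreements measure and exclude outages in more complicated ways.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

quoted_uptimes = [
    ("Amazon cloud SLA", 0.995),
    ("Google premium apps SLA", 0.999),
    ("US power grid, Northeast average", 0.9996),
]

for label, uptime in quoted_uptimes:
    downtime_hours = (1 - uptime) * HOURS_PER_YEAR
    print(f"{label:33} {uptime:.2%} uptime = about {downtime_hours:.1f} hours down per year")
```

Amazon’s pledge permits nearly forty-four hours of downtime a year and Google’s about nine, while the grid’s record works out to roughly three and a half, which is what makes the cloud providers’ figures look less impressive on closer inspection.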
Cloud-computing outages could turn smart cities into zombies. Biometric authentication, for instance, which senses our unique physical characteristics to identify individuals, will increasingly determine our rights and privileges as we move through the city—granting physical access to buildings and rooms, personalizing environments, and enabling digital services and content. But biometric authentication is a complex task that will demand access to remote data and computation. The keyless entry system at your office might send a scan of your retina to a remote data center to match against your personnel record before admitting you. Continuous authentication, a technique that uses always-on biometrics—your appearance, gestures, or typing style—will constantly verify your identity, potentially eliminating the need for passwords.34 Such systems will rely heavily on cloud computing, and will break down when it does. It’s one thing for your e-mail to go down for a few hours, but it’s another thing when everyone in your neighborhood gets locked out of their homes.
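To make that failure mode concrete, here is a hypothetical sketch of the keyless entry scenario described above: a door controller that ships a retina scan off to a remote matching service and keeps the door locked whenever the cloud does not answer. The endpoint, timeout, and fail-closed policy are all illustrative assumptions, not a description of any real product.

```python
import urllib.request
import urllib.error

# Hypothetical cloud endpoint that matches a retina scan against personnel records.
AUTH_SERVICE_URL = "https://auth.example-cloud.com/match"

def unlock_door(retina_scan: bytes, timeout_seconds: float = 3.0) -> bool:
    """Return True to release the lock, False to keep it shut.

    The matching decision lives entirely in the remote data center: if the
    cloud is unreachable, this fail-closed design leaves everyone locked out.
    """
    request = urllib.request.Request(
        AUTH_SERVICE_URL,
        data=retina_scan,
        headers={"Content-Type": "application/octet-stream"},
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout_seconds) as response:
            return response.status == 200
    except (urllib.error.URLError, TimeoutError):
        # Cloud outage, cut backhaul, or DNS failure: the door stays locked.
        return False
```

A fail-open variant would release the lock whenever the service is unreachable, trading the lockout problem for a security hole; either way, the dilemma is created by putting the matching step in someone else’s data center.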
Another “cloud” literally floating in the sky above us, the Global Positioning System satellite network, is perhaps the greatest single point of failure for smart cities. Without it, many of the things on the Internet will struggle to ascertain where they are. America’s rivals have long worried about their dependence on the network of twenty-four satellites owned by the US Defense Department. But now even America’s closest allies worry that GPS might be cut off not by military fiat but by neglect. With a much-needed modernization program for the decades-old system way behind schedule, in 2009 the Government Accountability Office lambasted the Air Force for delays and cost overruns that threatened to interrupt service.33 And the stakes of a GPS outage are rising fast, as navigational intelligence permeates the industrial and consumer economy. In 2011 the United Kingdom’s Royal Academy of Engineering concluded that “a surprising number of different systems already have GPS as a shared dependency, so a failure of the GPS signal could cause the simultaneous failure of many services that are probably expected to be independent of each other.”36 For instance, GPS is extensively used for tracking suspected criminals and for land surveying; disruptions in GPS service would force a rapid return to older methods and technologies for these tasks. While alternatives such as Russia’s GLONASS already exist, and the European Union’s Galileo and China’s Compass will add more options in the future, GPS seems likely to spawn its own nasty collection of normal accidents. “No-one has a complete picture,” concluded Martyn Thomas, the lead investigator on the UK study, “of the many ways in which we have become dependent on weak signals 12,000 miles above us.”