Model Transparently
The most powerful information in the smart city is the code that controls it. Exposing the algorithms of smart-city software will be the most challenging task of all. They already govern many aspects of our lives, but we are hardly even aware of their existence.
As I explained in chapter 2, computer modeling of cities began in the 1960s. Michael Batty, the professor who runs one of the world’s leading centers for research in urban simulation at University College London, describes the era as “a milieu dominated by the sense that the early and mid-twentieth century successes in science could extend to the entire realm of human affairs.” Yet after those early failures and a long hibernation, Batty believes a renaissance in computer simulation of cities is upon us. The historical drought of data that starved so many models of the past has given way to a flood. Computing capacity is abundant and cheap. And like all kinds of software, the development of urban simulations is accelerating. “You can build models faster and quicker,” he says. “If they’re no good, you can throw them away much more rapidly than you ever could in the past.”24
The “most important attribute any model should have is transparency,” argued Douglass Lee, the planning scholar who marked the end of that first wave of modeling in a seminal 1973 article. Ironically, while open-source software—which thrives on transparency—is playing a major role in this renaissance in urban modeling research, most models outside the scholarly community today receive little scrutiny. The “many eyes” philosophy that ferrets out bugs in open source is nowhere to be found.
The tools that have governed the growth of cities—the instructions embodied in master plans, maps, and regulation—have long been considered a matter of public record. Models ought to be dissected and put on display in the same way, inviting scrutiny from many perspectives. Doing so would also educate the public about their own city and the tools and methods used to understand and improve it. Imagine Patrick Geddes’s regional survey approach applied to a smart city. What a small leap it would be to turn Rio’s Intelligent Operations Center from mayor’s bunker into a living exhibition of the city, an Outlook Tower for the twenty-first century. Already, an onsite press room allows reporters to broadcast live views of the system in action. But more transparency should follow.
We shouldn’t expect the most important code of the smart city to see the light of day anytime soon. Industry will closely guard its intellectual property. Government agencies will as well, citing security and privacy concerns to mask anxieties about accountability and competence (much as they do with data today).
Citizens will need legal tools to seize the models directly. The Freedom of Information Act and other local sunshine statutes may offer tools for obtaining code or documentation. The impacts could be profound. Imagine how differently the inequitable closings of fire stations in 1970s New York might have played out if the deeply flawed assumptions of RAND’s models had been scrutinized by watchdogs. At the time, there was one case in Boston where citizen opposition “eventually corrected the modeler’s assumptions,” according to Lee.23 Today, assumptions are being encoded into the algorithms of an increasing array of decision-support tools that inform planners and public officials as they execute their duties. But the prospects for greater scrutiny may actually be shrinking. New York’s landmark 2012 open data law, the most comprehensive in the nation, explicitly exempts the city’s computer code from disclosure.
Greater transparency could also increase confidence in computer models among the group most prepared to put them to work solving problems—urban planners themselves. But the modeling renaissance that Batty sees isn’t driven by planners or even social scientists, but by physicists and computer scientists looking for extremely complex problems. As Batty told an audience at MIT in 2011, “Planners don’t use the models because they don’t believe they work.” In their eyes, the results of most models are too coarse to be useful. The models ignore political reality and the messy way groups make decisions. And while new software and abundant data are lowering the cost of creating and feeding city simulations, they are still fantastically expensive undertakings, just as Douglass Lee noted forty years ago.
Without addressing the trust issue through transparency, cybernetics may never again get its foot in the front door of city hall. As journalist David Weinberger has written, “sophisticated models derived computationally from big data—and consequently tuned by feeding results back in—might produce reliable results from processes too complex for the human brain. We would have knowledge but no understanding.” Such models will be scientific curios, but irrelevant to the professionals who plan our cities and the public officials that govern them. Worse, if they are kept under lock and key, they may be held in contempt by citizens who can never hope to understand the software that secretly controls their lives.
The benefits of transparency go beyond just unveiling the gear works of the smart city, challenging invalid or unjust assumptions and debugging code. The process of examination itself can be a constructive part of the city planning process, as we saw with IBM’s foray into system modeling in Portland. “A transparent model is still about as likely to be wrong, but at least concerned persons can investigate the points at which they disagree,” wrote Lee. “By achieving a consensus on assumptions, opposing parties may find they actually agree on [the model’s] conclusions.”
And the process of modeling, if done openly and collaboratively, can create new alliances for progressive change. As IBM’s Justin Cook, who led the development of the system model for Portland in 2011, explains, “you start to see that there’s natural constituencies that have not identified each other... that the people that care a lot about obesity and the people that care a lot about carbon have something in common.”
Fail Gracefully
In Mirror Worlds , computer scientist David Gelernter compared the modern corporation to a fly-by-wire fighter aircraft: “It’s so fantastically advanced that you can’t fly it. It is aerodynamically unstable. It needs to have its ‘flight surfaces’ adjusted by computer every few thousandths of a second or it will bop off on its own, out of control. Modern organizations are in many cases close to the same level of attainment—except that, when they’re out of control, they don’t crash in flames; they shamble on blindly forever.” Engineers would rather describe this state of affairs as “graceful failure.” Instead of completely collapsing, the company (or a smart city) simply lumbers on at a lower level of performance. Compared to a crash, this is actually a pretty good outcome, assuming it eventually stages a full recovery.
We know that smart cities will have bugs. Even when a botched software update brings down an entire subway system, the problem can be fixed, usually quickly. But what happens during a crisis? How will the delicately engineered balance of material and information flows in smart cities, optimized for normal peacetime operation, perform under the severe, sustained stress of a disaster or war? As we saw in chapter 9, these systems routinely break down catastrophically during such events. How can we harden smart cities and ensure that when parts of them fail, they do so in controllable ways, and that vital public services can continue to operate even if they are cut off?
Big technology companies are starting to understand the need for building resilience into smart-city infrastructure. According to IBM’s Colin Harrison, “because of the complexity of these systems, if you start to overload them, they may fail. But if they fail, you’d like them to fail in a soft way, so that the operation continues, the lights don’t go out, and the water doesn’t stop flowing. It might not be as pressurized as you’d like it to be, but at least there will still be water.” It’s an extension of what systems engineers call “dependable computing,” a thirty-year-old set of techniques that will increasingly be applied to urban infrastructure. At the very least, like the robots in Isaac Asimov’s science fiction stories whose code of conduct prevents them from hurting humans, “it protects itself against doing harm to the infrastructure it’s trying to control,” imagines Harrison.31
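To make the idea of “failing softly” concrete, here is a minimal sketch, assuming a hypothetical water-pressure controller: when its optimization model or telemetry feed breaks, it falls back to a conservative preset rather than halting service. Every name and value below is illustrative, not drawn from any vendor’s system.

```python
"""A sketch of graceful failure in the spirit of dependable computing:
prefer the optimized setpoint, but never let a software fault become a
total loss of service. All names and numbers are hypothetical."""

NORMAL_TARGET_PSI = 60    # optimized operating pressure
FALLBACK_TARGET_PSI = 40  # reduced but safe pressure if optimization fails
MIN_SAFE_PSI = 20         # never command pressure below this floor


def read_sensors():
    """Stand-in for telemetry from the distribution network."""
    raise ConnectionError("telemetry link lost")  # simulate a failure


def optimize_setpoint(sensor_data):
    """Stand-in for a complex optimization model."""
    return NORMAL_TARGET_PSI


def choose_setpoint():
    """Degrade gracefully instead of shutting down."""
    try:
        data = read_sensors()
        setpoint = optimize_setpoint(data)
    except Exception as err:
        # Soft failure: keep the water flowing at reduced pressure
        # and report the fault rather than halting the controller.
        print(f"optimizer unavailable ({err}); using fallback setpoint")
        setpoint = FALLBACK_TARGET_PSI
    return max(setpoint, MIN_SAFE_PSI)


if __name__ == "__main__":
    print(f"commanded pressure: {choose_setpoint()} psi")
```

The design choice is the point: the failure path is planned in advance, so an overload or outage degrades service instead of ending it.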