A new science of cities is clearly in the making. In fact, it is perhaps the real promise of smart cities. Even if they fail to deliver efficiency, security, sociability, resilience, and transparency—the ambitions of all those stakeholders this book has covered—they will undoubtedly be incredible laboratories for studying how cities grow, adapt, and decline.
“It is of great urgency that we understand cities in a profound and predictive fashion,” West has said.59 His alarm is appropriate. But is it a psychohistorian’s dream to think we could compute with any certainty the behavior of something as complex as an entire city, and do it in a way that people can actually use to solve problems? The field certainly has its work cut out for it, and we’ve seen the many failed attempts to do so. “Data enthusiasm,” as Peter Hirshberg called it, rules the day and is fueling the new scientific interest in cities.60 But even the biggest urban datasets are likely to prove tantalizingly incomplete. As Batty told me during a 2010 interview, “A lot of the old questions which you’d think might be informed by new data are not.” When we spoke, he was poring over a new dataset of transactions from the London Underground’s Oyster payment-card system. The only problem, he pointed out, was that while some 6.2 million Londoners swiped into the system on an average weekday, only 5.4 million swiped out. Over eight hundred thousand people—nearly 13 percent—“leaked” out of the sensor web each day, through exit gates left open during rush hour. “It’s as hard as it ever was to get transportation data that is useful,” he lamented. “You still need household surveys to actually find out where people are going.” A sounder urban science, then, will have to ask questions that produce knowledge we can act on, as well as generate data that can seed new theories—it can’t just mine data exhaust. As Batty concluded, “There’s all this new stuff, but the old questions are still here and they’ve not been answered.”61
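Batty’s leakage figure is easy to verify. A minimal sanity check, using only the tap counts quoted above (the variable names are mine):

```python
# Sanity check of the Oyster "leakage" figures quoted above.
# The tap counts come from the text; the rest is arithmetic.
taps_in = 6_200_000   # average weekday entries recorded
taps_out = 5_400_000  # average weekday exits recorded

leaked = taps_in - taps_out        # journeys with no recorded exit
share = leaked / taps_in           # fraction of entries that "leak"
print(f"{leaked:,} journeys unrecorded ({share:.1%} of entries)")
# → 800,000 journeys unrecorded (12.9% of entries)
```

The missing 800,000 exits, nearly 13 percent of entries, are exactly the gap that forces researchers back to household surveys.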
Slow Data
The big difference between the control revolution that occurred in the cities of the late nineteenth century and the one that’s happening now is that the problem then was a lack of communications and a lack of data. Our ability to manufacture and mobilize the physical world outstripped our abilities to communicate and coordinate. Today, the problem is the opposite: we have abundant data and instantaneous communication, and a growing ability not just to sense what is happening but to anticipate and predict what will happen in the future. The problem today isn’t figuring out how to accelerate the flow of people, materials, and goods, but how to use less energy by slowing them down. Big data harvested from the exhaust of new sensor networks and everyday transactions promises to shed light on what makes cities tick, streamline their day-to-day management, and inform our long-term plans. But we cannot pretend that we have all the data we need, or that there is always inherent value in mining it. In 1967, as IBM’s sales of mainframe computers to corporations and governments were booming, William Bruce Cameron, an American sociologist, made a subtle but stunning observation about the nature of data and society. “It would be nice if all of the data which sociologists require could be enumerated,” he wrote, “because then we could run them through IBM machines and draw charts as the economists do. However, not everything that can be counted counts, and not everything that counts can be counted.”
For all of our big data, there is still a small universe of crucial bits missing. I think of them as “slow data.” Slow data isn’t just about plugging the gaps in our sensory infrastructure that prevent researchers like Batty from charting a complete empirical view of the city. It is also a tool for unraveling the inevitable spiral of efficiency and consumption that our present conceptions of smart cities could unleash.
The fundamental pitch of technology giants’ smart cities is that we can have our cake and eat it too. We can accelerate the flow of information to reduce the flow of resources. But this thinking is flawed. Gains in efficiency often lead to “rebound” consumption. The initial effect of any widely adopted technology that uses a resource—say electricity—more efficiently is to reduce demand for that resource, and with it the cost. But a cheaper resource spurs us to consume more of it, often in new applications for which it was previously too costly as an input. Urban planners have long been familiar with their own version of the rebound effect (also known as the Jevons paradox) in transportation planning. Building more roads never reduces traffic for long, but rather unleashes latent demand that was there all along. When congestion is reduced due to the new capacity, the opportunity cost of driving falls, spurring drivers who would never have ventured onto the previously clogged road to sally forth.
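The rebound logic can be made concrete with a toy model. In the sketch below, all numbers are hypothetical and the constant-elasticity demand curve is my own simplifying assumption, not anything from the text: when demand for the underlying service is inelastic, an efficiency gain saves resources overall; when it is elastic, consumption rebounds past the starting point—the Jevons paradox.

```python
# Toy rebound-effect model (illustrative; constant-elasticity demand assumed).
def resource_use_after_efficiency_gain(base_use, efficiency_gain, elasticity):
    """Resource consumed after an efficiency gain.

    efficiency_gain: e.g. 2.0 means twice the service per unit of resource.
    elasticity: magnitude of the price elasticity of demand for the service.
    """
    # The effective price of the service falls by the efficiency factor,
    # so demand for the service rises by efficiency_gain ** elasticity.
    service_demand = efficiency_gain ** elasticity
    # Resource use = service demanded / service delivered per unit of resource.
    return base_use * service_demand / efficiency_gain

# Inelastic demand: doubling efficiency still cuts resource use (~61.6).
print(resource_use_after_efficiency_gain(100, 2.0, 0.3))
# Elastic demand: consumption rebounds past the baseline (~114.9).
print(resource_use_after_efficiency_gain(100, 2.0, 1.2))
```

The crossover at an elasticity of 1.0 is the whole argument in miniature: efficiency alone guarantees nothing about total consumption.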
Over the coming decades, we’ll witness just such a process play out as automated vehicles take to the road. So far, the excitement over innovations like Google’s self-driving car has been about safety and convenience. You’ll be able to surf the net during your commute. You’ll never have to worry about your drunken teenager wrapping the family sedan around a telephone pole. But the even greater economic potential of self-driving cars is that they could double road capacity by reducing the spacing between cars and the jams caused by a whole host of idiosyncratic human behaviors. If that spurs people who would have stayed home to take new trips, we’ll have to double fuel economy just to hold even. Reducing overall emissions would require dramatic increases in efficiency to keep up with the expanding volume of traffic.
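The break-even claim is simple arithmetic. A back-of-envelope sketch, with purely illustrative mileage and fuel-economy numbers of my own choosing:

```python
# Back-of-envelope version of the claim above (illustrative numbers only):
# if automation doubles road throughput and induced demand fills the new
# capacity, total fuel burned stays flat only if fuel economy doubles too.
def total_fuel(vehicle_miles, mpg):
    """Gallons consumed for a given travel volume and fuel economy."""
    return vehicle_miles / mpg

baseline = total_fuel(vehicle_miles=1_000_000, mpg=25)
# Capacity doubles and latent demand fills it; fuel economy unchanged:
rebound = total_fuel(vehicle_miles=2_000_000, mpg=25)
# Fuel economy doubles as well—consumption merely holds even:
offset = total_fuel(vehicle_miles=2_000_000, mpg=50)
print(baseline, rebound, offset)  # → 40000.0 80000.0 40000.0
```

Doubling efficiency only cancels the doubled traffic; actually cutting emissions requires efficiency to outrun the growth in travel.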
It shouldn’t surprise us to find these cycles of increasing consumption that lead nowhere. They are endemic to industrial capitalism. In The Jungle, Upton Sinclair’s reality drama about the harsh working conditions of the Chicago stockyards at the turn of the twentieth century, we learn about the process of “speeding-up the gang” used by slaughterhouse bosses to boost output. “There were portions of the work which determined the pace of the rest, and for these they had picked men whom they paid high wages, and whom they changed frequently. You might easily pick out these pacemakers, for they worked under the eye of the bosses, and they worked like men possessed.”63 In smart cities, technologies of automation take the place of the speed-up men. They may whisk away the consequences of consumption, and make us more efficient as individuals at the things we do now. But they do nothing to stack the deck for a lower-emissions civilization in the long run.
By automating conservation, designing it in, these smart cities don’t offer us any incentives to decide to cut back. That’s where slow data comes in. Slow data must be collected, sparingly and by design, not harvested opportunistically from data exhaust. Rather than hide the trade-offs between consumption and conservation, slow data makes them explicit. It makes us choose. And slow data leverages our humanness, by generating social interactions that help address these vexing problems.
As an example, take the problem of finding lost objects. The big data approach would be to tag and track everything, perhaps using RFID, the wireless barcode technology whose tiny plastic tags cost just a few cents apiece. They are already used in clothing stores, where they expedite checkout and reduce the cost of inventory and security. As an array of scanners rolls out across the smart city, the Internet of Things will become searchable in real time. All it would take to find anything, anywhere, would be a piece of software to scan the logs—trillions of measurements which, if collected in one place, will be the biggest set of big data there is.