The origins and the economic importance of the Internet are part of a much larger debate about the nature of technological innovation and economic growth. The industrial revolution reshaped the material basis of society, introducing technologies and products we still use today. But there are widely differing views on just how that happened. Pessimists like economist Tyler Cowen believe that a handful of breakthrough innovations drove America’s economic engine over the last one hundred years. He sees the decline of productivity growth (the pace of improvement in output per unit of input, such as labor, capital, and machinery) in the US economy as a sign that we have finally exhausted the stockpile of breakthroughs from the late nineteenth and early twentieth centuries.
He writes: “Today... apart from the seemingly magical internet, life in broad material terms isn’t so different from what it was in 1953. We still drive cars, use refrigerators, and turn on the light switch, even if dimmers are more common these days. The wonders portrayed in The Jetsons, the space-age television cartoon from the 1960s, have not come to pass.... Life is better and we have more stuff, but the pace of change has slowed down compared to what people saw two or three generations ago.” Not only does Cowen argue that big breakthroughs are the true source of technological progress, he doesn’t see anything new in the pipeline of the same magnitude. The result, he concludes, is an inevitable “great stagnation.”36
Where Cowen sees scarcity, Google’s chief economist Hal Varian sees abundance. For Varian, the big breakthroughs of the industrial revolution happened only after, and only because of, a new substrate of interoperable technological components that were invented first. In a 2008 interview, he described this process of “combinatorial innovation”: “if you look historically, you’ll find periods in history where there would be the availability of... different component parts that innovators could combine or recombine to create new inventions. In the 1800s, it was interchangeable parts. In 1920, it was electronics. In the 1970s, it was integrated circuits. Now what we see is a period where you have Internet components, where you have software, protocols, languages, and capabilities to combine these component parts in ways that create totally new innovations.”
Focusing on the inputs to technological innovation instead of the outputs tells a very different story of how earlier breakthroughs came about, the technological and economic significance of the Internet, and the prospects for a new age of innovation in our own future. For Cowen, the Web (and presumably ubiquitous computing, though he doesn’t seem to be aware of it) is merely the last sputter of a technological revolution that began over a century ago. But for Varian, these technologies form the seedbed for potentially rapid, transformative creation via a million tiny steps.
The Internet itself is a case in point for contrasting these two views of technological innovation. In the 1970s, telecommunications companies and academic computer scientists battled over the design of the future Internet. Industry engineers backed X.25, a complex scheme for routing data across computer networks. The computer scientists favored a simpler, collaborative, ad hoc approach. As Joi Ito, director of the MIT Media Lab, describes it:
The battle between X.25 and the Internet was the battle between heavily funded, government backed experts and a loosely organized group of researchers and entrepreneurs. The X.25 people were trying to plan and anticipate every possible problem and application. They developed complex and extremely well-thought-out standards that the largest and most established research labs and companies would render into software and hardware.
The Internet, on the other hand, was being designed and deployed by small groups of researchers following the credo “rough consensus and running code,” coined by one of its chief architects, David Clark. Instead of a large inter-governmental agency, the standards of the Internet were stewarded by small organizations, which didn’t require permission or authority. It functioned by issuing the humbly named “Request for Comment” or RFCs as the way to propose simple and light-weight standards against which small groups of developers could work on the elements that together became the Internet.
The telecommunications industry saw the design and construction of the next-generation Internet as a big breakthrough. The academics saw it as a combinatorial endeavor.
TCP/IP, the protocol for transmitting data championed by the researchers, won out in the end. Undeniably, we are better off as a result. TCP/IP’s simplicity allowed all kinds of organizations to implement it quickly. Its openness allowed anyone to connect freely and inexpensively. The ad hoc nature of its ongoing refinement encouraged the best and brightest minds to contribute to making it better. But most importantly, freed of the need to anticipate every possible use or flaw, it allowed people to experiment. It’s questionable whether the things that make the Internet so valuable today—the Web, Voice over IP, social networks—could have evolved in a network so rigidly defined by the telecommunications industry. The technical, social, and economic evolution of the Internet was, Ito argues, a “triumph of distributed innovation over centralized innovation.”39
Which style of innovation is right for smart cities?
There are aspects of what Cisco, IBM, Siemens, and other technology giants are planning for smart cities that aspire to breakthrough status. They are weaving an array of new technologies—the Internet of Things, predictive analytics, and ubiquitous video communications—into the city on the scale of the electrical grid a century ago. If they succeed in their ambitions, Cowen will be hard-pressed to deny it. But much of what they have done to date is simply cobble together solutions from off-the-shelf components, with little investment in research and development of new core technologies. It is, in a way, the spitting image of combinatorial innovation.
More worryingly, though, the technology giants are out of sync with what we know about how cities need to evolve, at least in part, from the bottom up. They are making choices about technology, business, and governance with little or no input from the broader community of technologists, civic leaders, and citizens themselves. That is holding them back. Smart cities could also evolve from the bottom up, if we let them. Both the evolution of the Internet and the history of city planning show us that.
But it is also crucial to recognize that the Internet didn’t just emerge out of thin air. The US government played a huge role in kick-starting it. As Los Angeles Times columnist Michael Hiltzik wrote, “Private enterprise had no interest in something so visionary and complex, with questionable commercial opportunities. Indeed, the private corporation that then owned monopoly control over America’s communications network, AT&T, fought tooth and nail against the ARPANet,” the Defense Department’s research network that pioneered the technologies that power the Internet.40 One can find National Science Foundation research grants in the DNA of almost every major advance in the software, hardware, and network designs that power the Internet today.
This is a dilemma that poses some tough choices. Do we try to pick winners and rally our efforts behind a handful of big transformative projects? Some parts of the smart city, such as reengineering the electric power grid, seem to call for Apollo program-scale breakthroughs. For most of the rest, the right path is far less clear. Should we instead focus on laying the foundations for a diversity of experimentation to unfold, as we did with the Web? Or, if we do both, how do we balance the two and tie them together in productive ways? None of the answers are obvious yet.