If a machine can prove indistinguishable from a human, we should award it the respect we would to a human—we should accept that it has a mind.
Stevan Harnad
The most significant source of objection to my thesis on the law of accelerating returns and its application to the amplification of human intelligence stems from the linear nature of human intuition. As I described earlier, each of the several hundred million pattern recognizers in the neocortex processes information sequentially. One of the implications of this organization is that we have linear expectations about the future, so critics apply their linear intuition to information phenomena that are fundamentally exponential.
I call objections along these lines “criticism from incredulity,” in that exponential projections seem incredible given our linear predilection, and they take a variety of forms. Microsoft cofounder Paul Allen (born in 1953) and his colleague Mark Greaves recently articulated several of them in an essay titled “The Singularity Isn’t Near” published in Technology Review magazine. 1 While my response here is to Allen’s particular critiques, they represent a typical range of objections to the arguments I’ve made, especially with regard to the brain. Although Allen references The Singularity Is Near in the title of his essay, his only citation in the piece is to an essay I wrote in 2001 (“The Law of Accelerating Returns”). Moreover, his article does not acknowledge or respond to arguments I actually make in the book. Unfortunately, I find this often to be the case with critics of my work.
When The Age of Spiritual Machines was published in 1999, augmented later by the 2001 essay, it generated several lines of criticism, such as: Moore’s law will come to an end; hardware capability may be expanding exponentially but software is stuck in the mud; the brain is too complicated; there are capabilities in the brain that inherently cannot be replicated in software; and several others. One of the reasons I wrote The Singularity Is Near was to respond to those critiques.
I cannot say that Allen and similar critics would necessarily have been convinced by the arguments I made in that book, but at least he and others could have responded to what I actually wrote. Allen argues that “the Law of Accelerating Returns (LOAR)…is not a physical law.” I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a lower level. A classic example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, it models each particle as following a random walk, so by definition we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are quite predictable to a high degree of precision, according to the laws of thermodynamics. So it is with the law of accelerating returns: Each technology project and contributor is unpredictable, yet the overall trajectory, as quantified by basic measures of price/performance and capacity, nonetheless follows a remarkably predictable path.
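To make the statistical point concrete, here is a minimal simulation sketch (purely illustrative; the step counts, ensemble size, and code are my own and assume the simplest possible random-walk model): each individual walk ends somewhere unpredictable, yet the averages over a large ensemble land almost exactly where theory says they should.

```python
import random

# Each "particle" (or, by analogy, each research project) takes unpredictable
# random steps, yet the averages over a large ensemble are highly predictable --
# the same statistical effect that lets the laws of thermodynamics (and, by
# analogy, the law of accelerating returns) hold reliably in the aggregate.
def random_walk(steps, rng):
    position = 0.0
    for _ in range(steps):
        position += rng.choice([-1.0, 1.0])
    return position

rng = random.Random(42)
walks = [random_walk(500, rng) for _ in range(5_000)]

single = walks[0]                                      # one walk: essentially unpredictable
mean = sum(walks) / len(walks)                         # ensemble mean: close to 0
mean_square = sum(w * w for w in walks) / len(walks)   # close to 500, the step count

print(f"one walk ends at  {single:+.1f}")
print(f"ensemble mean     {mean:+.2f}  (theory: 0)")
print(f"mean square       {mean_square:.1f}  (theory: 500)")
```

No single run of the loop can be forecast, but the ensemble statistics can be, and that is all the laws of thermodynamics (or the law of accelerating returns) claim to predict.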
If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it’s the product of a sufficiently dynamic system of competitive projects that a basic measure of its price/performance, such as calculations per second per constant dollar, follows a very smooth exponential path, dating back to the 1890 American census as I noted in the previous chapter. While the theoretical basis for the LOAR is presented extensively in The Singularity Is Near, the strongest case for it is made by the extensive empirical evidence that I and others present.
Allen writes that “these ‘laws’ work until they don’t.” Here he is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining, for example, the trend of creating ever smaller vacuum tubes—the paradigm for improving computation in the 1950s—it’s true that it continued until it didn’t. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price/performance of computation going, and that led to the fifth paradigm (Moore’s law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore’s law will come to an end. The semiconductor industry’s “International Technology Roadmap for Semiconductors” projects seven-nanometer features by the early 2020s. 2 At that point key features will be the width of thirty-five carbon atoms, and it will be difficult to continue shrinking them any farther. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, computing in three dimensions, to continue exponential improvement in price/performance. Intel projects that three-dimensional chips will be mainstream by the teen years; three-dimensional transistors and 3-D memory chips have already been introduced. This sixth paradigm will keep the LOAR going with regard to computer price/performance to a time later in this century when a thousand dollars’ worth of computation will be trillions of times more powerful than the human brain. 3 (It appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.) 4
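One common way to picture this succession of paradigms is as a cascade of S-curves: each paradigm grows, levels off, and hands the baton to the next, and the combined envelope approximates a single smooth exponential. The sketch below is purely illustrative; the number of paradigms, their ceilings, and their timing are invented for the example, not drawn from actual industry data.

```python
import math

# Illustrative model: each paradigm follows logistic (S-curve) growth. Each new
# paradigm ramps up roughly as its predecessor saturates, with a ceiling 100x
# higher, so the combined curve keeps rising at a roughly constant exponential rate.
def logistic(t, ceiling, midpoint, rate=1.0):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Five invented paradigms: ceilings 1, 100, 10,000, ... with midpoints 10 units apart.
paradigms = [(100.0 ** k, 10.0 * k + 5.0) for k in range(5)]

for t in range(0, 51, 5):
    total = sum(logistic(t, ceiling, midpoint) for ceiling, midpoint in paradigms)
    # On a log scale the combined curve climbs roughly linearly --
    # i.e., overall growth stays approximately exponential across paradigms.
    print(f"t={t:2d}  combined price/performance ~ 10^{math.log10(total):.1f}")
```

Any single S-curve flattens out, just as vacuum-tube miniaturization did, but as long as a successor paradigm picks up where the last one left off, the aggregate trend keeps its exponential character.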
Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware. In The Singularity Is Near I addressed this issue at length, citing different methods of measuring complexity and capability in software that do demonstrate a similar exponential growth. 5 One recent study (“Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology,” by the President’s Council of Advisors on Science and Technology) states the following:
Even more remarkable—and even less widely understood—is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed . The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade…. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.
Note that linear programming, which Grötschel cites above as having benefited from an overall performance improvement of 43 million to 1, is the mathematical technique used to optimally assign resources in a hierarchical memory system such as the HHMM that I discussed earlier. I cite many other examples like this in The Singularity Is Near. 6
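As a rough consistency check on the figures in the passage above (the arithmetic here is mine, using only the numbers quoted): eighty-two years expressed in minutes comes to about 43 million, and multiplying the hardware factor by the algorithmic factor reproduces the same overall speedup.

```python
# Consistency check on the Grötschel figures quoted above (my arithmetic,
# using only the numbers given in the passage).
MINUTES_PER_YEAR = 365.25 * 24 * 60          # ~525,960 minutes in a year

overall_speedup = 82 * MINUTES_PER_YEAR      # 82 years reduced to ~1 minute
hardware_factor = 1_000                      # faster processors, 1988-2003
algorithm_factor = 43_000                    # better linear-programming algorithms

print(f"82 years in minutes:    {overall_speedup:,.0f}")                  # ~43.1 million
print(f"hardware x algorithms:  {hardware_factor * algorithm_factor:,}")  # 43,000,000
```

Both routes land on roughly 43 million, which is why the report can attribute about a factor of a thousand to hardware and the remaining factor of tens of thousands to algorithmic improvements.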