I would also provide a module that identifies open questions in every discipline. As another continual background task, it would search for solutions to them in other disparate areas of knowledge. As I noted, the knowledge in the neocortex consists of deeply nested patterns of patterns and is therefore entirely metaphorical. We can use one pattern to provide a solution or insight in an apparently disconnected field.
As an example, recall the metaphor I used in chapter 4 relating the random movements of molecules in a gas to the random movements of evolutionary change. Molecules in a gas move randomly with no apparent sense of direction. Despite this, virtually every molecule in a gas in a beaker, given sufficient time, will leave the beaker. I noted that this provides a perspective on an important question concerning the evolution of intelligence. Like molecules in a gas, evolutionary changes also move every which way with no apparent direction. Yet we nonetheless see a movement toward greater complexity and greater intelligence, indeed to evolution’s supreme achievement of evolving a neocortex capable of hierarchical thinking. So we are able to gain an insight into how an apparently purposeless and directionless process can achieve an apparently purposeful result in one field (biological evolution) by looking at another field (thermodynamics).
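To see how directionless motion can still produce a one-way outcome, consider a minimal simulation sketch. The beaker "rim" height, the step size, and the particle and step counts below are arbitrary illustrative assumptions, not figures from the chapter; each simulated particle takes unbiased random steps, yet the fraction that has drifted past the rim keeps rising as more time is allowed.

```python
# Illustrative sketch: directionless random motion still empties the "beaker."
# The rim height, step size, particle count, and step counts are arbitrary
# assumptions chosen only to make the trend visible.
import random

def fraction_escaped(num_particles=500, rim=10.0, steps=20000):
    """Simulate particles taking unbiased random steps along one axis.

    A particle "escapes" once its position ever exceeds the rim height.
    Returns the fraction of particles that have escaped within `steps` moves.
    """
    escaped = 0
    for _ in range(num_particles):
        position = 0.0
        for _ in range(steps):
            position += random.uniform(-1.0, 1.0)  # no preferred direction
            if position > rim:
                escaped += 1
                break
    return escaped / num_particles

if __name__ == "__main__":
    for steps in (1000, 5000, 20000):
        print(f"{steps:6d} steps: {fraction_escaped(steps=steps):.0%} escaped")
```

Run long enough, the escaped fraction approaches 100 percent, which is the sense in which a process with no preferred direction can nonetheless empty the beaker.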
I mentioned earlier how Charles Lyell’s insight that minute changes to rock formations by streaming water could carve great valleys over time inspired Charles Darwin to make a similar observation about continual minute changes to the characteristics of organisms within a species. This metaphor search would be another continual background process.
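In software terms, such a background metaphor search might amount to comparing the structural signature of an open question against patterns indexed from unrelated fields. The toy sketch below is purely illustrative: the domains, the structural tags, and the overlap-based matching rule are invented stand-ins for whatever pattern representation an actual system would use.

```python
# Purely illustrative sketch of a background "metaphor search": each pattern
# carries a crude structural signature, and an open question in one field is
# matched against patterns from *other* fields that share part of that signature.
from dataclasses import dataclass

@dataclass
class Pattern:
    domain: str
    name: str
    structure: frozenset  # abstract structural features of the pattern

KNOWLEDGE = [
    Pattern("thermodynamics", "gas molecules escaping a beaker",
            frozenset({"many agents", "random local moves", "directional outcome"})),
    Pattern("geology", "streams carving valleys",
            frozenset({"tiny repeated changes", "large cumulative effect"})),
    Pattern("biology", "evolution of the neocortex",
            frozenset({"many agents", "random local moves", "directional outcome"})),
]

def metaphor_search(question_domain, question_structure, knowledge):
    """Return patterns from other domains whose structure overlaps the question's."""
    return [p for p in knowledge
            if p.domain != question_domain and p.structure & question_structure]

if __name__ == "__main__":
    hits = metaphor_search(
        "biology",
        frozenset({"random local moves", "directional outcome"}),
        KNOWLEDGE,
    )
    for p in hits:
        print(f"candidate metaphor from {p.domain}: {p.name}")
```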
We should provide a means of stepping through multiple lists simultaneously to provide the equivalent of structured thought. A list might be the statement of the constraints that a solution to a problem must satisfy. Each step can generate a recursive search through the existing hierarchy of ideas or a search through available literature. The human brain appears to be able to handle only four simultaneous lists at a time (without the aid of tools such as computers), but there is no reason for an artificial neocortex to have such a limitation.
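A minimal software reading of this idea treats each list as a set of constraints and steps through all of the lists in parallel, keeping only the candidates that satisfy every one of them. The sketch below is schematic: the constraints and candidates are toy stand-ins, and it performs a flat filter rather than the recursive search through a hierarchy of ideas described above, but it shows why an artificial system need not stop at four simultaneous lists.

```python
# Schematic sketch of stepping through several constraint lists at once:
# a candidate survives only if it satisfies every constraint on every active
# list. Carrying dozens of lists is no harder than carrying four.
from typing import Callable, Iterable, List

Constraint = Callable[[str], bool]

def satisfies_all(candidate: str, lists_of_constraints: List[List[Constraint]]) -> bool:
    """Step through every list in parallel; the candidate must pass them all."""
    return all(constraint(candidate)
               for constraints in lists_of_constraints
               for constraint in constraints)

def search(candidates: Iterable[str],
           lists_of_constraints: List[List[Constraint]]) -> List[str]:
    """Return the candidates that satisfy every constraint on every list."""
    return [c for c in candidates if satisfies_all(c, lists_of_constraints)]

if __name__ == "__main__":
    # Toy example: constraints on a word standing in for constraints on an idea.
    lists_of_constraints = [
        [lambda w: len(w) > 4],           # list 1: length requirement
        [lambda w: w.startswith("n")],    # list 2: initial-letter requirement
        [lambda w: "cortex" in w],        # list 3: content requirement
    ]
    print(search(["neuron", "neocortex", "cortex"], lists_of_constraints))
```

Adding a fourth, fifth, or fortieth list is simply another entry in the collection of constraint lists.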
We will also want to enhance our artificial brains with the kind of intelligence that computers have always excelled in, which is the ability to master vast databases accurately and implement known algorithms quickly and efficiently. Wolfram Alpha uniquely combines a great many known scientific methods and applies them to carefully collected data. This type of system is also going to continue to improve given Dr. Wolfram’s observation of an exponential decline in error rates.
Finally, our new brain needs a purpose. A purpose is expressed as a series of goals. In the case of our biological brains, our goals are established by the pleasure and fear centers that we have inherited from the old brain. These primitive drives were initially set by biological evolution to foster the survival of species, but the neocortex has enabled us to sublimate them. Watson’s goal was to respond to Jeopardy! queries. Another simply stated goal could be to pass the Turing test. To do so, a digital brain would need a human narrative of its own fictional story so that it can pretend to be a biological human. It would also have to dumb itself down considerably, for any system that displayed the knowledge of, say, Watson would be quickly unmasked as nonbiological.
More interestingly, we could give our new brain a more ambitious goal, such as contributing to a better world. A goal along these lines, of course, raises a lot of questions: Better for whom? Better in what way? For biological humans? For all conscious beings? If that is the case, who or what is conscious?
As nonbiological brains become as capable as biological ones of effecting changes in the world—indeed, ultimately far more capable than unenhanced biological ones—we will need to consider their moral education. A good place to start would be with one old idea from our religious traditions: the golden rule.
CHAPTER 8
THE MIND AS COMPUTER
Shaped a little like a loaf of French country bread, our brain is a crowded chemistry lab, bustling with nonstop neural conversations. Imagine the brain, that shiny mound of being, that mouse-gray parliament of cells, that dream factory, that petit tyrant inside a ball of bone, that huddle of neurons calling all the plays, that little everywhere, that fickle pleasuredome, that wrinkled wardrobe of selves stuffed into the skull like too many clothes into a gym bag.
Diane Ackerman
Brains exist because the distribution of resources necessary for survival and the hazards that threaten survival vary in space and time.
John M. Allman
The modern geography of the brain has a deliciously antiquated feel to it—rather like a medieval map with the known world encircled by terra incognita where monsters roam.
David Bainbridge
In mathematics you don’t understand things. You just get used to them.
John von Neumann
Ever since the emergence of the computer in the middle of the twentieth century, there has been ongoing debate not only about the ultimate extent of its abilities but also about whether the human brain itself can be considered a form of computer. On the latter question, the consensus has veered from viewing these two kinds of information-processing entities as essentially the same to viewing them as fundamentally different. So is the brain a computer?
When computers first became a popular topic in the 1940s, they were immediately regarded as thinking machines. The ENIAC, which was announced in 1946, was described in the press as a “giant brain.” As computers became commercially available in the following decade, ads routinely referred to them as brains capable of feats that ordinary biological brains could not match.

A 1957 ad showing the popular conception of a computer as a giant brain.
Computer programs quickly enabled the machines to live up to this billing. The “general problem solver,” created in 1959 by Herbert A. Simon, J. C. Shaw, and Allen Newell at Carnegie Mellon University, was able to devise a proof to a theorem that mathematicians Bertrand Russell (1872–1970) and Alfred North Whitehead (1861–1947) had been unable to solve in their famous 1913 work Principia Mathematica. What became apparent in the decades that followed was that computers could significantly exceed unassisted human capability in such intellectual exercises as solving mathematical problems, diagnosing disease, and playing chess, yet had difficulty controlling a robot tying shoelaces or understanding the commonsense language that a five-year-old child could comprehend. Computers are only now starting to master these sorts of skills. Ironically, the evolution of computer intelligence has proceeded in the opposite direction of human maturation.
The issue of whether or not the computer and the human brain are at some level equivalent remains controversial today. In the introduction I mentioned that there were millions of links for quotations on the complexity of the human brain. Similarly, a Google inquiry for “Quotations: the brain is not a computer” also returns millions of links. In my view, statements along these lines are akin to saying, “Applesauce is not an apple.” Technically that statement is true, but you can make applesauce from an apple. Perhaps more to the point, it is like saying, “Computers are not word processors.” It is true that a computer and a word processor exist at different conceptual levels, but a computer can become a word processor if it is running word processing software and not otherwise. Similarly, a computer can become a brain if it is running brain software. That is what researchers including myself are attempting to do.