But the speed and number of switches are not the only aspects of computers that are affected by the size of the basic components. The heat generated by all that switching is another, and it is a major problem in computer design. Big components tend to get hot, especially when packed together in large numbers in an enclosed space, and that means that electronic computers are limited in their size and power. The first computers were built out of vacuum tubes, which were very hot and very big. Beyond a certain threshold of computing power, no computer could be built from vacuum tubes at all: such a machine would melt before it could do anything substantially useful.
Not long after the first tube computers reached their heat limit, however, a discovery in the field of solid-state physics made considerably more efficient switching elements possible. In the late 1940s the transistor was invented, and that meant that computers could be constructed from elements that were much smaller and much cooler than the old tube technology. As a consequence, “smarter” computers could be built—computers that could follow more complex strings of instructions.
Besides their cool and rapid manner of operating, transistors had another advantage over the old computer elements—they were cheaper than tubes. Indeed, a paradoxical phenomenon has governed the evolution of computer technology: As computer components became smaller, cooler, and more powerful, they also became cheaper. They also happened to come along at the perfect time. The pressure to develop electronics and computer technology to their furthest limits, and substantial financial resources to support large-scale research and development efforts, were provided by two of the most powerful institutions in history—the Defense Department of the United States government and IBM. In the years that followed World War II, this fortuitous combination of fundamental scientific breakthroughs, breakneck engineering, and unprecedented economic benefits eventually made computer technology an integral part of every level of society.
Computers are valuable because they multiply the power of the human brain, the way levers multiply the power of the human arm. A lever, however, can only move a large object; a computer can do much more than that. It can command machines to move large objects, or it can perform mathematical calculations, or it can put words on a screen. It is an all-purpose tool that empowers anyone who can afford to use it. In fact, the sudden empowerment made possible by widely available computers has happened so fast that our society has barely begun to feel its impact. As computers have become cheaper, in terms of computations per dollar, the computer-using population has expanded dramatically.
In the 1950s, computers more powerful than those used previously by the Defense Department were being installed by large institutions like banks and corporations. By the 1970s, computers of even greater power were being used by small businesses. Now, in the 1980s, middle-class households can afford computers that only the national government could afford to build thirty years ago, and it is reasonable to expect that people thirty years from now will be able to afford computers that are as powerful as the mightiest supercomputers being used today.
While computers kept getting smaller, more powerful, and cheaper throughout the 1950s and 1960s, they were still too expensive to be dedicated to the exclusive use of one person. By 1970, a computer still couldn’t be called an affordable device for individuals, but its cost had fallen from millions of dollars to thousands. By the early and mid-1970s, another series of breakthroughs in miniaturization was underway. Researchers at electronics companies had already discovered ways to put thousands of components into ultraminiature circuits known as “integrated circuits”—or, as they came to be known, “chips.” These chips made all kinds of electronic devices possible—satellite communications, cheap color televisions, stereos, and radios, as well as personal computers.
In 1969, engineers at Intel Corporation designed a chip that had all the switching elements needed for a computer’s central processing unit—the historic 4004 chip. In 1972, a somewhat more powerful version—the 8008—was developed by the same engineers. While the 4004 could handle information only in 4-bit chunks, the 8008 was a true 8-bit processor, and this boosted the device’s potential applications from the realm of calculators to the world of true computers.
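To make the 4-bit versus 8-bit distinction concrete, here is a small illustration in Python (purely expository; it has nothing to do with how either chip actually worked) of how many distinct values each word size can hold. Sixteen values are barely enough for the decimal digits of a calculator; 256 values can encode a full character set, which is part of what made 8-bit processors usable as general-purpose computers.

    # How many distinct values fit in a 4-bit word versus an 8-bit word?
    for bits in (4, 8):
        values = 2 ** bits
        print(f"{bits}-bit word: {values} values (0 to {values - 1})")

    # Output:
    #   4-bit word: 16 values (0 to 15)
    #   8-bit word: 256 values (0 to 255)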
The 4004 and the 8008 were the first microprocessors—electronic devices capable of processing information—but they were not quite computers; a computer also needs memory to hold programs and data, along with some way of getting information in and out. The 8008 had the basic information-processing capability and the built-in “language” of instructions that could enable it to become a computer, but other devices had to be connected to the chip before people could actually create and use programs. This wasn’t a simple matter; you needed a fairly advanced knowledge of electronics to assemble the different parts of the computer.
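As a rough sketch of why a bare processor is not yet a computer, consider the toy fetch-and-execute loop below, written in Python with a made-up four-instruction set (it is not the 8008’s actual instruction set). The loop is the processor; without attached memory to hold a program, and some output device, it has nothing to run and no way to show results.

    # A toy processor: a loop that fetches and executes instructions.
    # The instruction set is invented for illustration; it is not the 8008's.

    def run(memory):
        acc = 0   # accumulator register
        pc = 0    # program counter
        while True:
            op, arg = memory[pc]   # fetch the next instruction from memory
            pc += 1
            if op == "LOAD":       # put a constant in the accumulator
                acc = arg
            elif op == "ADD":      # add a constant to the accumulator
                acc += arg
            elif op == "PRINT":    # the "output device" in this sketch
                print(acc)
            elif op == "HALT":
                break

    # The program lives in memory, outside the processor itself.
    program = [("LOAD", 2), ("ADD", 3), ("PRINT", None), ("HALT", None)]
    run(program)   # prints 5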
Still, a subtle but crucial shift in the course of events was triggered by these devices, although only a few people recognized their significance when they were created. In fact, neither the world at large nor the electronics world in particular heralded the arrival of the Intel 4004. Intel was just looking for a new kind of chip that the company could sell to all the other companies that make consumer devices out of microelectronic chips.
In any case, at this point in the story we are still talking about hardware expertise, but now we are beginning to talk about computer designers, not just electronic component manufacturers. The microprocessor was the first electronic computer technology cheap enough to make it possible for ordinary people to afford relatively powerful computers (although, as we shall soon see, the first people to use these homebrew computers were far from ordinary).
The microprocessor has often been called “a computer on a chip,” which is slightly misleading, since it isn’t possible to use one of these chips as a real working computer without connecting it to additional electronic equipment. That is where MITS and the homebrewers came in. Ed Roberts, the owner of MITS, entered the annals of computer legend when he decided to build a kit for putting a microprocessor together with all the other necessary components. Little did he know that there was a vast, previously unknown market for these devices. Hundreds of young computer enthusiasts across the country were fiercely determined to get their hands on real working personal computers. The year was 1974.
Roberts hadn’t started out to be a computer entrepreneur. He had originally wanted to be a doctor, and last I heard, about a year ago, he actually was in medical school in Florida. But he received electronics training in the Air Force, and in the late 1960s he started his own company and sold radio equipment to model airplane hobbyists—hence the name “Micro Instrumentation and Telemetry Systems.”
Before the Altair kit came along, MITS faced some rocky times, especially when Roberts decided to get into the calculator business at precisely the wrong time to compete with the Texas Instruments juggernaut. But he moved on to microprocessors and shopped around for a better chip than the 8008. The problem with the 8008 was the way its instruction set hampered the efforts of programmers. He finally purchased a quantity of Intel’s successor to that chip, the 8080, for $75 apiece. The price was right, and the 8080 instruction set was far more amenable to computer software design.