The problem for hackers in an age of a changing marketplace and maturing technology is the demand for programs far more complex than those the software-buying public happily accepted only a few years earlier. Hackers tend to be short-haul drivers—they aren’t usually organized enough and cooperative enough to participate in the kind of structured group effort needed to build an efficient large program.
Furthermore, the evolution of more powerful machines has made the hackers’ favorite corner-cutting techniques obsolete. A talent for cramming code into a small memory space is no longer valuable now that huge memory spaces have become economically feasible. It used to take a hacker to make a microcomputer dance. That situation will change as the hardware evolves. Hackers will always be in style as long as computers based on 8-bit processors continue to be built. But there is so much power under the hood of a 16-bit Macintosh that a mere BASIC programmer can now put on a stunning display of visual pyrotechnics.
The new corporate culture that has taken over most of the software world has brought different values that are often incompatible with the ones hackers remember from the early days. A software publisher can hope to survive and thrive only by making each program compatible across a spectrum of different machines and peripheral devices. This kind of time-dependent compatibility (you have to rush out the different versions of your latest hit program before your imitators can beat you to it) means that programming teams have to adhere to certain programming standards. And such standards are anathema to most dyed-in-the-wool hackers.
Support and revision of programs by people other than the original programmer are further requirements of today’s highly competitive market, which is dictated by customers who will neither tolerate bugs nor attempt to fix programs themselves. These requirements can be met only if the original programmer observes certain protocols of programming style. In fact, projects get so big that they are designed and implemented by teams using sophisticated software tools designed by other teams. Programming protocols are therefore crucial to sustaining this team interaction.
One day soon, independent hackers will have to button down to stay in the ball game, or they will be forced to retreat from the mainstream of microcomputer software publishing and might end up without any niche—unless new and unexplored software territory opens up. If that happens, pioneers will be needed to map the terrain and build the first roads into new computer technologies. As long as the capabilities of computer technology continue to expand beyond our present uses, we will need adventurous, perhaps undisciplined, trailblazers to chart the new capabilities for the rest of us.
What are the most likely new developments to come? Some of the hardware innovations of the next few years amount to a change in the quantity of something—the amount of memory, the degree of resolution of the display, the speed of a microprocessor. But a dramatic change in quantity can often make for radical changes in quality. (Microcomputer guru Alan Kay notes that when you put still photographs on a screen at a rate of twenty-four frames per second, you not only get more images, you get moving images—a quantitative change that creates a significant qualitative change.) Larger, faster memory technologies are the near-future developments most likely to make practical certain kinds of programs that were not practical before.
The microcomputer industry expanded to a whole new market when the older, slower cassette-storage technology gave way to faster, higher-capacity disk-storage technology in the early 1980s. Similarly, the advent of optically based storage peripherals, vertical magnetic storage, networking, and high-capacity memory chips with far greater speed and storage capacity than today’s most advanced disk drives and RAM will permit practical mass-market use of computer applications that are now in the experimental (and expensive) stage. Among the most feasible applications will be even better high-resolution graphics, as well as workable technologies such as speech recognition and synthesis by computers. Together with new kinds of human-computer interfaces—the means by which people command computers to carry out tasks—new voice and visual techniques are bound to revolutionize software.
Perhaps we are nearing the day when we may actually share in another person’s adventurous experiences in the way futurists such as Aldous Huxley described in books like Brave New World. In other words, perhaps instead of movies, there will be such phenomena as “feelies,” theater-like sensoria where one not only sees and hears what the characters in the films might have seen and heard, but also feels what they touch and smells what they smell. No such man/machine interface is currently under commercial development as far as I know (although top-secret defense technology is generally years ahead of commercial developments), but major increases in the quantity and quality of information that can be shared between humans and machines can be expected in the near future.
In the realm of machines like computers, the key to human-computer communication is the measure known as bandwidth—the capacity to send and receive large amounts of information back and forth very quickly. Humans are visually oriented creatures, and we have much larger capacities for information input than are being used by current computer displays. Simply look at the difference between a color television program and even the best microcomputer graphics, and you will see the kinds of changes in visual displays that an increase in processing speed and storage capacity (i.e., bandwidth) will bring in the near future.
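A rough back-of-envelope calculation makes the scale of that gap concrete. The sketch below is illustrative only; every figure in it (scan lines, bits per sample, refresh rates) is an assumed ballpark number, not a measurement from the text:

```python
# Back-of-envelope comparison of the visual bandwidth of a broadcast
# television picture versus a character-based microcomputer display.
# All figures are rough assumptions chosen for illustration.

# Assumed TV picture: ~480 visible lines x ~640 samples per line,
# ~16 bits of color per sample, ~30 frames per second.
tv_bits_per_second = 480 * 640 * 16 * 30

# Assumed text display: 80 x 25 characters, 8 bits per character,
# redrawn (generously) 10 times per second.
text_bits_per_second = 80 * 25 * 8 * 10

print(f"TV picture:   ~{tv_bits_per_second / 1e6:.0f} Mbit/s")
print(f"Text display: ~{text_bits_per_second / 1e6:.2f} Mbit/s")
print(f"Ratio:        ~{tv_bits_per_second / text_bits_per_second:,.0f} to 1")
```

Even with these crude numbers, the television picture carries on the order of a thousand times more visual information per second, which suggests the size of the gap that faster processing and storage would have to close.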
And as our capacity to take in information from the computer expands because of advances in visual displays, our ability to put information into the computer will be hugely augmented by advances in voice recognition technology. But before voice recognition input technology—the capability of a computer to recognize and act upon verbal commands—will come a somewhat less difficult output technology, voice synthesis—the capability of a computer to generate an accurate facsimile of a human voice. This is truly one of the next areas for commercial breakthrough.
Voice synthesis, once deemed a very difficult problem, is virtually solved on the technical level; it needs only to become economically feasible for the mass market. Special sound-generation chips and speech-synthesis software have already brought speech generation to the edge of commercial feasibility, and it is already being used, for example, to automate many of the telephone company’s directory assistance services. The next time you ask for a listing, you will probably hear a computer-generated voice. Microcomputer versions of speech generation systems are already used by visually handicapped computerists, and in a few years “talking computers” will become more widely used.
Speech recognition—a means by which computers can understand a large number of spoken commands—is a more difficult scientific and software problem than voice synthesis. The problem is that the spoken word is terribly hard to decipher if you don’t know what is being said. Speech is highly ambiguous because many words sound alike, so humans use a lot of contextual information to translate speech—a feat our brains do without our conscious awareness. For computers, however, the ability to know what is being said in every possible circumstance requires artificial intelligence techniques that are far from being perfected. In order to get a computer to recognize a human-sized vocabulary, very sophisticated, very complicated software must be created. And that requires large memories and fast processors. However, smaller vocabularies are an easier problem, and voice-commanded microcomputers will become commercially feasible very soon.
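To make the role of context concrete, here is a minimal sketch of one simple disambiguation technique: scoring sound-alike candidates by how often each follows the previous word. The words, candidate sets, and counts below are all invented for illustration; this is a sketch of the general idea, not the method described in the text:

```python
# Minimal sketch of contextual disambiguation among homophones.
# All words and counts are invented for illustration; a real speech
# recognizer would use far richer acoustic and language models.

# How often each (previous_word, word) pair appeared in some training text.
bigram_counts = {
    ("i", "ate"): 50, ("i", "eight"): 1,
    ("chapter", "eight"): 40, ("chapter", "ate"): 0,
}

def disambiguate(previous_word, candidates):
    """Pick the sound-alike candidate most plausible after previous_word."""
    return max(candidates, key=lambda w: bigram_counts.get((previous_word, w), 0))

# The sound alone cannot tell "ate" from "eight"; one word of context can.
print(disambiguate("i", ["ate", "eight"]))        # -> ate
print(disambiguate("chapter", ["ate", "eight"]))  # -> eight
```

Shrinking the vocabulary shrinks both the table and the ambiguity, which is why small-vocabulary voice command systems are within reach long before open-ended dictation.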