Three years into my discovery of history, it was announced that Soviet ballistic missiles had been deployed in Cuba. My encounter with history, I knew absolutely, was about to end, and perhaps my species with it.
In his preface to the 1921 edition of The War in the Air, Wells wrote of World War I (still able to call it, then, the Great War): “The great catastrophe marched upon us in daylight. But everybody thought that somebody else would stop it before it really arrived. Behind that great catastrophe march others today.” In his preface to the 1941 edition, he could only add: “Again I ask the reader to note the warnings I gave in that year, twenty years ago. Is there anything to add to that preface now? Nothing except my epitaph. That, when the time comes, will manifestly have to be: ‘I told you so. You damned fools.’ (The italics are mine.)”
The italics are indeed his: the terminally exasperated visionary, the technologically fluent Victorian who has watched the twentieth century arrive, with all of its astonishing baggage of change, and who has come to trust in the minds of the sort of men who ran British Rail. They are the italics of the perpetually impatient and somehow perpetually unworldly futurist, seeing his model going terminally wrong in the hands of the less clever, the less evolved. And they are with us today, those italics, though I’ve long since learned to run shy of science fiction that employs them.
I suspect that I began to distrust that particular flavor of italics when the world didn’t end in October of 1962. I can’t recall the resolution of the Cuban Missile Crisis at all. My anxiety, and the world’s, reached some absolute peak. And then declined, history moving on, so much of it, and sometimes today the world of my own childhood strikes me as scarcely less remote than the world of Wells’s childhood, so much has changed in the meantime.
I may actually have begun to distrust science fiction then, or rather to trust it differently, as my initial passion for it began to decline. I found Henry Miller, then, and William Burroughs, Jack Kerouac, and others, voices of another kind, and the science fiction I continued to read was that which somehow was resonant with those other voices, and where those voices seemed to be leading me.
And it may also have begun to dawn on me, around that same time, that history, though initially discovered in whatever soggy trunk or in whatever caliber, is a species of speculative fiction itself, prone to changing interpretation and further discoveries.
This is a much more directly autobiographical piece than I’m ordinarily prone to, and the result of a failed project. I had been commissioned to write an introduction for a new edition of H. G. Wells’s The Time Machine, and found myself unable to complete it to what I imagined would be the publisher’s expectations. It was supposed to be about Wells, not about me, yet this personal narrative kept shouldering aside my not very effective attempts to sound like an academic historian of science fiction (probably because I am so thoroughly not that).
Will We Have Computer Chips in Our Heads?
MAYBE.
But only once or twice, and probably not for very long.
The cyberpunk hard guys of science fiction, with their sharp black suits and their surgically implanted silicon chips, already have a certain nostalgic romance about them. Information highwaymen, cousins of the “steam bandits” of Victorian techno-fiction: so heroically attuned to the new technology that they have laid themselves open to its very cutting edge. They have become it; they have taken it within themselves.
Meanwhile, in case you somehow haven’t noticed, we are all of us becoming it; we seem to have no choice but to take it within ourselves.
In hindsight, the most memorable images of science fiction often have more to do with our anxieties in the past (the writer’s present) than with those singular and ongoing scenarios that make up our life as a species: our real futures, our ongoing present.
Many of us, even today, or most particularly today, must feel as though we have silicon chips embedded in our brains. Some of us, certainly, are not entirely happy with that feeling. Some of us must wish that ubiquitous computing would simply go away and leave us alone, a prospect that seems increasingly unlikely.
But that does not, I think, mean that we will one day, as a species, submit to the indignity of the chip. If only because the chip will almost certainly be as quaint an object as the vacuum tube or the slide rule.
From the viewpoint of bioengineering, a silicon chip is a large and rather complex shard of glass. Inserting a silicon chip into the human brain involves a certain irreducible inelegance of scale. It’s scarcely more elegant, relatively, than inserting a steam engine into the same tissue. It may be technically possible, but why should we even want to attempt such a thing?
I suspect that medicine and the military will both find reasons for attempting such a thing, at least in the short run, and that medicine’s reasons may at least serve to counter someone’s acquired or inherited disability. If I were to lose my eyes, I would quite eagerly submit to some sort of surgery promising a video link to the optic nerves (and once there, why not insist on full-channel cable and a Web browser?). The military’s reasons for insertion would likely have something to do with what I suspect is the increasingly archaic job description of “fighter pilot,” or with some other aspect of telepresent combat, in which weapons in the field are remotely controlled by distant operators. At least there’s still a certain macho frisson to be had in the idea of deliberately embedding a tactical shard of glass in one’s head, and surely crazier things have been done in the name of king and country.
But if we do do it, I doubt we’ll be doing it for very long, as various models of biological and nanomolecular computing are looming rapidly into view. Rather than plug a piece of hardware into our gray matter, how much more elegant to extract some brain cells, plop them into a Petri dish, and graft on various sorts of gelatinous computing goo. Slug it all back into the skull and watch it run on blood sugar, the way a human brain’s supposed to. Get all the functions and features you want, without that clunky-junky twentieth-century hardware thing. You really don’t need complicated glass to crunch numbers, and computing goo probably won’t be all that difficult to build. (The trickier aspect here may be turning data into something that brain cells understand. If you knew how to make brain cells understand pull-down menus, you’d probably know everything you needed to know about brain cells, period. But we are coming to know, relatively, an awful lot about brain cells.)
Our hardware is likely to turn into something like us a lot faster than we are likely to turn into something like our hardware. Our hardware is evolving at the speed of light, while we are still the product, for the most part, of unskilled labor.
But there is another argument against the need to implant computing devices, be they glass or goo. It’s a very simple one, so simple that some have difficulty grasping it. It has to do with a certain archaic distinction we still tend to make, a distinction between computing and “the world.” Between, if you like, the virtual and the real.