As for writing information to the brain, the most common neuroprosthetic device to date is the cochlear implant, which alleviates deafness with an electrode array, implanted in the inner ear, that directly stimulates the auditory nerve: “writing” a signal derived from auditory data into the brain’s hearing pathway. There are also neuroprosthetic devices to restore vision, including retinal implants.
To achieve such feats you have to understand the brain’s coding of the data it uses: how the firing of a particular set of neurons in a particular way relates to a particular movement of the arm, say. But experiments are proceeding worldwide on reading and understanding motor-control signals, and much subtler signals too, involving mental states associated with language, for example. These are still-tentative steps towards something like true mind-reading.
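To make the decoding problem concrete, here is a toy sketch in Python (my own illustration, nothing from the film or any real laboratory) of the simplest kind of motor decoder: fit a linear map from neural firing rates to arm velocity, then use that map to read an intended movement out of fresh activity. Real decoders are far more sophisticated, but the principle, learning the code from paired recordings of neurons and movement, is the same.

```python
import numpy as np

# Toy decoder: recover 2-D arm velocity from the firing rates of 50 neurons.
# Assumption: a linear code, velocity = rates @ W, plus noise.
rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 1000

true_W = rng.normal(size=(n_neurons, 2))           # the brain's (unknown) code
rates = rng.poisson(5.0, (n_samples, n_neurons)).astype(float)
velocity = rates @ true_W + rng.normal(0.0, 0.1, (n_samples, 2))

# "Reading" the brain: estimate the code by least squares from paired data.
W_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a fresh burst of activity into an intended movement.
new_rates = rng.poisson(5.0, (1, n_neurons)).astype(float)
print("decoded velocity:", new_rates @ W_hat)
```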
The U.S. military is interested in this kind of technology; the defence research agency DARPA announced a research programme in March 2010. There are, however, ethical concerns about using such technologies not just to meet clinical needs but to push human abilities past their natural limits.
Most of these experiments involve invasive procedures, in which the patient’s head is literally invaded by bits of wire. Jake’s scanning is non-invasive—no wires. Is this possible? We do have non-invasive neuroimaging technologies. Techniques include electroencephalography (EEG), the reading of brain waves (which dates back to the 1920s), and magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), which can produce three-dimensional maps of the brain’s activity. MEG picks up the tiny magnetic fields generated by the electrical currents of firing neurons, while fMRI uses a strong magnetic field to track the changes in blood oxygenation that accompany neural activity. Resolution is a problem; the skull itself dampens and blurs the neurons’ signals. But progress is being made. A company called G.Tec, based in Austria, already has a non-invasive system that allows users to control avatars in Second Life. Still, non-invasiveness only adds to the technical hurdles involved in hacking into the brain.
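To give a flavour of what “reading brain waves” involves in practice, here is a minimal sketch of the kind of first step a non-invasive system performs: bandpass-filtering a noisy EEG channel to isolate one rhythm. The sampling rate and band edges below are generic assumptions, not details of any system mentioned above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0                       # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)      # four seconds of signal

# Synthetic EEG: a 10 Hz alpha rhythm buried in broadband noise, standing
# in for the skull-dampened, blurred signal a real scalp electrode sees.
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 2.0 * rng.normal(size=t.size)

# Bandpass 8-13 Hz (the classical alpha band) to pull the rhythm back out.
b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="bandpass")
alpha = filtfilt(b, a, eeg)

print(f"raw variance: {np.var(eeg):.2f}, alpha-band variance: {np.var(alpha):.2f}")
```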
But even if Jake’s brain is read and written to non-invasively by scanners in the link unit, how is the avatar’s brain accessed? This is the other end of the link, after all, and data must be uploaded to it and downloaded from it at the same rate as to and from Jake’s brain. In this case the interfacing technology is contained inside the avatar’s brain. As the avatar body is being grown in its tank, the brain is grown with a reception node embedded in its cortex. We haven’t got this far in reality, but there have been experiments with “partially invasive BCIs,” where you lay a thin plastic pad full of sensors within the skull, but outside the brain.
Brain hacking is clearly a tremendous challenge, on which we’ve made barely a start. In the movie, the use of the word “psionic” in the description of the link technology is telling. “Psionics” is generally taken to mean the study of paranormal powers of the mind, such as telepathy, telekinesis, precognition and so forth. It seems to have been coined by science fiction editor John W. Campbell as a fusion of “psi” from psyche, and “onics” from words like electronics, to imply a more scientific framing of the subject. Perhaps we can infer from the use of that word that the science of the twenty-second century has advanced far beyond what is known now; perhaps there are principles at work in the link units of which we have no knowledge.
We can, however, assume that the link process will be mediated by a computer system vastly more powerful than either Jake’s brain or the avatar’s. The enormous artificial intelligences of the future, running on hardware grown to the scale predicted by Moore’s Law, will not be baffled by the computational size of the brain, nor, I would guess, by the challenge of decoding the brain’s many signals. It will be like solving the problem of interfacing an Apple Mac to a Microsoft PC by connecting them both up to that monster Chinese “Milky Way” supercomputer.
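To put rough numbers on that extrapolation (purely back-of-envelope, and assuming that Moore’s Law style doubling every two years somehow survives for another century and a half, which is far from guaranteed):

```python
# Back-of-envelope extrapolation. Assumptions: computing power doubles
# every two years, and the trend holds all the way to Avatar's era.
start_year, target_year = 2010, 2154    # Avatar is set in 2154
doublings = (target_year - start_year) / 2
print(f"{doublings:.0f} doublings, a factor of about {2 ** doublings:.1e}")
# -> 72 doublings, a factor of about 4.7e+21
```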
And if brain hacking does become possible, many remarkable applications open up beyond the driving of avatars. Fully immersive virtual reality, where we started this discussion, would become trivially easy. Roaming around inside the tremendous computer memories of the future, you could have any experiences you wanted, real or fantastic, as richly detailed as the real world, and you could run them at any speed relative to real life that you liked: a twelve-year trip to Pandora and back crammed into a morning coffee-break. If you suffer from “Avatar withdrawal” after watching a mere movie, you might never want to come out of a simulation like that at all.
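The compression factor implied by that twelve-year-trip example is startling. Assuming a half-hour coffee-break, the arithmetic runs like this:

```python
# A simulated 12-year round trip experienced in a 30-minute coffee-break
# (both durations are assumptions taken from the example in the text).
trip_minutes = 12 * 365.25 * 24 * 60    # about 6.3 million minutes
speedup = trip_minutes / 30
print(f"the simulation must run about {speedup:,.0f} times real speed")
# -> about 210,384 times real speed
```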
And VR might become so good that you couldn’t tell what is real and what is virtual, like the characters in the movie The Matrix. I’ve suggested myself that one resolution of the Fermi Paradox (see Chapter 26) is that we’re stuck inside a virtual reality suite run by the aliens, to hide the real universe. Oxford-based philosopher Nick Bostrom says that not only is it possible that we’re living in a virtual reality generated by some advanced culture, it is probable that we are: there will always be more copies than the one original reality, so it’s more likely you’ll find yourself inside a copy than inside the original…
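Bostrom’s counting argument can be put in a single line of arithmetic: if an advanced culture runs N indistinguishable simulated histories alongside the one real history, then, not knowing which you are in, simple counting gives:

```python
# Bostrom-style counting: N indistinguishable simulated histories plus
# one real one. N = 1,000,000 is an arbitrary assumption for illustration.
N = 1_000_000
print(f"chance you are in a simulation: {N / (N + 1):.6f}")  # ~0.999999
```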
We’ve come a long way with this speculation, but we haven’t yet got to the bottom of the mystery of Jake’s mind-linking. For he is interfacing with a body quite unlike his own. And that presents yet more fascinating challenges.
33
WHAT IS IT LIKE TO BE A NA’VI?
There are lots of subtleties in the way Jake Sully’s mind would have to be mapped into the avatar’s brain, beyond the issues of coding, data transfer rates and all the other information-technology stuff we touched on in the last chapter.
An avatar body is more like a Na’vi’s than a human’s. So to run his avatar, Jake, a human being, has to learn how to be a Na’vi.
I find it a lot easier to imagine that I could drive a fully human avatar than that I could drive an avatar of a Na’vi. Or indeed, an avatar of my own little dog.
For one thing, I’m well aware that my dog doesn’t see the world as I do. This is evident when we watch TV, at least on an old analogue set. Such sets present a series of still images quickly enough to fool the human eye into thinking it’s seeing continuous motion. But my dog’s eyes evolved for a subtly different purpose than mine, and their “flicker-fusion rate” is higher. He can see the individual frames, and indeed the blanks between them, and so to him the TV screen is like a dance floor under a strobe light. That’s why an analogue TV never captures his interest (but digital sets remove the flicker-fusion problem, and the dog is fascinated, at least by programmes featuring other dogs).
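The numbers usually quoted make the point, though they are approximate and vary with the individual and the lighting: humans stop perceiving flicker at roughly 50 to 60 flashes per second, dogs at roughly 70 to 80, while an analogue PAL set refreshes at 50 fields per second, above the human threshold but below the dog’s. A toy comparison:

```python
# Rough, commonly quoted flicker-fusion thresholds (Hz); not measurements
# from any particular study, and real values vary with conditions.
thresholds = {"human": 60, "dog": 75}
displays = {"analogue PAL TV": 50, "analogue NTSC TV": 60, "digital set": 100}

for display, refresh in displays.items():
    for viewer, fusion in thresholds.items():
        verdict = "flickers" if refresh < fusion else "looks smooth"
        print(f"{display} at {refresh} Hz {verdict} to a {viewer}")
```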
If this is a challenge for my little dog and me, who as mammals are pretty close relatives in the grander scheme of Earth’s family of life, it’s going to be ten times more difficult for Jake and his avatar. After all, Jake and the Na’vi are from different worlds altogether.
The sensory functions of Jake and his avatar overlap, but not completely. For example, a Na’vi’s sight goes beyond the human range, into the near infrared, allowing night vision. This provides input which has no analogue in the human sensorium. You could imagine transforming the input images somehow so that they map onto the human range; it might be like wearing a soldier’s infrared vision enhancer in a combat zone, and having its images superimposed over the visuals in a heads-up display. But enhancements like that would provide an entirely artificial picture, nothing like what the Na’vi actually sees. Jake has to learn to see like a Na’vi, not like a human with enhancing goggles.
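A crude version of that remapping trick is easy to write down. Here is an illustrative sketch, in which the wavelength ranges are my own assumptions (Na’vi vision reaching to 1,000 nm in the near infrared, human vision spanning roughly 400 to 700 nm). Squeezing the wider band linearly into the narrower one is exactly why the result is an artificial picture: distinctions a Na’vi could see are collapsed together.

```python
def remap_wavelength(nm: float) -> float:
    """Linearly squeeze an assumed Na'vi visible band (400-1000 nm)
    into the human visible band (~400-700 nm). Illustrative only."""
    NAVI_MIN, NAVI_MAX = 400.0, 1000.0    # assumed Na'vi range, incl. near-IR
    HUMAN_MIN, HUMAN_MAX = 400.0, 700.0   # approximate human range
    fraction = (nm - NAVI_MIN) / (NAVI_MAX - NAVI_MIN)
    return HUMAN_MIN + fraction * (HUMAN_MAX - HUMAN_MIN)

# A near-infrared wavelength invisible to humans...
print(remap_wavelength(900.0))  # -> 650.0 nm: shown as red
# ...but two distinct Na'vi colours collapse toward similar human hues.
print(remap_wavelength(850.0))  # -> 625.0 nm
```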