The scenarios discussed in the last section are sufficiently near term that we need to plan for them and adjust to them. But what about the longer-term prospects? These are murkier, and there is no consensus among experts on the speed of advance in machine intelligence—and indeed on what the limits to AI might be. It seems plausible that an AI linked to the internet could ‘clean up’ on the stock market by analysing far more data far faster than any human. To some extent this is what quantitative hedge funds are already doing. But for interactions with humans, or even with the complex and fast-changing environment encountered by a driverless car on an ordinary road, processing power is not enough; computers would need sensors that enable them to see and hear as well as humans do, and the software to process and interpret what the sensors relay.
But even that would not be sufficient. Computers learn from a ‘training set’ of similar activities, where success is immediately ‘rewarded’ and reinforced. Game-playing computers play millions of games; photo-interpreting computers gain expertise by studying millions of images; for driverless cars to achieve this expertise, they would need to communicate with one another, to share and update their knowledge. But learning about human behaviour involves observing actual people in real homes or workplaces. The machine would feel sensorily deprived by the slowness of real life and would be bewildered. To quote Stuart Russell, a leading AI theorist, ‘it could try all kinds of things: scrambling eggs, stacking wooden blocks, chewing wires, poking its finger into electric outlets. But nothing would produce a strong enough feedback loop to convince the computer it was on the right track and lead it to the next necessary action’. [12]
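The reward-and-reinforcement loop described above can be made concrete with a toy sketch: tabular Q-learning on a tiny corridor, where an agent is rewarded only on reaching the goal and gradually learns which moves lead there. Everything here — the environment, the state count, and the learning parameters — is an illustrative assumption, not a description of any real system discussed in the text.

```python
import random

# Toy corridor: states 0..3; the agent starts at 0 and is rewarded only
# on reaching state 3. This is the 'feedback loop' in miniature: reward
# at the goal propagates backwards until every state prefers stepping right.
N_STATES = 4
ACTIONS = (-1, +1)                  # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2   # illustrative learning parameters

def train(episodes=300, seed=0):
    rng = random.Random(seed)
    # Q-value table: estimated long-run reward for each (state, action) pair
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
            # the reinforcement step: nudge the estimate towards
            # immediate reward plus discounted future value
            q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy: the preferred action in each non-goal state
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

The point Russell makes is visible in the structure of this loop: learning only works because the environment delivers millions of cheap, unambiguous reward signals. In a real home or workplace, the equivalent signal would be slow, sparse, and ambiguous, and the loop would starve.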
Only when this barrier can be surmounted will AIs truly be perceived as intelligent beings, to which (or to whom) we can relate, at least in some respects, as we do to other people. And their far faster ‘thoughts’ and reactions could then give them an advantage over us.
Some scientists fear that computers may develop ‘minds of their own’ and pursue goals hostile to humanity. Would a powerful futuristic AI remain docile, or ‘go rogue’? Would it understand human goals and motives and align with them? Would it learn enough ethics and common sense to ‘know’ when these should override its other motives? If it could infiltrate the internet of things, it could manipulate the rest of the world. Its goals might run contrary to human wishes; it might even treat humans as encumbrances. An AI must have a ‘goal’, but what is genuinely hard to instil is ‘common sense’: the machine should not pursue its goal obsessively, and should be prepared to desist rather than violate ethical norms.
Computers will vastly enhance mathematical skills, and perhaps even creativity. Already our smartphones substitute for routine memory storage and give near-instant access to the world’s information. Soon translation between languages will be routine. The next step could be to ‘plug in’ extra memory or acquire language skills by direct input into the brain—though the feasibility of this isn’t clear. If we can augment our brains with electronic implants, we might be able to download our thoughts and memories into a machine. If present technical trends proceed unimpeded, then some people now living could attain immortality—at least in the limited sense that their downloaded thoughts and memories could have a life span unconstrained by their present bodies. Those who seek this kind of eternal life will, in old-style spiritualist parlance, ‘go over to the other side’.
We then confront the classic philosophical problem of personal identity. If your brain were downloaded into a machine, in what sense would it still be ‘you’? Should you feel relaxed about your body then being destroyed? What would happen if several ‘clones’ were made of ‘you’? And is the input into our sense organs, and physical interactions with the real external world, so essential to our being that this transition would be not only abhorrent but also impossible? These are ancient conundrums for philosophers, but practical ethicists may soon need to address them because they might be relevant to choices that real humans will make within this century.
In regard to all these post-2050 speculations, we don’t know where the boundary lies between what may happen and what will remain science fiction—just as we don’t know whether to take seriously Freeman Dyson’s vision of biohacking by children. There are widely divergent views. Some experts, for instance Stuart Russell at Berkeley, and Demis Hassabis of DeepMind, think that the AI field, like synthetic biotech, already needs guidelines for ‘responsible innovation’. Moreover, the fact that AlphaGo achieved a goal its creators had expected to take several more years has made DeepMind’s staff still more bullish about the pace of advance. But others, like the roboticist Rodney Brooks (creator of the Baxter robot and the Roomba vacuum cleaner), think these concerns are too far from realisation to be worth worrying about—they remain less anxious about artificial intelligence than about real stupidity. Companies like Google, working closely with academia and government, lead the research into AI. These sectors now speak with one voice in highlighting the need to promote ‘robust and beneficial’ AI, but tensions may emerge when AI moves from the research and development phase to being a potentially massive money-spinner for global companies.
But does it matter if AI systems are having conscious thoughts in the sense that humans do? In the view of the computer science pioneer Edsger Dijkstra, it’s a nonquestion: ‘Whether machines can think is about as relevant as the question of whether submarines can swim’. Both a whale and a submarine make forward progress through the water, but they do it in fundamentally different ways. But to many it matters deeply whether intelligent machines are self-aware. In a scenario (see section 3.5) where future evolution is dominated by entities that are electronic, rather than having the ‘wet’ hardware we have in our skulls, it would seem depressing if we’d been surpassed in competence by ‘zombies’ who couldn’t appreciate the wonders of the universe they were in and couldn’t ‘sense’ the outside world as humans can. Be that as it may, society will be transformed by autonomous robots, even though the jury’s out on whether they’ll possess what we’d call real understanding or whether they’ll be ‘idiot savants’—with competence without comprehension.
A sufficiently versatile superintelligent robot could be the last invention that humans need to make. Once machines surpass human intelligence, they could design and assemble a new generation of even more intelligent machines. Some of the ‘staples’ of speculative science that flummox physicists today—time travel, space warps, and the ultracomplex—may be harnessed by the new machines, transforming the world physically. Ray Kurzweil (mentioned in section 2.1 in connection with cryonics) argues that this could lead to a runaway intelligence explosion: the ‘singularity’. [13]
Few people doubt that machines will one day surpass most distinctively human capabilities; the disagreements are about the rate of travel, not the direction. If the AI enthusiasts are vindicated, it may take just decades before flesh-and-blood humans are transcended—or it may take centuries. But, compared to the aeons of evolutionary time that led to humanity’s emergence, even that is a mere blink of the eye. This is not a fatalistic projection. It is cause for optimism. The civilisation that supplants us could accomplish unimaginable advances—feats, perhaps, that we cannot even understand. I’ll scan horizons beyond the Earth in chapter 3.