If you roll your eyes when people talk of gun-toting Terminator-style robots taking over, then you’re spot-on: this is a really unrealistic and silly scenario. These Hollywood robots aren’t that much smarter than us, and they don’t even succeed. In my opinion, the danger with the Terminator story isn’t that it will happen, but that it distracts from the real risks and opportunities presented by AI. To actually get from today to an AGI-powered world takeover requires three logical steps:
• Step 1: Build human-level AGI.
• Step 2: Use this AGI to create superintelligence.
• Step 3: Use or unleash this superintelligence to take over the world.
In the last chapter, we saw that it’s hard to dismiss step 1 as forever impossible. We also saw that if step 1 gets completed, it becomes hard to dismiss step 2 as hopeless, since the resulting AGI would be capable enough to recursively design ever-better AGI that’s ultimately limited only by the laws of physics—which appear to allow intelligence far beyond human levels. Finally, since we humans have managed to dominate Earth’s other life forms by outsmarting them, it’s plausible that we could be similarly outsmarted and dominated by superintelligence.
These plausibility arguments are frustratingly vague and unspecific, however, and the devil is in the details. So can AI actually cause a world takeover? To explore this question, let’s forget about silly Terminators and instead look at some detailed scenarios of what might actually happen. Afterward, we’ll dissect and poke holes in these plotlines, so please take them with a grain of salt—what they mainly show is that we’re pretty clueless about what will and won’t happen, and that the range of possibilities is extreme. Our first scenarios are at the most rapid and dramatic end of the spectrum. These are in my opinion some of the most valuable to explore in detail—not because they’re necessarily the most likely, but because if we can’t convince ourselves that they’re extremely unlikely, then we need to understand them well enough that we can take precautions before it’s too late, to prevent them from leading to bad outcomes.
The prelude of this book is a scenario where humans use superintelligence to take over the world. If you haven’t yet read it, please go back and do so now. Even if you’ve already read it, please consider skimming it again now, to have it fresh in memory before we critique and alter it.
* * *
We’ll soon explore serious vulnerabilities in the Omegas’ plan, but assuming for a moment that it would work, how do you feel about it? Would you like to see or prevent this? It’s an excellent topic for after-dinner conversation! What happens once the Omegas have consolidated their control of the world? That depends on what their goal is, which I honestly don’t know. If you were in charge, what sort of future would you want to create? We’ll explore a range of options in chapter 5.
Totalitarianism
Now suppose that the CEO controlling the Omegas had long-term goals similar to those of Adolf Hitler or Joseph Stalin. For all we know, this might actually have been the case, and he simply kept these goals to himself until he had sufficient power to implement them. Even if the CEO’s original goals were noble, Lord Acton cautioned in 1887 that “power tends to corrupt and absolute power corrupts absolutely.” For example, he could easily use Prometheus to create the perfect surveillance state. Whereas the government snooping revealed by Edward Snowden aspired to what’s known as “full take”—recording all electronic communications for possible later analysis—Prometheus could enhance this to understanding all electronic communications. By reading all emails and texts ever sent, listening to all phone calls, watching all surveillance videos and traffic cameras, analyzing all credit card transactions and studying all online behavior, Prometheus would have remarkable insight into what the people of Earth were thinking and doing. By analyzing cell tower data, it would know where most of them were at all times. All this assumes only today’s data collection technology, but Prometheus could easily invent popular gadgets and wearable tech that would virtually eliminate the privacy of their users, recording and uploading everything they hear and see and their responses to it.
With superhuman technology, the step from the perfect surveillance state to the perfect police state would be minute. For example, with the excuse of fighting crime and terrorism and rescuing people suffering medical emergencies, everybody could be required to wear a “security bracelet” that combined the functionality of an Apple Watch with continuous uploading of position, health status and conversations overheard. Unauthorized attempts to remove or disable it would cause it to inject a lethal toxin into the forearm. Infractions the government deemed less serious would be punished via electric shocks or injection of chemicals causing paralysis or pain, thereby obviating much of the need for a police force. For example, if Prometheus detected that one human was assaulting another (by noting that they were in the same location and one was heard crying for help while their bracelet accelerometers detected the telltale motions of combat), it could promptly disable the attacker with crippling pain, followed by unconsciousness until help arrived.
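To see how simple the assault-detection rule just described really is, here is a toy sketch in Python. Everything in it—the `BraceletReading` record, the field names, the co-location tolerance, the `"combat"` accelerometer label—is invented for illustration; a real system would of course involve far more sophisticated signal processing than this caricature.

```python
from dataclasses import dataclass, field


@dataclass
class BraceletReading:
    """A hypothetical snapshot of one security bracelet's sensors."""
    wearer_id: str
    location: tuple        # (latitude, longitude)
    audio_flags: set = field(default_factory=set)  # e.g. {"cry_for_help"}
    accel_pattern: str = "still"  # e.g. "combat", "walking", "still"


def same_place(a, b, tol=0.0001):
    """Crude co-location check: coordinates within roughly 10 meters."""
    return (abs(a.location[0] - b.location[0]) < tol
            and abs(a.location[1] - b.location[1]) < tol)


def detect_assault(a, b):
    """Apply the rule from the text: two people in the same location,
    one heard crying for help, the other's accelerometer showing the
    telltale motions of combat. Returns the suspected attacker's id,
    or None if the rule doesn't fire."""
    if not same_place(a, b):
        return None
    for victim, attacker in ((a, b), (b, a)):
        if ("cry_for_help" in victim.audio_flags
                and attacker.accel_pattern == "combat"):
            return attacker.wearer_id
    return None
```

The point of the sketch is not the code but the asymmetry it illustrates: the rule itself is trivial, and all the power lies in the total sensor coverage feeding it.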
Whereas a human police force may refuse to carry out certain draconian directives (for example, killing all members of a certain demographic group), such an automated system would have no qualms about implementing the whims of the human(s) in charge. Once such a totalitarian state forms, it would be virtually impossible for people to overthrow it.
These totalitarian scenarios could follow where the Omega scenario left off. However, if the CEO of the Omegas weren’t so fussy about getting other people’s approval and winning elections, he could have taken a faster and more direct route to power: using Prometheus to create unheard-of military technology capable of killing his opponents with weapons that they didn’t even understand. The possibilities are virtually endless. For example, he might release a customized lethal pathogen with an incubation period long enough that most people got infected before they even knew of its existence or could take precautions. He could then inform everybody that the only cure was starting to wear the security bracelet, which would release an antidote transdermally. If he weren’t so risk-averse regarding the breakout possibility, he could also have had Prometheus design robots to keep the world population in check. Mosquito-like microbots could help spread the pathogen. People who avoided infection or had natural immunity could be shot in the eyeballs by swarms of those bumblebee-sized autonomous drones from chapter 3 that attack anyone without a security bracelet. Actual scenarios would probably be more frightening, because Prometheus could invent more effective weapons than we humans can think of.
Another possible twist on the Omega scenario is that, without advance warning, heavily armed federal agents swarm their corporate headquarters, arrest the Omegas for threatening national security, seize their technology and deploy it for government use. Even today, it would be challenging to hide such a large project from state surveillance, and AI progress may well make it even harder to stay under the government’s radar in the future. Moreover, although they claim to be federal agents, the team donning balaclavas and flak jackets may in fact work for a foreign government or a competitor pursuing the technology for its own purposes. So no matter how noble the CEO’s intentions were, the final decision about how Prometheus is used may not be his to make.