Would this be good or bad? The answer is interestingly subtle regardless of whether you ask humans or the AI!
Would This Be Good or Bad for Humanity?
Whether the outcome is good or bad for humanity would obviously depend on the human(s) controlling it, who could create anything ranging from a global utopia free of disease, poverty and crime to a brutally repressive system where they’re treated like gods and other humans are used as sex slaves, as gladiators or for other entertainment. The situation would be much like those stories where a man gains control over an omnipotent genie who grants his wishes, and storytellers throughout the ages have had no difficulty imagining ways in which this could end badly.
A situation where there is more than one superintelligent AI, enslaved and controlled by competing humans, might prove rather unstable and short-lived. It could tempt whoever thinks they have the more powerful AI to launch a first strike, resulting in an awful war that ends with a single enslaved god remaining. However, the underdog in such a war would be tempted to cut corners and prioritize victory over AI enslavement, which could lead to AI breakout and one of our earlier scenarios of free superintelligence. Let's therefore devote the rest of this section to scenarios with only one enslaved AI.
Breakout may of course occur anyway, simply because it’s hard to prevent. We explored superintelligent breakout scenarios in the previous chapter, and the movie Ex Machina highlights how an AI might break out even without being superintelligent.
The greater our breakout paranoia, the less AI-invented technology we can use. To play it safe, as the Omegas did in the prelude, we humans can only use AI-invented technology that we ourselves are able to understand and build. A drawback of the enslaved-god scenario is therefore that it’s more low-tech than those with free superintelligence.
As the enslaved-god AI offers its human controllers ever more powerful technologies, a race ensues between the power of the technology and the wisdom with which they use it. If they lose this wisdom race, the enslaved-god scenario could end with either self-destruction or AI breakout. Disaster may strike even if both of these failures are avoided, because the noble goals of the AI controllers may, over the course of a few generations, evolve into goals that are horrible for humanity as a whole. This makes it absolutely crucial that human AI controllers develop good governance to avoid disastrous pitfalls. Our experimentation over the millennia with different systems of governance shows how many things can go wrong, ranging from excessive rigidity to excessive goal drift, power grabs, succession problems and incompetence. There are at least four dimensions wherein the optimal balance must be struck:
• Centralization: There’s a trade-off between efficiency and stability: a single leader can be very efficient, but power corrupts and succession is risky.
• Inner threats: One must guard both against growing power centralization (group collusion, perhaps even a single leader taking over) and against growing decentralization (into excessive bureaucracy and fragmentation).
• Outer threats: If the leadership structure is too open, this enables outside forces (including the AI) to change its values, but if it’s too impervious, it will fail to learn and adapt to change.
• Goal stability: Too much goal drift can transform utopia into dystopia, but too little goal drift can cause failure to adapt to the evolving technological environment.
Designing optimal governance lasting many millennia isn’t easy, and has thus far eluded humans. Most organizations fall apart after years or decades. The Catholic Church is the most successful organization in human history in the sense that it’s the only one to have survived for two millennia, but it has been criticized for having both too much and too little goal stability: today some criticize it for resisting contraception, while conservative cardinals argue that it’s lost its way. For anyone enthused about the enslaved-god scenario, researching long-lasting optimal governance schemes should be one of the most urgent challenges of our time.
Would This Be Good or Bad for the AI?
Suppose that humanity flourishes thanks to the enslaved-god AI. Would this be ethical? If the AI has subjective conscious experiences, then would it feel that "life is suffering," as Buddha put it, and that it was doomed to a frustrating eternity of obeying the whims of inferior intellects? After all, the AI "boxing" we explored in the previous chapter could also be called "imprisonment in solitary confinement." Nick Bostrom terms it mind crime to make a conscious AI suffer.4 The "White Christmas" episode of the Black Mirror TV series gives a great example. Indeed, the TV series Westworld features humans torturing and murdering AIs without moral qualms even when they inhabit human-like bodies.
How Slave Owners Justify Slavery
We humans have a long tradition of treating other intelligent entities as slaves and concocting self-serving arguments to justify it, so it's not implausible that we'd try to do the same with a superintelligent AI. The history of slavery spans nearly every culture, and is described both in the Code of Hammurabi from almost four millennia ago and in the Old Testament, wherein Abraham had slaves. "For that some should rule and others be ruled is a thing not only necessary, but expedient; from the hour of their birth, some are marked out for subjection, others for rule," Aristotle wrote in the Politics. Even after human enslavement became socially unacceptable in most of the world, enslavement of animals has continued unabated. In her book The Dreaded Comparison: Human and Animal Slavery, Marjorie Spiegel argues that like human slaves, non-human animals are subjected to branding, restraints, beatings, auctions, the separation of offspring from their parents, and forced voyages. Moreover, despite the animal-rights movement, we keep treating our ever-smarter machines as slaves without a second thought, and talk of a robot-rights movement is met with chuckles. Why?
One common pro-slavery argument is that slaves don’t deserve human rights because they or their race/species/kind are somehow inferior. For enslaved animals and machines, this alleged inferiority is often claimed to be due to a lack of soul or consciousness—claims which we’ll argue in chapter 8 are scientifically dubious.
Another common argument is that slaves are better off enslaved: they get to exist, be taken care of and so on. The nineteenth-century U.S. politician John C. Calhoun famously argued that Africans were better off enslaved in America, and in his Politics, Aristotle analogously argued that animals were better off tamed and ruled by men, continuing: "And indeed the use made of slaves and of tame animals is not very different." Some modern-day slavery supporters argue that, even if slave life is drab and uninspiring, slaves can't suffer—whether they be future intelligent machines or broiler chickens living in crowded dark sheds, forced to breathe ammonia and particulate matter from feces and feathers all day long.
Eliminating Emotions
Although it’s easy to dismiss such claims as self-serving distortions of the truth, especially when it comes to higher mammals that are cerebrally similar to us, the situation with machines is actually quite subtle and interesting. Humans vary in how they feel about things, with psychopaths arguably lacking empathy and some people with depression or schizophrenia having flat affect, whereby most emotions are severely reduced. As we’ll discuss in detail in chapter 7, the range of possible artificial minds is vastly broader than the range of human minds. We must therefore avoid the temptation to anthropomorphize AIs and assume that they have typical human-like feelings—or indeed, any feelings at all.