In the same way, I suspect that there are simpler ways to build human-level thinking machines than the solution evolution came up with, and even if we one day manage to replicate or upload brains, we’ll end up discovering one of those simpler solutions first. It will probably draw more than the twelve watts of power that your brain uses, but its engineers won’t be as obsessed about energy efficiency as evolution was—and soon enough, they’ll be able to use their intelligent machines to design more energy-efficient ones.
What Will Actually Happen?
The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI. For this reason, we’ve spent this chapter exploring a broad spectrum of scenarios. I’ve attempted to be quite inclusive, spanning the full range of speculations I’ve seen or heard discussed by AI researchers and technologists: fast takeoff/slow takeoff/no takeoff, humans/machines/cyborgs in control, one/many centers of power, etc. Some people have told me that they’re sure that this or that won’t happen. However, I think it’s wise to be humble at this stage and acknowledge how little we know, because for each scenario discussed above, I know at least one well-respected AI researcher who views it as a real possibility.
As time passes and we reach certain forks in the road, we’ll start to answer key questions and narrow down the options. The first big question is “Will we ever create human-level AGI?” The premise of this chapter is that we will, but there are AI experts who think it will never happen, at least not for hundreds of years. Time will tell! As I mentioned earlier, about half of the AI experts at our Puerto Rico conference guessed that it would happen by 2055. At a follow-up conference we organized two years later, the median guess had dropped to 2047.
Before any human-level AGI is created, we may start getting strong indications about whether this milestone is likely to be first met by computer engineering, mind uploading or some unforeseen novel approach. If the computer engineering approach to AI that currently dominates the field fails to deliver AGI for centuries, this will increase the chance that uploading will get there first, as happened (rather unrealistically) in the movie Transcendence.
As human-level AGI grows more imminent, we’ll be able to make more educated guesses about the answer to the next key question: “Will there be a fast takeoff, a slow takeoff or no takeoff?” As we saw above, a fast takeoff makes world takeover easier, while a slow one makes an outcome with many competing players more likely. Nick Bostrom dissects this question of takeoff speed in an analysis of what he calls optimization power and recalcitrance, which are basically the amount of quality effort applied to making AI smarter and the difficulty of making progress, respectively. The average rate of progress clearly increases if more optimization power is brought to bear on the task and decreases if more recalcitrance is encountered. He makes arguments for why recalcitrance might either increase or decrease as the AGI reaches and transcends human level, so keeping both options on the table is a safe bet. Turning to optimization power, however, it’s overwhelmingly likely to grow rapidly as the AGI transcends human level, for the reasons we saw in the Omega scenario: the main input to further optimization comes not from people but from the machine itself, so the more capable it gets, the faster it improves (as long as recalcitrance stays fairly constant).
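Bostrom’s model can be sketched numerically. The toy simulation below is my own illustration, not from the book: it treats the rate of progress as optimization power divided by recalcitrance, with the AI supplying its own optimization power once it passes human level. The specific numbers (starting capability, human effort level, the “human level = 1.0” scale) are arbitrary assumptions chosen only to contrast the two cases he leaves open, constant versus rising recalcitrance.

```python
# Toy sketch of Bostrom's takeoff model (illustrative, not from the book):
# rate of progress = optimization_power / recalcitrance.
# Capability is on an arbitrary scale where 1.0 means human level.

def simulate(recalcitrance, steps=200, dt=0.1):
    capability = 0.5               # hypothetical starting point, below human level
    trajectory = [capability]
    for _ in range(steps):
        # Past human level, the AI itself supplies the optimization power,
        # so power grows with capability; below it, humans supply a fixed amount.
        power = capability if capability > 1.0 else 0.5
        capability += dt * power / recalcitrance(capability)
        trajectory.append(capability)
    return trajectory

constant = simulate(lambda c: 1.0)             # recalcitrance stays flat
rising = simulate(lambda c: 1.0 + 2.0 * c)     # recalcitrance grows with capability

print(f"constant recalcitrance: final capability {constant[-1]:.1f}")
print(f"rising recalcitrance:   final capability {rising[-1]:.1f}")
```

With flat recalcitrance the feedback loop produces runaway exponential growth; with recalcitrance that rises in step with capability, progress continues but never explodes, which is why Bostrom’s question about recalcitrance matters so much.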
For any process whose power grows at a rate proportional to its current power, the result is that its power keeps doubling at regular intervals. We call such growth exponential, and we call such processes explosions. If baby-making power grows in proportion to the size of the population, we can get a population explosion. If the creation of neutrons capable of fissioning plutonium grows in proportion to the number of such neutrons, we can get a nuclear explosion. If machine intelligence grows at a rate proportional to the current power, we can get an intelligence explosion. All such explosions are characterized by the time they take to double their power. If that time is hours or days for an intelligence explosion, as in the Omega scenario, we have a fast takeoff on our hands.
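The doubling claim above follows directly from the math: if power P grows as dP/dt = kP, the doubling time is ln(2)/k regardless of what P measures. A minimal check, with the growth rate k chosen arbitrarily for illustration:

```python
# Sketch: any quantity whose growth rate is proportional to its current value
# doubles at fixed intervals. With dP/dt = k * P, the doubling time is ln(2)/k,
# whether P counts people, neutrons, or units of machine intelligence.
import math

k = 0.7                          # hypothetical growth rate, per day
doubling_time = math.log(2) / k
print(f"analytic doubling time: {doubling_time:.2f} days")

# Verify by integrating the growth law numerically (Euler steps):
P, t, dt = 1.0, 0.0, 0.001
while P < 2.0:
    P += k * P * dt
    t += dt
print(f"numerical doubling time: {t:.2f} days")   # approximately matches ln(2)/k
```

The same constant interval separates every subsequent doubling, which is what distinguishes an explosion from merely fast growth.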
This explosion timescale depends crucially on whether improving the AI requires merely new software (which can be created in a matter of seconds, minutes or hours) or new hardware (which might require months or years). In the Omega scenario, there was a significant hardware overhang, in Bostrom’s terminology: the Omegas had compensated for the low quality of their original software by vast amounts of hardware, which meant that Prometheus could perform a large number of quality doublings by improving its software alone. There was also a major content overhang in the form of much of the internet’s data; Prometheus 1.0 was still not smart enough to make use of most of it, but once Prometheus’ intelligence grew, the data it needed for further learning was already available without delay.
The hardware and electricity costs of running the AI are crucial as well, since we won’t get an intelligence explosion until the cost of doing human-level work drops below human-level hourly wages. Suppose, for example, that the first human-level AGI can be efficiently run on the Amazon cloud at a cost of $1 million per hour of human-level work produced. This AI would have great novelty value and undoubtedly make headlines, but it wouldn’t undergo recursive self-improvement, because it would be much cheaper to keep using humans to improve it. Suppose that these humans gradually manage to cut the cost to $100,000/hour, $10,000/hour, $1,000/hour, $100/hour, $10/hour and finally $1/hour. By the time the cost of using the computer to reprogram itself finally drops far below the cost of paying human programmers to do the same, the humans can be laid off and the optimization power greatly expanded by buying cloud-computing time. This produces further cost cuts, allowing still more optimization power, and the intelligence explosion has begun.
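The wage-crossover argument can be made concrete with a toy simulation. All numbers below are illustrative assumptions, not figures from the text: humans cut the AI’s cost slowly (here, halving it each iteration), and once that cost drops below the human wage, the AI takes over its own improvement and the cost falls far faster.

```python
# Toy sketch of the wage-crossover argument (illustrative numbers only).
# While AI labor costs more than human labor, humans do the improving, slowly.
# Once AI labor is cheaper, recursive self-improvement kicks in and the
# cost collapses -- the intelligence explosion begins.

HUMAN_WAGE = 100.0        # assumed cost of a human programmer, $/hour

cost = 1_000_000.0        # assumed $/hour of human-level work from the first AGI
year = 0
while cost > 1.0:
    year += 1
    if cost > HUMAN_WAGE:
        cost /= 2         # human engineers: halve the cost each iteration
    else:
        cost /= 100       # recursive self-improvement: far faster progress
print(f"cost falls below $1/hour after {year} iterations")
```

The exact rates are made up, but the shape of the trajectory is the point: progress looks gradual for a long stretch, then changes regime abruptly at the crossover.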
This leaves us with our final key question: “Who or what will control the intelligence explosion and its aftermath, and what are their/its goals?” We’ll explore possible goals and outcomes in the next chapter and more deeply in chapter 7. To sort out the control issue, we need to know both how well an AI can be controlled, and how much an AI can control.
In terms of what will ultimately happen, you’ll currently find serious thinkers all over the map: some contend that the default outcome is doom, while others insist that an awesome outcome is virtually guaranteed. To me, however, this is a trick question: it’s a mistake to passively ask “what will happen,” as if it were somehow predestined! If a technologically superior alien civilization arrived tomorrow, it would indeed be appropriate to wonder “what will happen” as their spaceships approached, because their power would probably be so far beyond ours that we’d have no influence over the outcome. If a technologically superior AI-fueled civilization arrives because we built it, on the other hand, we humans have great influence over the outcome—influence that we exert when we create the AI. So we should instead ask: “What should happen? What future do we want?” In the next chapter, we’ll explore a wide spectrum of possible aftermaths of the current race toward AGI, and I’m quite curious how you’d rank them from best to worst. Only once we’ve thought hard about what sort of future we want will we be able to begin steering a course toward it. If we don’t know what we want, we’re unlikely to get it.