Although Larry seemed outnumbered that warm summer night by the pool, the digital utopianism that he so eloquently championed has many prominent supporters. Roboticist and futurist Hans Moravec inspired a whole generation of digital utopians with his classic 1988 book Mind Children, a tradition continued and refined by inventor Ray Kurzweil. Richard Sutton, one of the pioneers of the AI subfield known as reinforcement learning, gave a passionate defense of digital utopianism at our Puerto Rico conference that I’ll tell you about shortly.
Techno-skeptics
Another prominent group of thinkers aren’t worried about AI either, but for a completely different reason: they think that building superhuman AGI is so hard that it won’t happen for hundreds of years, and therefore view it as silly to worry about it now. I think of this as the techno-skeptic position, eloquently articulated by Andrew Ng: “Fearing a rise of killer robots is like worrying about overpopulation on Mars.” Andrew was the chief scientist at Baidu, China’s Google, and he recently repeated this argument when I spoke with him at a conference in Boston. He also told me that he felt that worrying about AI risk was a potentially harmful distraction that could slow the progress of AI. Similar sentiments have been articulated by other techno-skeptics such as Rodney Brooks, the former MIT professor behind the Roomba robotic vacuum cleaner and the Baxter industrial robot. I find it interesting that although the digital utopians and the techno-skeptics agree that we shouldn’t worry about AI, they agree on little else. Most of the utopians think human-level AGI might happen within the next twenty to a hundred years, which the techno-skeptics dismiss as uninformed pie-in-the-sky dreaming, often deriding the prophesied singularity as “the rapture of the geeks.” When I met Rodney Brooks at a birthday party in December 2014, he told me that he was 100% sure it wouldn’t happen in my lifetime. “Are you sure you don’t mean 99%?” I asked in a follow-up email, to which he replied, “No wimpy 99%. 100%. Just isn’t going to happen.”
The Beneficial-AI Movement
When I first met Stuart Russell in a Paris café in June 2014, he struck me as the quintessential British gentleman. Eloquent, thoughtful and soft-spoken, but with an adventurous glint in his eyes, he seemed to me a modern incarnation of Phileas Fogg, my childhood hero from Jules Verne’s classic 1873 novel, Around the World in 80 Days. Although he was one of the most famous AI researchers alive, having co-authored the standard textbook on the subject, his modesty and warmth soon put me at ease. He explained to me how progress in AI had persuaded him that human-level AGI this century was a real possibility and, although he was hopeful, a good outcome wasn’t guaranteed. There were crucial questions that we needed to answer first, and they were so hard that we should start researching them now, so that we’d have the answers ready by the time we needed them.
Today, Stuart’s views are rather mainstream, and many groups around the world are pursuing the sort of AI-safety research that he advocates. But this wasn’t always the case. An article in The Washington Post referred to 2015 as the year that AI-safety research went mainstream. Before that, talk of AI risks was often misunderstood by mainstream AI researchers and dismissed as Luddite scaremongering aimed at impeding AI progress. As we’ll explore in chapter 5, concerns similar to Stuart’s were first articulated over half a century ago by computer pioneer Alan Turing and mathematician Irving J. Good, who worked with Turing to crack German codes during World War II. In the past decade, research on such topics was mainly carried out by a handful of independent thinkers who weren’t professional AI researchers, for example Eliezer Yudkowsky, Michael Vassar and Nick Bostrom. Their work had little effect on most mainstream AI researchers, who tended to focus on their day-to-day tasks of making AI systems more intelligent rather than on contemplating the long-term consequences of success. Of the AI researchers I knew who did harbor some concern, many hesitated to voice it out of fear of being perceived as alarmist technophobes.
I felt that this polarized situation needed to change, so that the full AI community could join and influence the conversation about how to build beneficial AI. Fortunately, I wasn’t alone. In the spring of 2014, I’d founded a nonprofit organization called the Future of Life Institute (FLI; http://futureoflife.org) together with my wife, Meia, my physicist friend Anthony Aguirre, Harvard grad student Viktoriya Krakovna and Skype founder Jaan Tallinn. Our goal was simple: to help ensure that the future of life existed and would be as awesome as possible. Specifically, we felt that technology was giving life the power either to flourish like never before or to self-destruct, and we preferred the former.
Our first meeting was a brainstorming session at our house on March 15, 2014, with about thirty students, professors and other thinkers from the Boston area. There was broad consensus that although we should pay attention to biotech, nuclear weapons and climate change, our first major goal should be to help make AI-safety research mainstream. My MIT physics colleague Frank Wilczek, who won a Nobel Prize for helping figure out how quarks work, suggested that we start by writing an op-ed to draw attention to the issue and make it harder to ignore. I reached out to Stuart Russell (whom I hadn’t yet met) and to my physics colleague Stephen Hawking, both of whom agreed to join me and Frank as co-authors. Many edits later, our op-ed was rejected by The New York Times and many other U.S. newspapers, so we posted it on my Huffington Post blog account. To my delight, Arianna Huffington herself emailed and said, “thrilled to have it! We’ll post at #1!,” and this placement at the top of the front page triggered a wave of media coverage of AI safety that lasted for the rest of the year, with Elon Musk, Bill Gates and other tech leaders chiming in. Nick Bostrom’s book Superintelligence came out that fall and further fueled the growing public debate.
The next goal of our FLI beneficial-AI campaign was to bring the world’s leading AI researchers to a conference where misunderstandings could be cleared up, consensus could be forged, and constructive plans could be made. We knew that it would be difficult to persuade such an illustrious crowd to come to a conference organized by outsiders they didn’t know, especially given the controversial topic, so we tried as hard as we could: we banned media from attending, we located it in a beach resort in January (in Puerto Rico), we made it free (thanks to the generosity of Jaan Tallinn), and we gave it the most non-alarmist title we could come up with: “The Future of AI: Opportunities and Challenges.” Most importantly, we teamed up with Stuart Russell, thanks to whom we were able to grow the organizing committee to include a group of AI leaders from both academia and industry—including Demis Hassabis from Google’s DeepMind, who went on to show that AI can beat humans even at the game of Go. The more I got to know Demis, the more I realized that he had ambition not only to make AI powerful, but also to make it beneficial.
The result was a remarkable meeting of minds (figure 1.3). The AI researchers were joined by top economists, legal scholars, tech leaders (including Elon Musk) and other thinkers (including Vernor Vinge, who coined the term “singularity,” which is the focus of chapter 4). The outcome surpassed even our most optimistic expectations. Perhaps it was a combination of the sunshine and the wine, or perhaps it was just that the time was right: despite the controversial topic, a remarkable consensus emerged, which we codified in an open letter2 that ended up getting signed by over eight thousand people, including a veritable who’s who in AI. The gist of the letter was that the goal of AI should be redefined: the goal should be to create not undirected intelligence, but beneficial intelligence. The letter also mentioned a detailed list of research topics that the conference participants agreed would further this goal. The beneficial-AI movement had started going mainstream. We’ll follow its subsequent progress later in the book.