Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence

Year: 2017. Publisher: Knopf Doubleday Publishing Group. Language: English.

Life 3.0: Being Human in the Age of Artificial Intelligence: summary


How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology--and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.


The countdown to the announcement finally reached zero. The superintelligence panelists that I’d moderated still sat there next to me onstage in their chairs: Eliezer Yudkowsky, Elon Musk, Nick Bostrom, Richard Mallah, Murray Shanahan, Bart Selman, Shane Legg and Vernor Vinge. People gradually stopped applauding, but the panelists remained seated, because I’d told them to stay without explaining why. Meia later told me that her pulse reached the stratosphere around now, and that she clutched Viktoriya Krakovna’s calming hand under the table. I smiled, knowing that this was the moment we’d worked, hoped and waited for.

I was very happy that there was such consensus at the meeting that more research was needed for keeping AI beneficial, I said, and that there were so many concrete research directions we could work on right away. But there had been talk of serious risks in this session, I added, so it would be nice to raise our spirits and get into an upbeat mood before heading out to the bar and the conference banquet that had been set up outside. “And I’m therefore giving the microphone to…Elon Musk!” I felt that history was in the making as Elon took the mic and announced that he would donate a large amount of money to AI-safety research. Unsurprisingly, he brought down the house. As planned, he didn’t mention how much, but I knew that it was a cool $10 million, as we’d agreed.

Meia and I went to visit our parents in Sweden and Romania after the conference, and with bated breath, we watched the live-streamed rocket launch with my dad in Stockholm. The landing attempt unfortunately ended with what Elon euphemistically calls an RUD, “rapid unscheduled disassembly,” and pulling off a successful ocean landing took his team another fifteen months.3 However, all the satellites were successfully launched into orbit, as was our grants program via a tweet by Elon to his millions of followers.4

Mainstreaming AI Safety

A key goal of the Puerto Rico conference had been to mainstream AI-safety research, and it was exhilarating to see this unfold in multiple steps. First there was the meeting itself, where many researchers started feeling comfortable engaging with the topic once they realized that they were part of a growing community of peers. I was deeply touched by encouragement from many participants. For example, Cornell University AI professor Bart Selman emailed me saying, “I’ve honestly never seen a better organized or more exciting and intellectually stimulating scientific meeting.”

The next mainstreaming step began on January 11 when Elon tweeted “World’s top artificial intelligence developers sign open letter calling for AI-safety research,”5 linking to a sign-up page that soon racked up over eight thousand signatures, including many of the world’s most prominent AI builders. It suddenly became harder to claim that people concerned about AI safety didn’t know what they were talking about, because this now implied that a who’s who of leading AI researchers didn’t know what they were talking about. The open letter was reported by media around the world in a way that made us grateful that we’d barred journalists from our conference. Although the most alarmist word in the letter was “pitfalls,” it nonetheless triggered headlines such as “Elon Musk and Stephen Hawking Sign Open Letter in Hopes of Preventing Robot Uprising,” illustrated by murderous terminators. Of the hundreds of articles we spotted, our favorite was one mocking the others, writing that “a headline that conjures visions of skeletal androids stomping human skulls underfoot turns complex, transformative technology into a carnival sideshow.”6 Fortunately, there were also many sober news articles, and they gave us another challenge: keeping up with the torrent of new signatures, which needed to be manually verified to protect our credibility and weed out pranks such as “HAL 9000,” “Terminator,” “Sarah Jeanette Connor” and “Skynet.” For this and our future open letters, Viktoriya Krakovna and János Krámar helped organize a volunteer brigade of checkers that included Jesse Galef, Eric Gastfriend and Revathi Vinoth Kumar working in shifts, so that when Revathi went to sleep in India, she passed the baton to Eric in Boston, and so on.

The third mainstreaming step began four days later, when Elon tweeted a link to our announcement that he was donating $10 million to AI-safety research.7 A week later, we launched an online portal where researchers from around the world could apply and compete for this funding. We were able to whip the application system together so quickly only because Anthony and I had spent the previous decade running similar competitions for physics grants. The Open Philanthropy Project, a California-based charity focused on high-impact giving, generously agreed to top up Elon’s gift to allow us to give more grants. We weren’t sure how many applicants we’d get, since the topic was novel and the deadline was short. The response blew us away, with about three hundred teams from around the world asking for about $100 million. A panel of AI professors and other researchers carefully reviewed the proposals and selected thirty-seven winning teams, who were funded for up to three years. When we announced the list of winners, it marked the first time that the media response to our activities was fairly nuanced and free of killer-robot pictures. It was finally sinking in that AI safety wasn’t empty talk: there was actual useful work to be done, and lots of great research teams were rolling up their sleeves to join the effort.

The fourth mainstreaming step happened organically over the next two years, with scores of technical publications and dozens of workshops on AI safety around the world, typically as parts of mainstream AI conferences. Persistent people had tried for many years to engage the AI community in safety research, with limited success, but now things really took off. Many of these publications were funded by our grants program and we at FLI did our best to help organize and fund as many of these workshops as we could, but a growing fraction of them were enabled by AI researchers investing their own time and resources. As a result, ever more researchers learned about safety research from their own colleagues, discovering that aside from being useful, it could also be fun, involving interesting mathematical and computational problems to puzzle over.

Complicated equations aren’t everyone’s idea of fun, of course. Two years after our Puerto Rico conference, we preceded our Asilomar conference with a technical workshop where our FLI grant winners could showcase their research, and watched slide after slide with mathematical symbols on the big screen. Moshe Vardi, an AI professor at Rice University, joked that he knew we’d succeeded in establishing an AI-safety research field once the meetings got boring.

This dramatic growth of AI-safety work wasn’t limited to academia. Amazon, DeepMind, Facebook, Google, IBM and Microsoft launched an industry partnership for beneficial AI.8 Major new AI-safety donations enabled expanded research at our largest nonprofit sister organizations: the Machine Intelligence Research Institute in Berkeley, the Future of Humanity Institute in Oxford and the Centre for the Study of Existential Risk in Cambridge (UK). Further donations of $10 million or more kick-started additional beneficial-AI efforts: the Leverhulme Centre for the Future of Intelligence in Cambridge, the K&L Gates Endowment for Ethics and Computational Technologies in Pittsburgh and the Ethics and Governance of Artificial Intelligence Fund in Miami. Last but not least, with a billion-dollar commitment, Elon Musk partnered with other entrepreneurs to launch OpenAI, a nonprofit company in San Francisco pursuing beneficial AI. AI-safety research was here to stay.
