Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence

Published 2017 by Knopf Doubleday Publishing Group.

Life 3.0: Being Human in the Age of Artificial Intelligence: Summary

How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology--and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.

In this example, there was a human in the loop making the final decision, who under time pressure placed too much trust in what the automated system told him. So far, according to defense officials around the world, all deployed weapons systems have a human in the loop, with the exception of low-tech booby traps such as land mines. But development is now under way of truly autonomous weapons that select and attack targets entirely on their own. It’s militarily tempting to take all humans out of the loop to gain speed: in a dogfight between a fully autonomous drone that can respond instantly and a drone reacting more sluggishly because it’s remote-controlled by a human halfway around the world, which one do you think would win?

However, there have been close calls where we were extremely lucky that there was a human in the loop. On October 27, 1962, during the Cuban Missile Crisis, eleven U.S. Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the U.S. “quarantine” area. What they didn’t know was that the temperature onboard had risen past 45°C (113°F) because the submarine’s batteries were running out and the air-conditioning had stopped. On the verge of carbon dioxide poisoning, many crew members had fainted. The crew had had no contact with Moscow for days and didn’t know whether World War III had already begun. Then the Americans started dropping small depth charges, which they had, unbeknownst to the crew, told Moscow were merely meant to force the sub to surface and leave. “We thought—that’s it—the end,” crew member V. P. Orlov recalled. “It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer.” What the Americans also didn’t know was that the B-59 crew had a nuclear torpedo that they were authorized to launch without clearing it with Moscow. Indeed, Captain Savitski decided to launch the nuclear torpedo. Valentin Grigorievich, the torpedo officer, exclaimed: “We will die, but we will sink them all—we will not disgrace our navy!” Fortunately, the decision to launch had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no. It’s sobering that very few have heard of Arkhipov, although his decision may have averted World War III and been the single most valuable contribution to humanity in modern history.38 It’s also sobering to contemplate what might have happened had B-59 been an autonomous AI-controlled submarine with no humans in the loop.

Two decades later, on September 9, 1983, tensions were again high between the superpowers: the Soviet Union had recently been called an “evil empire” by U.S. president Ronald Reagan, and just the previous week, it had shot down a Korean Airlines passenger plane that strayed into its airspace, killing 269 people—including a U.S. congressman. Now an automated Soviet early-warning system reported that the United States had launched five land-based nuclear missiles at the Soviet Union, leaving Officer Stanislav Petrov merely minutes to decide whether this was a false alarm. The satellite was found to be operating properly, so following protocol would have led him to report an incoming nuclear attack. Instead, he trusted his gut instinct, figuring that the United States was unlikely to attack with only five missiles, and reported to his commanders that it was a false alarm without knowing this to be true. It later became clear that a satellite had mistaken the Sun’s reflections off cloud tops for flames from rocket engines.39 I wonder what would have happened if Petrov had been replaced by an AI system that dutifully followed protocol.
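Petrov’s inference can be made concrete with a toy Bayesian sketch. The numbers below are purely illustrative assumptions, not historical estimates: a genuine first strike is taken to be vanishingly unlikely a priori, and a real first strike is assumed to almost never involve only five missiles, so even a correctly functioning report of five launches leaves the probability of a real attack tiny.

```python
# Toy Bayes' rule calculation for the Petrov scenario.
# All three numbers below are illustrative assumptions, not historical data.
p_attack = 1e-5                   # prior: a genuine U.S. first strike on a given day
p_report_given_attack = 0.01      # a real first strike involving only ~5 missiles
p_report_given_no_attack = 0.001  # the warning system spuriously reporting 5 launches

numerator = p_report_given_attack * p_attack
evidence = numerator + p_report_given_no_attack * (1 - p_attack)
posterior = numerator / evidence

print(f"P(real attack | report of five missiles) = {posterior:.4%}")
# Roughly 0.01%: the inference Petrov made ("they wouldn't attack with only
# five missiles") keeps the posterior tiny, whereas a protocol that treats any
# valid-looking report as a real attack ignores this prior reasoning entirely.
```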

The Next Arms Race?

As you’ve undoubtedly guessed by now, I personally have serious concerns about autonomous weapons systems. But I haven’t even begun to tell you about my main worry: the endpoint of an arms race in AI weapons. In July 2015, I expressed this worry in the following open letter together with Stuart Russell, with helpful feedback from my colleagues at the Future of Life Institute:40

AUTONOMOUS WEAPONS:

An Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is practically if not legally feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they’ll become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

To make it harder to dismiss our concerns as coming only from pacifist tree-huggers, I wanted to get our letter signed by as many hardcore AI researchers and roboticists as possible. The International Campaign for Robotic Arms Control had previously amassed hundreds of signatories who called for a ban on killer robots, and I suspected that we could do even better. I knew that professional organizations would be reluctant to share their massive member email lists for a purpose that could be construed as political, so I scraped together lists of researchers’ names and institutions from online documents and advertised the task of finding their email addresses on MTurk—the Amazon Mechanical Turk crowdsourcing platform. Most researchers have their email addresses listed on their university websites, and twenty-four hours and $54 later, I was the proud owner of a mailing list of hundreds of AI researchers who’d been successful enough to be elected Fellows of the Association for the Advancement of Artificial Intelligence (AAAI). One of them was the British-Australian AI professor Toby Walsh, who kindly agreed to email everyone else on the list and help spearhead our campaign. MTurk workers around the world produced additional mailing lists for Toby, and before long, over 3,000 AI and robotics researchers had signed our open letter, including six past AAAI presidents and AI industry leaders from Google, Facebook, Microsoft and Tesla. An army of FLI volunteers worked tirelessly to validate the signatory lists, removing spoof entries such as Bill Clinton and Sarah Connor. Over 17,000 others signed too, including Stephen Hawking, and after Toby organized a press conference about this at the International Joint Conference on Artificial Intelligence, it became a major news story around the world.
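The validation step lends itself to a simple script. The sketch below is only illustrative and assumes things the text does not specify: a hypothetical CSV of signatures with name, affiliation and email columns, and a hand-maintained blocklist of spoof names. It does not reflect the actual tooling the FLI volunteers used.

```python
# Minimal sketch of cleaning a signatory list: drop blocklisted spoof names
# and duplicate email addresses. File names and columns are hypothetical.
import csv

SPOOF_NAMES = {"bill clinton", "sarah connor"}  # spoofs mentioned in the text

def clean_signatories(in_path: str, out_path: str) -> int:
    """Write a deduplicated, spoof-filtered copy of the list; return the count kept."""
    seen_emails = set()
    kept = []
    with open(in_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = row["name"].strip().lower()
            email = row["email"].strip().lower()
            if name in SPOOF_NAMES or not email or email in seen_emails:
                continue  # skip known spoofs, blank emails, and duplicates
            seen_emails.add(email)
            kept.append(row)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "affiliation", "email"])
        writer.writeheader()
        writer.writerows(kept)
    return len(kept)

if __name__ == "__main__":
    print(clean_signatories("signatories_raw.csv", "signatories_clean.csv"))
```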
