Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence


Summary


How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology--and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.


How to best alter our laws to reflect AI progress is a fascinatingly controversial topic. One dispute reflects the tension between privacy and freedom of information. Freedom fans argue that the less privacy we have, the more evidence the courts will have, and the fairer the judgments will be. For example, if the government taps into everyone’s electronic devices to record where they are and what they type, click, say and do, many crimes would be readily solved, and additional ones could be prevented. Privacy advocates counter that they don’t want an Orwellian surveillance state, and that even if they did, there’s a risk of it turning into a totalitarian dictatorship of epic proportions. Moreover, machine-learning techniques have gotten better at analyzing brain data from fMRI scanners to determine what a person is thinking about and, in particular, whether they’re telling the truth or lying.37 If AI-assisted brain scanning technology became commonplace in courtrooms, the currently tedious process of establishing the facts of a case could be dramatically simplified and expedited, enabling faster trials and fairer judgments. But privacy advocates might worry about whether such systems occasionally make mistakes and, more fundamentally, whether our minds should be off-limits to government snooping. Governments that don’t support freedom of thought could use such technology to criminalize the holding of certain beliefs and opinions. Where would you draw the line between justice and privacy, and between protecting society and protecting personal freedom? Wherever you draw it, will it gradually but inexorably move toward reduced privacy to compensate for the fact that evidence gets easier to fake? For example, once AI becomes able to generate fully realistic fake videos of you committing crimes, will you vote for a system where the government tracks everyone’s whereabouts at all times and can provide you with an ironclad alibi if needed?

Another captivating controversy is whether AI research should be regulated or, more generally, what incentives policymakers should give AI researchers to maximize the chances of a beneficial outcome. Some AI researchers have argued against all forms of regulation of AI development, claiming that they would needlessly delay urgently needed innovation (for example, lifesaving self-driving cars) and would drive cutting-edge AI research underground and/or to other countries with more permissive governments. At the Puerto Rico beneficial-AI conference mentioned in the first chapter, Elon Musk argued that what we need right now from governments isn’t oversight but insight: specifically, technically capable people in government positions who can monitor AI’s progress and steer it if warranted down the road. He also argued that government regulation can sometimes nurture rather than stifle progress: for example, if government safety standards for self-driving cars can help reduce the number of self-driving-car accidents, then a public backlash is less likely and adoption of the new technology can be accelerated. The most safety-conscious AI companies might therefore favor regulation that forces less scrupulous competitors to match their high safety standards.

Yet another interesting legal controversy involves granting rights to machines. If self-driving cars cut the 32,000 annual U.S. traffic fatalities in half, perhaps carmakers won’t get 16,000 thank-you notes, but 16,000 lawsuits. So if a self-driving car causes an accident, who should be liable—its occupants, its owner or its manufacturer? Legal scholar David Vladeck has proposed a fourth answer: the car itself! Specifically, he proposes that self-driving cars be allowed (and required) to hold car insurance. This way, models with a sterling safety record will qualify for premiums that are very low, probably lower than what’s available to human drivers, while poorly designed models from sloppy manufacturers will only qualify for insurance policies that make them prohibitively expensive to own.
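
To make the economics of this proposal concrete, here is a minimal sketch (in Python) of the actuarial logic it relies on: a car model’s premium is its expected annual claims cost, scaled by an insurer’s loading factor. All model names, accident rates and costs below are hypothetical assumptions chosen for illustration, not figures from Vladeck or from this book.

# A minimal sketch of risk-based pricing for self-driving car insurance.
# All numbers and names are hypothetical assumptions for illustration.

def fair_premium(accidents_per_million_miles, miles_per_year, avg_claim_cost, loading=1.25):
    """Expected annual claims cost, scaled by an insurer's loading factor."""
    expected_accidents = accidents_per_million_miles * miles_per_year / 1e6
    return expected_accidents * avg_claim_cost * loading

accident_rates = {          # accidents per million miles (invented)
    "WellTestedBot": 0.05,
    "TypicalHuman": 2.0,
    "SloppyBot": 8.0,
}

for model, rate in accident_rates.items():
    premium = fair_premium(rate, miles_per_year=12_000, avg_claim_cost=20_000)
    print(f"{model}: ${premium:,.0f}/year")

Under these invented numbers, the well-tested model pays about $15 a year, a typical human-level driver about $600, and the sloppy design $2,400; the same mechanism, scaled up, is what would make poorly designed models prohibitively expensive to own.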

But if machines such as cars are allowed to hold insurance policies, should they also be able to own money and property? If so, there’s nothing legally stopping smart computers from making money on the stock market and using it to buy online services. Once a computer starts paying humans to work for it, it can accomplish anything that humans can do. If AI systems eventually get better than humans at investing (which they already are in some domains), this could lead to a situation where most of our economy is owned and controlled by machines. Is this what we want? If it sounds far-off, consider that most of our economy is already owned by another form of non-human entity: corporations, which are often more powerful than any one person in them and can to some extent take on a life of their own.

If you’re OK with granting machines the rights to own property, then how about granting them the right to vote? If so, should each computer program get one vote, even though it can trivially make trillions of copies of itself in the cloud if it’s rich enough, thereby guaranteeing that it will decide all elections? If not, then on what moral basis are we discriminating against machine minds relative to human minds? Does it make a difference if machine minds are conscious in the sense of having a subjective experience like we do? We’ll explore in greater depth these controversial questions related to computer control of our world in the next chapter, and questions related to machine consciousness in chapter 8.

Weapons

Since time immemorial, humanity has suffered from famine, disease and war. We’ve already mentioned how AI may help reduce famine and disease, so how about war? Some argue that nuclear weapons deter war between the countries that own them because they’re so horrifying, so how about letting all nations build even more horrifying AI-based weapons in the hope of ending all war forever? If you’re unpersuaded by that argument and believe that future wars are inevitable, how about using AI to make these wars more humane? If wars consist merely of machines fighting machines, then no human soldiers or civilians need get killed. Moreover, future AI-powered drones and other autonomous weapon systems (AWS; also known by their opponents as “killer robots”) can hopefully be made more fair and rational than human soldiers: equipped with superhuman sensors and unafraid of getting killed, they might remain cool, calculating and level-headed even in the heat of battle, and be less likely to accidentally kill civilians.


Figure 3.4: Whereas today’s military drones (such as this U.S. Air Force MQ-1 Predator) are remote-controlled by humans, future AI-powered drones have the potential to take humans out of the loop, using an algorithm to decide whom to target and kill.

A Human in the Loop

But what if the automated systems are buggy, confusing or don’t behave as expected? The U.S. Phalanx system for Aegis-class cruisers automatically detects, tracks and attacks threats such as anti-ship missiles and aircraft. The USS Vincennes was a guided missile cruiser nicknamed “Robocruiser” in reference to its Aegis system, and on July 3, 1988, in the midst of a skirmish with Iranian gunboats during the Iran-Iraq war, its radar system warned of an incoming aircraft. Captain William Rogers III inferred that they were being attacked by a diving Iranian F-14 fighter jet and gave the Aegis system approval to fire. What he didn’t realize at the time was that they shot down Iran Air Flight 655, a civilian Iranian passenger jet, killing all 290 people on board and causing international outrage. Subsequent investigation implicated a confusing user interface that didn’t automatically show which dots on the radar screen were civilian planes (Flight 655 followed its regular daily flight path and had its civilian aircraft transponder on) or which dots were descending (as for an attack) vs. ascending (as Flight 655 was doing after takeoff from Tehran). Instead, when the automated system was queried for information about the mysterious aircraft, it reported “descending” because that was the status of a different aircraft to which it had confusingly reassigned a number used by the navy to track planes: what was descending was instead a U.S. surface combat air patrol plane operating far away in the Gulf of Oman.
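
As a programmer’s-eye caricature of that failure mode, here is a toy Python sketch: a tracking table silently rebinds a track number to a different aircraft, so a later query by that number returns the wrong plane’s status. The track number, data structures and status strings are invented for illustration and bear no relation to the actual Aegis software.

# A toy model of the track-number reassignment described above.
# Everything here is a hypothetical illustration, not the real system.

tracks = {}  # track number -> status of the aircraft currently holding it

# The contact is assigned a track number: Flight 655, climbing after takeoff.
tracks[4474] = {"aircraft": "Iran Air Flight 655", "trend": "ascending"}

# The system later reassigns the same number to a distant, descending
# U.S. patrol plane, without flagging the change to the crew.
tracks[4474] = {"aircraft": "U.S. combat air patrol", "trend": "descending"}

# The crew queries the number they still associate with the unknown contact
# and reads "descending", consistent with an attack profile.
print("Track 4474:", tracks[4474]["trend"])  # descending, but the wrong plane

The bug class is mundane, an identifier silently rebound to new data, which is exactly why interface design deserves as much scrutiny as the algorithms themselves in systems like these.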
