Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence

Year of publication: 2017. Publisher: Knopf Doubleday Publishing Group. Language: English.

Life 3.0: Being Human in the Age of Artificial Intelligence: summary and description


How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology--and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.


However, if superintelligence develops technology that can readily rearrange elementary particles into any form of matter whatsoever, then it will eliminate most of the incentive for long-distance trade. Why bother shipping silver between distant solar systems when it’s simpler and quicker to transmute copper into silver by rearranging its particles? Why bother shipping high-tech machinery between galaxies when both the know-how and the raw materials (any matter will do) exist in both places? My guess is that in a cosmos teeming with superintelligence, almost the only commodity worth shipping long distances will be information. The only exception might be matter to be used for cosmic engineering projects—for example, to counteract the aforementioned destructive tendency of dark energy to tear civilizations apart. As opposed to traditional human trade, this matter can be shipped in any convenient bulk form whatsoever, perhaps even as an energy beam, since the receiving superintelligence can rapidly rearrange it into whatever objects it wants.

If sharing or trading of information emerges as the main driver of cosmic cooperation, then what sorts of information might be involved? Any desirable information will be valuable if generating it requires a massive and time-consuming computational effort. For example, a superintelligence may want answers to hard scientific questions about the nature of physical reality, hard mathematical questions about theorems and optimal algorithms and hard engineering questions about how to best build spectacular technology. Hedonistic life forms may want awesome digital entertainment and simulated experiences, and cosmic commerce may fuel demand for some form of cosmic cryptocurrency in the spirit of bitcoins.

Such sharing opportunities may incentivize information flow not only between entities of roughly equal power, but also up and down power hierarchies, say between solar-system-sized nodes and a galactic hub or between galaxy-sized nodes and a cosmic hub. The nodes might want this for the pleasure of being part of something greater, for being provided with answers and technologies that they couldn’t develop alone and for defense against external threats. They may also value the promise of near immortality through backup: just as many humans take solace in a belief that their minds will live on after their physical bodies die, an advanced AI may appreciate having its mind and knowledge live on in a hub supercomputer after its original physical hardware has depleted its energy reserves.

Conversely, the hub may want its nodes to help it with massive long-term computing tasks where the results aren’t urgently needed, so that it’s worth waiting thousands or millions of years for the answers. As we explored above, the hub may also want its nodes to help carry out massive cosmic engineering projects such as counteracting destructive dark energy by moving galactic mass concentrations together. If traversable wormholes turn out to be possible and buildable, then a top priority of a hub will probably be constructing a network of them to thwart dark energy and keep its empire connected indefinitely. The question of what ultimate goals a cosmic superintelligence may have is a fascinating and controversial one that we’ll explore further in chapter 7.

Controlling with the Stick

Terrestrial empires usually compel their subordinates to cooperate by using both the carrot and the stick. While subjects of the Roman Empire valued the technology, infrastructure and defense that they were offered as a reward for their cooperation, they also feared the inevitable repercussions of rebelling or not paying taxes. Because of the long time required to send troops from Rome to outlying provinces, part of the intimidation was delegated to local troops and loyal officials empowered to inflict near-instantaneous punishments. A superintelligent hub could use the analogous strategy of deploying a network of loyal guards throughout its cosmic empire. Since superintelligent subjects can be hard to control, the simplest viable strategy may be using AI guards that are programmed to be 100% loyal by virtue of being relatively dumb, simply monitoring whether all rules are obeyed and automatically triggering a doomsday device if not.

Suppose, for example, that the hub AI arranges for a white dwarf to be placed in the vicinity of a solar-system-sized civilization that it wishes to control. A white dwarf is the burnt-out husk of a modestly heavy star. Consisting largely of carbon, it resembles a giant diamond in the sky, and is so compact that it can weigh more than the Sun while being smaller than Earth. The Indian physicist Subrahmanyan Chandrasekhar famously proved that if you keep adding mass to it until it surpasses the Chandrasekhar limit, about 1.4 times the mass of our Sun, it will undergo a cataclysmic thermonuclear detonation known as a supernova of type Ia. If the hub AI has callously arranged for this white dwarf to be extremely close to its Chandrasekhar limit, the guard AI could be effective even if it were extremely dumb (indeed, largely because it was so dumb): it could be programmed to simply verify that the subjugated civilization had delivered its monthly quota of cosmic bitcoins, mathematical proofs or whatever other taxes were stipulated, and if not, toss enough mass onto the white dwarf to ignite the supernova and blow the entire region to smithereens.
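The appeal of this dumb-guard design is that its entire decision rule fits in a few lines. A toy sketch in Python of that rule (the function name, the 0.01-solar-mass margin and the API are illustrative assumptions, not anything from the book):

```python
# Threshold above which a carbon-oxygen white dwarf detonates as a type Ia
# supernova, in solar masses (the Chandrasekhar limit).
CHANDRASEKHAR_LIMIT = 1.4


def guard_step(white_dwarf_mass: float, tribute_delivered: bool) -> tuple[float, bool]:
    """One monitoring cycle of the dumb guard AI.

    Returns the (possibly increased) white-dwarf mass and whether the
    supernova was triggered. The guard checks exactly one thing: did the
    tribute arrive? There is nothing to negotiate with or outsmart.
    """
    if tribute_delivered:
        # Rules obeyed: leave the white dwarf untouched.
        return white_dwarf_mass, False
    # Rules broken: toss on enough mass to push past the limit.
    new_mass = max(white_dwarf_mass, CHANDRASEKHAR_LIMIT + 0.01)
    return new_mass, new_mass > CHANDRASEKHAR_LIMIT
```

The point of the sketch is that the guard's simplicity is itself the security feature: because it evaluates only a single boolean condition, a superintelligent subject has no reasoning process to manipulate.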

Galaxy-sized civilizations may be similarly controllable by placing large numbers of compact objects into tight orbits around the monster black hole at the galaxy center, and threatening to transform these masses into gas, for instance by colliding them. This gas would then start feeding the black hole, transforming it into a powerful quasar, potentially rendering much of the galaxy uninhabitable.

In summary, there are strong incentives for future life to cooperate over cosmic distances, but it’s a wide-open question whether such cooperation will be based mainly on mutual benefits or on brutal threats—the limits imposed by physics appear to allow both scenarios, so the outcome will depend on the prevailing goals and values. We’ll explore our ability to influence these goals and values of future life in chapter 7.

When Civilizations Clash

So far, we’ve only discussed scenarios where life expands into our cosmos from a single intelligence explosion. But what happens if life evolves independently in more than one place and two expanding civilizations meet?

If you consider a random solar system, there’s some probability that life will evolve on one of its planets, develop advanced technology and expand into space. This probability seems to be greater than zero, since technological life has evolved here in our Solar System and the laws of physics appear to allow space settlement. If space is large enough (indeed, the theory of cosmological inflation suggests that it is vast or infinite), then there will be many such expanding civilizations, as illustrated in figure 6.10. Jay Olson’s above-mentioned paper includes an elegant analysis of such expanding cosmic biospheres, and Toby Ord has performed a similar analysis with colleagues at the Future of Humanity Institute. Viewed in three dimensions, these cosmic biospheres are quite literally spheres as long as civilizations expand with the same speed in all directions. In spacetime, they look like the upper part of the champagne glass in figure 6.7, because dark energy ultimately limits how many galaxies each civilization can reach.

If the distance between neighboring space-settling civilizations is much larger than the maximum distance dark energy lets them expand, then they’ll never come into contact with each other or even find out about each other’s existence, so they’ll feel as if they’re alone in the cosmos. If our cosmos is more fecund so that neighbors are closer together, however, some civilizations will eventually overlap. What happens in these overlap regions? Will there be cooperation, competition or war?

