Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence

Year of publication: 2017. Publisher: Knopf Doubleday Publishing Group. Genre: popular science. Language: English.

Life 3.0: Being Human in the Age of Artificial Intelligence: summary and description


How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology--and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.


THE BOTTOM LINE:

• Compared to cosmic timescales of billions of years, an intelligence explosion is a sudden event where technology rapidly plateaus at a level limited only by the laws of physics.

• This technological plateau is vastly higher than today’s technology, allowing a given amount of matter to generate about ten billion times more energy (using sphalerons or black holes), store 12–18 orders of magnitude more information or compute 31–41 orders of magnitude faster—or to be converted to any other desired form of matter.

• Superintelligent life would not only make such dramatically more efficient use of its existing resources, but would also be able to grow today’s biosphere by about 32 orders of magnitude by acquiring more resources through cosmic settlement at near light speed.

• Dark energy limits the cosmic expansion of superintelligent life and also protects it from distant expanding death bubbles or hostile civilizations. The threat of dark energy tearing cosmic civilizations apart motivates massive cosmic engineering projects, including wormhole construction if this turns out to be feasible.

• The main commodity shared or traded across cosmic distances is likely to be information.

• Barring wormholes, the light-speed limit on communication poses severe challenges for coordination and control across a cosmic civilization. A distant central hub may incentivize its superintelligent “nodes” to cooperate either through rewards or through threats, say by deploying a local guard AI programmed to destroy the node by setting off a supernova or quasar unless the rules are obeyed.

• The collision of two expanding civilizations may result in assimilation, cooperation or war, where the latter is arguably less likely than it is between today’s civilizations.

• Despite popular belief to the contrary, it’s quite plausible that we’re the only life form capable of making our observable Universe come alive in the future.

• If we don’t improve our technology, the question isn’t whether humanity will go extinct, but merely how: will an asteroid, a supervolcano, the burning heat of the aging Sun or some other calamity get us first?

• If we do keep improving our technology with enough care, foresight and planning to avoid pitfalls, life has the potential to flourish on Earth and far beyond for many billions of years, beyond the wildest dreams of our ancestors.

*1 If you work in the energy sector, you may be used to instead defining efficiency as the fraction of the energy released that’s in a useful form.

*2 If no suitable nature-made black hole can be found in the nearby universe, a new one can be created by putting lots of matter in a sufficiently small space.

*3 This is a slight oversimplification, because Hawking radiation also includes some particles from which it’s hard to extract useful work. Large black holes are only 90% efficient, because about 10% of the energy is radiated in the form of gravitons: extremely shy particles that are almost impossible to detect, let alone extract useful work from. As the black hole continues evaporating and shrinking, the efficiency drops further because the Hawking radiation starts including neutrinos and other massive particles.

*4 For Douglas Adams fans out there, note that this elegantly yields the answer to the ultimate question of life, the universe and everything. More precisely, the efficiency is 1 − 1/√3 ≈ 42%.

*5 If you feed the black hole by placing a gas cloud around it that rotates slowly in the same direction, then this gas will spin ever faster as it’s pulled in and eaten, boosting the black hole’s rotation, just as a figure-skater spins faster when pulling in her arms. This may keep the hole maximally spinning, enabling you to extract first 42% of the gas energy and then 29% of the remainder, for a total efficiency of 42% + (1-42%)×29% ≈ 59%.
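The two-stage efficiency in this footnote is easy to verify by hand; the sketch below just restates the book's quoted percentages in Python, with the variable names being my own labels for the two stages:

```python
# Efficiencies quoted in footnotes *4 and *5 (the book's figures).
eta_feeding = 1 - 1 / 3**0.5   # ~42%: energy released by gas fed into a maximally spinning black hole
eta_spindown = 0.29            # ~29%: rotational energy extractable from the hole afterwards

# Extract 42% of the gas energy first, then 29% of what remains.
eta_total = eta_feeding + (1 - eta_feeding) * eta_spindown
print(f"{eta_feeding:.1%} + {eta_spindown:.0%} of the rest = {eta_total:.1%}")
# → 42.3% + 29% of the rest = 59.0%
```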

*6 It needs to get hot enough to re-unify the electromagnetic and weak forces, which happens when particles move about as fast as when they’ve been accelerated by 200 billion volts in a particle collider.

*7 Above we only discussed matter made of atoms. There is about six times more dark matter, but it’s very elusive and hard to catch, routinely flying straight through Earth and out the other side, so it remains to be seen whether it’s possible for future life to capture and utilize it.

*8 The cosmic mathematics comes out remarkably simple: if the civilization expands through the expanding space not at the speed of light c but at some slower speed v, the number of galaxies settled gets reduced by a factor (v/c)³. This means that slowpoke civilizations get severely penalized, with one that expands 10 times slower ultimately settling 1,000 times fewer galaxies.
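The cubic penalty is simple enough to tabulate directly; a minimal sketch, with the sample speeds chosen purely for illustration:

```python
def settled_fraction(v_over_c):
    """Fraction of the maximum reachable galaxies settled at expansion speed v < c.

    Per the footnote, the count scales as (v/c)^3 in our expanding universe.
    """
    return v_over_c ** 3

for v in (1.0, 0.5, 0.1):
    print(f"v = {v:.1f}c -> fraction {settled_fraction(v):.4f}")
# A civilization expanding 10 times slower settles 1,000 times fewer galaxies.
```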

*9 However, John Gribbin comes to a similar conclusion in his 2011 book Alone in the Universe. For a spectrum of intriguing perspectives on this question, I also recommend Paul Davies’ 2011 book The Eerie Silence.

Chapter 7 Goals

The mystery of human existence lies not in just staying alive, but in finding something to live for.

Fyodor Dostoyevsky, The Brothers Karamazov

Life is a journey, not a destination.

Ralph Waldo Emerson

If I had to summarize in a single word what the thorniest AI controversies are about, it would be “goals”: Should we give AI goals, and if so, whose goals? How can we give AI goals? Can we ensure that these goals are retained even if the AI gets smarter? Can we change the goals of an AI that’s smarter than us? What are our ultimate goals? These questions are not only difficult, but also crucial for the future of life: if we don’t know what we want, we’re less likely to get it, and if we cede control to machines that don’t share our goals, then we’re likely to get what we don’t want.

Physics: The Origin of Goals

To shed light on these questions, let’s first explore the ultimate origin of goals. When we look around us in the world, some processes strike us as goal-oriented while others don’t. Consider, for example, the process of a soccer ball being kicked for the game-winning shot. The behavior of the ball itself does not appear goal-oriented, and is most economically explained in terms of Newton’s laws of motion, as a reaction to the kick. The behavior of the player, on the other hand, is most economically explained not mechanistically in terms of atoms pushing each other around, but in terms of her having the goal of maximizing her team’s score. How did such goal-oriented behavior emerge from the physics of our early Universe, which consisted merely of a bunch of particles bouncing around seemingly without goals?

Intriguingly, the ultimate roots of goal-oriented behavior can be found in the laws of physics themselves, and manifest themselves even in simple processes that don’t involve life. If a lifeguard rescues a swimmer, as in figure 7.1, we expect her not to go in a straight line, but to run a bit further along the beach where she can go faster than in the water, thereby turning slightly when she enters the water. We naturally interpret her choice of trajectory as goal-oriented, since out of all possible trajectories, she’s deliberately choosing the optimal one that gets her to the swimmer as fast as possible. Yet a simple light ray similarly bends when it enters water (see figure 7.1), also minimizing the travel time to its destination! How can this be?

This is known in physics as Fermat’s principle, articulated in 1662, and it provides an alternative way of predicting the behavior of light rays. Remarkably, physicists have since discovered that all laws of classical physics can be mathematically reformulated in an analogous way: out of all ways that nature could choose to do something, it prefers the optimal way, which typically boils down to minimizing or maximizing some quantity. There are two mathematically equivalent ways of describing each physical law: either as the past causing the future, or as nature optimizing something. Although the second way usually isn’t taught in introductory physics courses because the math is tougher, I feel that it’s more elegant and profound. If a person is trying to optimize something (for example, their score, their wealth or their happiness) we’ll naturally describe their pursuit of it as goal-oriented. So if nature itself is trying to optimize something, then no wonder that goal-oriented behavior can emerge: it was hardwired in from the start, in the very laws of physics.
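Fermat's time-minimization can be demonstrated numerically. The sketch below invents distances and speeds for the lifeguard scenario of figure 7.1 (all numbers are assumptions for illustration): it searches for the shoreline entry point that minimizes total travel time, then checks that the optimum satisfies Snell's law, sin θ / v equal on both sides, which is exactly the condition a refracting light ray obeys.

```python
import math

# Hypothetical geometry: lifeguard starts 3 m from the waterline, the swimmer
# is 2 m out; their positions are 10 m apart along the shore. Speeds assumed.
a, b, d = 3.0, 2.0, 10.0       # metres
v_sand, v_water = 7.0, 2.0     # metres per second

def travel_time(x):
    # Run to shoreline point x, then swim the rest: total time for that path.
    return math.hypot(a, x) / v_sand + math.hypot(b, d - x) / v_water

# Brute-force grid search for the fastest entry point along the shoreline.
best_x = min((d * i / 100_000 for i in range(100_001)), key=travel_time)

# Fermat/Snell check: sin(angle)/speed matches on both sides at the optimum.
sin_sand = best_x / math.hypot(a, best_x)
sin_water = (d - best_x) / math.hypot(b, d - best_x)
print(abs(sin_sand / v_sand - sin_water / v_water) < 1e-3)  # True
```

The straight-line path is not the fastest one: the optimal entry point lies well down the beach, so the path bends at the water's edge just as a light ray bends at the surface.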

