Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence

How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology--and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.

As we’ve explored above, the only reason that we humans have any preferences at all may be that we’re the solution to an evolutionary optimization problem. Thus all normative words in our human language, such as “delicious,” “fragrant,” “beautiful,” “comfortable,” “interesting,” “sexy,” “meaningful,” “happy” and “good,” trace their origin to this evolutionary optimization: there is therefore no guarantee that a superintelligent AI would find them rigorously definable. Even if the AI learned to accurately predict the preferences of some representative human, it wouldn’t be able to compute the goodness function for most particle arrangements: the vast majority of possible particle arrangements correspond to strange cosmic scenarios with no stars, planets or people whatsoever, with which humans have no experience, so who is to say how “good” they are?

There are of course some functions of the cosmic particle arrangement that can be rigorously defined, and we even know of physical systems that evolve to maximize some of them. For example, we’ve already discussed how many systems evolve to maximize their entropy, which in the absence of gravity eventually leads to heat death, where everything is boringly uniform and unchanging. So entropy is hardly something we would want our AI to call “goodness” and strive to maximize. Here are a few examples of other quantities that one could strive to maximize and that may be rigorously definable in terms of particle arrangements (a few of them are written out as formulas just after this list):

• The fraction of all the matter in our Universe that’s in the form of a particular organism, say humans or E. coli (inspired by evolutionary inclusive-fitness maximization)

• The ability of an AI to predict the future, which AI researcher Marcus Hutter argues is a good measure of its intelligence

• What AI researchers Alex Wissner-Gross and Cameron Freer term causal entropy (a proxy for future opportunities), which they argue is the hallmark of intelligence

• The computational capacity of our Universe

• The algorithmic complexity of our Universe (how many bits are needed to describe it)

• The amount of consciousness in our Universe (see next chapter)
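
To make “rigorously definable” a bit more concrete, here is a rough sketch of how three of the quantities above can be written down, using standard notation from statistical mechanics, algorithmic information theory and Wissner-Gross and Freer’s work rather than anything defined in this book (the symbols Ω, U, p, T_c, S_c and τ below are those conventions, not the author’s):

\[
S = k_B \ln \Omega, \qquad
K(x) = \min\{\,|p| : U(p) = x\,\}, \qquad
\mathbf{F}(X_0,\tau) = T_c\,\nabla_X S_c(X,\tau)\big|_{X_0}.
\]

Here S is the Boltzmann entropy of a macrostate compatible with Ω microscopic particle arrangements, K(x) is the algorithmic complexity of a description x (the length in bits of the shortest program p that makes a fixed universal computer U print x), and F is the causal entropic force, which pushes a system toward states whose possible future paths over a time horizon τ have the largest path entropy S_c. Each is a well-defined function of the particle arrangement, which is precisely what makes such quantities candidates for goals that stay meaningful as an AI gets smarter; whether any of them is desirable is, as argued below, a separate question.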

However, when one starts with a physics perspective, where our cosmos consists of elementary particles in motion, it’s hard to see how one rather than another interpretation of “goodness” would naturally stand out as special. We have yet to identify any final goal for our Universe that appears both definable and desirable. The only currently programmable goals that are guaranteed to remain truly well-defined as an AI gets progressively more intelligent are goals expressed in terms of physical quantities alone, such as particle arrangements, energy and entropy. However, we currently have no reason to believe that any such definable goals will be desirable in guaranteeing the survival of humanity.

Contrariwise, it appears that we humans are a historical accident, and aren’t the optimal solution to any well-defined physics problem. This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us. This means that to wisely decide what to do about AI development, we humans need to confront not only traditional computational challenges, but also some of the most obdurate questions in philosophy. To program a self-driving car, we need to solve the trolley problem of whom to hit during an accident. To program a friendly AI, we need to capture the meaning of life. What’s “meaning”? What’s “life”? What’s the ultimate ethical imperative? In other words, how should we strive to shape the future of our Universe? If we cede control to a superintelligence before answering these questions rigorously, the answer it comes up with is unlikely to involve us. This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation!

THE BOTTOM LINE:

• The ultimate origin of goal-oriented behavior lies in the laws of physics, which involve optimization.

• Thermodynamics has the built-in goal of dissipation: to increase a measure of messiness that’s called entropy.

• Life is a phenomenon that can help dissipate (increase overall messiness) even faster by retaining or growing its complexity and replicating while increasing the messiness of its environment.

• Darwinian evolution shifts the goal-oriented behavior from dissipation to replication.

• Intelligence is the ability to accomplish complex goals.

• Since we humans don’t always have the resources to figure out the truly optimal replication strategy, we’ve evolved useful rules of thumb that guide our decisions: feelings such as hunger, thirst, pain, lust and compassion.

• We therefore no longer have a simple goal such as replication; when our feelings conflict with the goal of our genes, we obey our feelings, as by using birth control.

• We’re building increasingly intelligent machines to help us accomplish our goals. Insofar as we build such machines to exhibit goal-oriented behavior, we strive to align the machine goals with ours.

• Aligning machine goals with our own involves three unsolved problems: making machines learn them, adopt them and retain them.

• AI can be created to have virtually any goal, but almost any sufficiently ambitious goal can lead to subgoals of self-preservation, resource acquisition and curiosity to understand the world better—the former two may potentially lead a superintelligent AI to cause problems for humans, and the latter may prevent it from retaining the goals we give it.

• Although many broad ethical principles are agreed upon by most humans, it’s unclear how to apply them to other entities, such as non-human animals and future AIs.

• It’s unclear how to imbue a superintelligent AI with an ultimate goal that neither is undefined nor leads to the elimination of humanity, making it timely to rekindle research on some of the thorniest issues in philosophy!

*1 A rule of thumb that many insects use for flying in a straight line is to assume that a bright light is the Sun and fly at a fixed angle relative to it. If the light turns out to be a nearby flame, this hack can unfortunately trick the bug into an inward death spiral.
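
As a mathematical aside that the footnote does not spell out: a flight path that keeps a fixed angle γ between the direction of travel and the line to a point light source is, in polar coordinates (r, θ) centred on the source, an equiangular (logarithmic) spiral,

\[
\tan\gamma = \frac{r\,d\theta}{dr}
\quad\Longrightarrow\quad
r(\theta) = r_0\, e^{\theta\cot\gamma}.
\]

For the Sun, effectively at infinite distance, the same fixed-angle rule yields a straight line; for a nearby flame, unless γ is exactly 90°, the radius grows or shrinks exponentially with the winding angle, and in the shrinking case the insect traces exactly the inward death spiral described above.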

*2 I’m using the term “improving its software” in the broadest possible sense, including not only optimizing its algorithms but also making its decision-making process more rational, so that it gets as good as possible at attaining its goals.

Chapter 8 Consciousness

I cannot imagine a consistent theory of everything that ignores consciousness.

Andrei Linde, 2002

We should strive to grow consciousness itself—to generate bigger, brighter lights in an otherwise dark universe.

Giulio Tononi, 2012

We’ve seen that AI can help us create a wonderful future if we manage to find answers to some of the oldest and toughest problems in philosophy—by the time we need them. We face, in Nick Bostrom’s words, philosophy with a deadline. In this chapter, let’s explore one of the thorniest philosophical topics of all: consciousness.

Who Cares?

Consciousness is controversial. If you mention the “C-word” to an AI researcher, neuroscientist or psychologist, they may roll their eyes. If they’re your mentor, they might instead take pity on you and try to talk you out of wasting your time on what they consider a hopeless and unscientific problem. Indeed, my friend Christof Koch, a renowned neuroscientist who leads the Allen Institute for Brain Science, told me that he was once warned not to work on consciousness before he had tenure, by none other than Nobel laureate Francis Crick. If you look up “consciousness” in the 1989 Macmillan Dictionary of Psychology, you’re informed that “Nothing worth reading has been written on it.”1 As I’ll explain in this chapter, I’m more optimistic!
