Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence (Knopf Doubleday Publishing Group, 2017)

How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology--and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.

Although thinkers have pondered the mystery of consciousness for thousands of years, the rise of AI adds a sudden urgency, in particular to the question of predicting which intelligent entities have subjective experiences. As we saw in chapter 3, the question of whether intelligent machines should be granted some form of rights depends crucially on whether they’re conscious and can suffer or feel joy. As we discussed in chapter 7, it becomes hopeless to formulate utilitarian ethics based on maximizing positive experiences without knowing which intelligent entities are capable of having them. As mentioned in chapter 5, some people might prefer their robots to be unconscious to avoid feeling slave-owner guilt. On the other hand, they may desire the opposite if they upload their minds to break free from biological limitations: after all, what’s the point of uploading yourself into a robot that talks and acts like you if it’s a mere unconscious zombie, by which I mean that being the uploaded you doesn’t feel like anything? Isn’t this equivalent to committing suicide from your subjective point of view, even though your friends may not realize that your subjective experience has died?

For the long-term cosmic future of life (chapter 6), understanding what’s conscious and what’s not becomes pivotal: if technology enables intelligent life to flourish throughout our Universe for billions of years, how can we be sure that this life is conscious and able to appreciate what’s happening? If not, then would it be, in the words of the famous physicist Erwin Schrödinger, “a play before empty benches, not existing for anybody, thus quite properly speaking not existing”?2 In other words, if we enable high-tech descendants that we mistakenly think are conscious, would this be the ultimate zombie apocalypse, transforming our grand cosmic endowment into nothing but an astronomical waste of space?

What Is Consciousness?

Many arguments about consciousness generate more heat than light because the antagonists are talking past each other, unaware that they’re using different definitions of the C-word. Just as with “life” and “intelligence,” there’s no undisputed correct definition of the word “consciousness.” Instead, there are many competing ones, including sentience, wakefulness, self-awareness, access to sensory input and ability to fuse information into a narrative.3 In our exploration of the future of intelligence, we want to take a maximally broad and inclusive view, not limited to the sorts of biological consciousness that exist so far. That’s why the definition I gave in chapter 1, which I’m sticking with throughout this book, is very broad:

consciousness = subjective experience

In other words, if it feels like something to be you right now, then you’re conscious. It’s this particular definition of consciousness that gets to the crux of all the AI-motivated questions in the previous section: Does it feel like something to be Prometheus, AlphaGo or a self-driving Tesla?

To appreciate how broad our consciousness definition is, note that it doesn’t mention behavior, perception, self-awareness, emotions or attention. So by this definition, you’re conscious also when you’re dreaming, even though you lack wakefulness or access to sensory input and (hopefully!) aren’t sleepwalking and doing things. Similarly, any system that experiences pain is conscious in this sense, even if it can’t move. Our definition leaves open the possibility that some future AI systems may be conscious too, even if they exist merely as software and aren’t connected to sensors or robotic bodies.

With this definition, it’s hard not to care about consciousness. As Yuval Noah Harari puts it in his book Homo Deus: 4 “If any scientist wants to argue that subjective experiences are irrelevant, their challenge is to explain why torture or rape are wrong without reference to any subjective experience.” Without such reference, it’s all just a bunch of elementary particles moving around according to the laws of physics—and what’s wrong with that?

What’s the Problem?

So what precisely is it that we don’t understand about consciousness? Few have thought harder about this question than David Chalmers, a famous Australian philosopher rarely seen without a playful smile and a black leather jacket—which my wife liked so much that she gave me a similar one for Christmas. He followed his heart into philosophy despite making the finals at the International Mathematics Olympiad—and despite the fact that his only B grade in college, shattering his otherwise straight As, was for an introductory philosophy course. Indeed, he seems utterly undeterred by put-downs or controversy, and I’ve been astonished by his ability to politely listen to uninformed and misguided criticism of his own work without even feeling the need to respond.

As David has emphasized, there are really two separate mysteries of the mind. First, there’s the mystery of how a brain processes information, which David calls the “easy” problems. For example, how does a brain attend to, interpret and respond to sensory input? How can it report on its internal state using language? Although these questions are actually extremely difficult, they’re by our definitions not mysteries of consciousness, but mysteries of intelligence: they ask how a brain remembers, computes and learns. Moreover, we saw in the first part of the book how AI researchers have started to make serious progress on solving many of these “easy problems” with machines—from playing Go to driving cars, analyzing images and processing natural language.

Then there’s the separate mystery of why you have a subjective experience, which David calls the hard problem. When you’re driving, you’re experiencing colors, sounds, emotions, and a feeling of self. But why are you experiencing anything at all? Does a self-driving car experience anything at all? If you’re racing against a self-driving car, you’re both inputting information from sensors, processing it and outputting motor commands. But subjectively experiencing driving is something logically separate—is it optional, and if so, what causes it?

I approach this hard problem of consciousness from a physics point of view. From my perspective, a conscious person is simply food, rearranged. So why is one arrangement conscious, but not the other? Moreover, physics teaches us that food is simply a large number of quarks and electrons, arranged in a certain way. So which particle arrangements are conscious and which aren’t? *1

Figure 8.1: Understanding the mind involves a hierarchy of problems. What David Chalmers calls the “easy” problems can be posed without mentioning subjective experience. The apparent fact that some but not all physical systems are conscious poses three separate questions. If we have a theory for answering the question that defines the “pretty hard problem,” then it can be experimentally tested. If it works, then we can build on it to tackle the tougher questions above.

What I like about this physics perspective is that it transforms the hard problem that we as humans have struggled with for millennia into a more focused version that's easier to tackle with the methods of science. Instead of starting with the hard problem of why an arrangement of particles can feel conscious, let's start with the hard fact that some arrangements of particles do feel conscious while others don't. For example, you know that the particles that make up your brain are in a conscious arrangement right now, but not when you're in deep dreamless sleep.
