Max Tegmark - Life 3.0: Being Human in the Age of Artificial Intelligence

How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology--and there's nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who's helped mainstream research on how to keep AI beneficial.
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn't shy away from the full range of viewpoints or from the most controversial issues -- from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.

In the same way, I suspect that there are simpler ways to build human-level thinking machines than the solution evolution came up with, and even if we one day manage to replicate or upload brains, we’ll end up discovering one of those simpler solutions first. It will probably draw more than the twelve watts of power that your brain uses, but its engineers won’t be as obsessed with energy efficiency as evolution was, and soon enough they’ll be able to use their intelligent machines to design more energy-efficient ones.

What Will Actually Happen?

The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI. For this reason, we’ve spent this chapter exploring a broad spectrum of scenarios. I’ve attempted to be quite inclusive, spanning the full range of speculations I’ve seen or heard discussed by AI researchers and technologists: fast takeoff/slow takeoff/no takeoff, humans/machines/cyborgs in control, one/many centers of power, etc. Some people have told me that they’re sure that this or that won’t happen. However, I think it’s wise to be humble at this stage and acknowledge how little we know, because for each scenario discussed above, I know at least one well-respected AI researcher who views it as a real possibility.

As time passes and we reach certain forks in the road, we’ll start to answer key questions and narrow down the options. The first big question is “Will we ever create human-level AGI?” The premise of this chapter is that we will, but there are AI experts who think it will never happen, at least not for hundreds of years. Time will tell! As I mentioned earlier, about half of the AI experts at our Puerto Rico conference guessed that it would happen by 2055. At a follow-up conference we organized two years later, this had dropped to 2047.

Before any human-level AGI is created, we may start getting strong indications about whether this milestone is likely to be first met by computer engineering, mind uploading or some unforeseen novel approach. If the computer engineering approach to AI that currently dominates the field fails to deliver AGI for centuries, this will increase the chance that uploading will get there first, as happened (rather unrealistically) in the movie Transcendence.

As human-level AGI draws closer, we’ll be able to make more educated guesses about the answer to the next key question: “Will there be a fast takeoff, a slow takeoff or no takeoff?” As we saw above, a fast takeoff makes world takeover easier, while a slow one makes an outcome with many competing players more likely. Nick Bostrom dissects this question of takeoff speed in an analysis of what he calls optimization power and recalcitrance, which are basically the amount of quality effort applied to making AI smarter and the difficulty of making progress, respectively. The average rate of progress clearly increases if more optimization power is brought to bear on the task and decreases if more recalcitrance is encountered. He makes arguments for why recalcitrance might either increase or decrease as an AGI reaches and transcends human level, so keeping both options on the table is a safe bet. Turning to optimization power, however, it’s overwhelmingly likely to grow rapidly as the AGI transcends human level, for the reasons we saw in the Omega scenario: the main input to further optimization comes not from people but from the machine itself, so the more capable it gets, the faster it improves (as long as recalcitrance stays fairly constant).
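
To make Bostrom’s bookkeeping concrete, here is a minimal numerical sketch in Python. The functional forms and constants are my illustrative assumptions, not anything from Bostrom or from this book; the point is only that once the machine’s own effort dominates the optimization power, flat recalcitrance yields a fast takeoff while sharply rising recalcitrance yields a slow one.

```python
# Toy integration of Bostrom's takeoff model: dI/dt = O(I) / R(I),
# where I is intelligence, O is optimization power, R is recalcitrance.
# All functional forms and constants are illustrative assumptions.

def simulate(recalcitrance, steps=60, dt=0.1, human_level=1.0):
    """Integrate dI/dt = O(I)/R(I) with forward Euler."""
    intelligence = 0.5            # start below human level (arbitrary units)
    history = [intelligence]
    for _ in range(steps):
        human_effort = 1.0        # fixed effort from human researchers
        # Once the AI reaches human level, it contributes to its own
        # improvement -- the self-improvement feedback loop.
        machine_effort = intelligence if intelligence >= human_level else 0.0
        optimization_power = human_effort + machine_effort
        intelligence += dt * optimization_power / recalcitrance(intelligence)
        history.append(intelligence)
    return history

flat_r   = lambda i: 1.0          # progress never gets harder -> fast takeoff
rising_r = lambda i: 1.0 + i**2   # progress gets much harder -> slow takeoff

print("flat recalcitrance:  ", [round(x, 1) for x in simulate(flat_r)[::10]])
print("rising recalcitrance:", [round(x, 1) for x in simulate(rising_r)[::10]])
```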

For any process whose power grows at a rate proportional to its current power, the result is that its power keeps doubling at regular intervals. We call such growth exponential, and we call such processes explosions. If baby-making power grows in proportion to the size of the population, we can get a population explosion. If the creation of neutrons capable of fissioning plutonium grows in proportion to the number of such neutrons, we can get a nuclear explosion. If machine intelligence grows at a rate proportional to its current power, we can get an intelligence explosion. All such explosions are characterized by the time they take to double their power. If that time is hours or days for an intelligence explosion, as in the Omega scenario, we have a fast takeoff on our hands.
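
In symbols (the notation is mine, not the book’s), all three examples are instances of one differential equation, and the constant doubling time falls straight out of it:

```latex
\frac{dP}{dt} = kP
\quad\Longrightarrow\quad
P(t) = P(0)\,e^{kt},
\qquad
P(t + t_2) = 2\,P(t)
\quad\Longleftrightarrow\quad
t_2 = \frac{\ln 2}{k}
```

Growth proportional to current power and doubling at regular intervals are thus the same statement; a fast takeoff simply means a small doubling time t2.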

This explosion timescale depends crucially on whether improving the AI requires merely new software (which can be created in a matter of seconds, minutes or hours) or new hardware (which might require months or years). In the Omega scenario, there was a significant hardware overhang, in Bostrom’s terminology: the Omegas had compensated for the low quality of their original software by vast amounts of hardware, which meant that Prometheus could perform a large number of quality doublings by improving its software alone. There was also a major content overhang in the form of much of the internet’s data; Prometheus 1.0 was still not smart enough to make use of most of it, but once Prometheus’ intelligence grew, the data it needed for further learning was already available without delay.
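
A toy calculation shows why an overhang translates into “free” doublings; the 1,000x overhang factor below is my illustrative assumption, not a figure from the book:

```python
import math

# Hardware overhang: hardware bought in excess of what the initial
# software strictly needs. Each software rewrite that doubles effective
# quality runs on hardware that is already in place, so the number of
# doublings available from software alone is log2 of the overhang factor.
overhang_factor = 1_000  # illustrative assumption
free_doublings = math.log2(overhang_factor)
print(f"~{free_doublings:.0f} quality doublings before new hardware is needed")
```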

The hardware and electricity costs of running the AI are crucial as well, since we won’t get an intelligence explosion until the cost of doing human-level work drops below human-level hourly wages. Suppose, for example, that the first human-level AGI can be efficiently run on the Amazon cloud at a cost of $1 million per hour of human-level work produced. This AI would have great novelty value and undoubtedly make headlines, but it wouldn’t undergo recursive self-improvement, because it would be much cheaper to keep using humans to improve it. Suppose that these humans gradually manage to cut the cost to $100,000/hour, $10,000/hour, $1,000/hour, $100/hour, $10/hour and finally $1/hour. By the time the cost of using the computer to reprogram itself finally drops far below the cost of paying human programmers to do the same, the humans can be laid off and the optimization power greatly expanded by buying cloud-computing time. This produces further cost cuts, allowing still more optimization power, and the intelligence explosion has begun.
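
The feedback loop in this paragraph is easy to sketch as a toy simulation. Every constant below (the starting cost, the wage, the two improvement rates) is an illustrative assumption rather than a figure from the book; the qualitative point is the regime change at the crossover:

```python
# Toy model of the cost-crossover argument (all constants are assumptions).
# Above the human wage, humans do the improving and costs fall slowly;
# below it, savings buy cloud compute, the AI improves itself, and
# costs collapse -- the intelligence explosion has begun.

cost = 1_000_000.0  # $/hour of human-level work from the first AGI
wage = 100.0        # $/hour for a human programmer
year = 0
while cost > 0.01:
    if cost > wage:
        cost *= 0.5    # human-driven progress: costs halve each year
        phase = "human-driven"
    else:
        cost *= 0.01   # machine-driven progress: 100x cheaper each year
        phase = "self-improvement"
    year += 1
    print(f"year {year:2d}: ${cost:>12,.2f}/hour ({phase})")
```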

This leaves us with our final key question: “Who or what will control the intelligence explosion and its aftermath, and what are their/its goals?” We’ll explore possible goals and outcomes in the next chapter and more deeply in chapter 7. To sort out the control issue, we need to know both how well an AI can be controlled, and how much an AI can control.

In terms of what will ultimately happen, you’ll currently find serious thinkers all over the map: some contend that the default outcome is doom, while others insist that an awesome outcome is virtually guaranteed. To me, however, this query is a trick question: it’s a mistake to passively ask “what will happen,” as if it were somehow predestined! If a technologically superior alien civilization arrived tomorrow, it would indeed be appropriate to wonder “what will happen” as their spaceships approached, because their power would probably be so far beyond ours that we’d have no influence over the outcome. If a technologically superior AI-fueled civilization arrives because we built it, on the other hand, we humans have great influence over the outcome—influence that we exerted when we created the AI. So we should instead ask: “What should happen? What future do we want?” In the next chapter, we’ll explore a wide spectrum of possible aftermaths of the current race toward AGI, and I’m quite curious how you’d rank them from best to worst. Only once we’ve thought hard about what sort of future we want will we be able to begin steering a course toward a desirable future. If we don’t know what we want, we’re unlikely to get it.

