Ray Kurzweil - How to Create a Mind - The Secret of Human Thought Revealed


Ray Kurzweil, the bold futurist and author of The New York Times bestseller The Singularity Is Near, is arguably today’s most influential technological visionary. A pioneering inventor and theorist, he has explored for decades how artificial intelligence can enrich and expand human capabilities.
Now, in his much-anticipated How to Create a Mind, he takes this exploration to the next step: reverse-engineering the brain to understand precisely how it works, then applying that knowledge to create vastly intelligent machines.
Drawing on the most recent neuroscience research, his own research and inventions in artificial intelligence, and compelling thought experiments, he describes his new theory of how the neocortex (the thinking part of the brain) works: as a self-organizing hierarchical system of pattern recognizers. Kurzweil shows how these insights will enable us to greatly extend the powers of our own mind and provides a roadmap for the creation of superintelligence—humankind's most exciting next venture. We are now at the dawn of an era of radical possibilities in which merging with our technology will enable us to effectively address the world’s grand challenges.
How to Create a Mind is certain to be one of the most widely discussed and debated science books in many years—a touchstone for any consideration of the path of human progress.


In our work in speech recognition, we found that it is necessary to encode this type of information in order to recognize speech patterns. For example, the words “step” and “steep” are very similar. Although the [e] phoneme in “step” and the [E] in “steep” are somewhat different vowel sounds (in that they have different resonant frequencies), it is not reliable to distinguish these two words based on these often confusable vowel sounds. It is much more reliable to consider the observation that the [e] in “step” is relatively brief compared with the [E] in “steep.”

We can encode this type of information with two numbers for each input: the expected size and the degree of variability of that size. In our “steep” example, [t] and [p] would both have a very short expected duration as well as a small expected variability (that is, we do not expect to hear long t’s and p’s). The [s] sound would have a short expected duration but a larger variability because it is possible to drag it out. The [E] sound has a long expected duration as well as a high degree of variability.
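The two-number encoding described above can be sketched in a few lines. This is a minimal illustration, not the actual system: the duration and variability values are hypothetical, and the Gaussian likelihood is one simple way to score how well an observed size matches the stored parameters.

```python
import math

# Hypothetical (expected_duration_ms, std_dev_ms) parameters per phoneme,
# illustrating the two-number encoding: expected size and its variability.
PHONEME_PARAMS = {
    "s": (100.0, 40.0),   # short, but can be dragged out (high variability)
    "t": (30.0, 8.0),     # very short, low variability
    "e": (70.0, 15.0),    # brief vowel, as in "step"
    "E": (180.0, 50.0),   # long vowel, as in "steep"; highly variable
    "p": (30.0, 8.0),
}

def duration_likelihood(phoneme, observed_ms):
    """Gaussian likelihood of an observed duration given stored parameters."""
    mean, std = PHONEME_PARAMS[phoneme]
    z = (observed_ms - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

# A 180 ms vowel fits "steep"'s [E] far better than "step"'s [e].
print(duration_likelihood("E", 180.0) > duration_likelihood("e", 180.0))  # True
```

Note that duration alone decides here, exactly as in the “step”/“steep” example: the resonant frequencies of the two vowels are too confusable to rely on, but their durations are not.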

In our speech examples, the “size” parameter refers to duration, but time is only one possible dimension. In our work in character recognition, we found that comparable spatial information was important in order to recognize printed letters (for example the dot over the letter “i” is expected to be much smaller than the portion under the dot). At much higher levels of abstraction, the neocortex will deal with patterns with all sorts of continuums, such as levels of attractiveness, irony, happiness, frustration, and myriad others. We can draw similarities across rather diverse continuums, as Darwin did when he related the physical size of geological canyons to the amount of differentiation among species.

In a biological brain, these parameters come from the brain’s own experience. We are not born with an innate knowledge of phonemes; indeed, different languages have very different sets of them. This implies that multiple examples of a pattern are encoded in the learned parameters of each pattern recognizer (as it requires multiple instances of a pattern to ascertain the expected distribution of magnitudes of the inputs to the pattern). In some AI systems, these types of parameters are hand-coded by experts (for example, linguists who can tell us the expected durations of different phonemes, as I articulated above). In my own work, we found that having an AI system discover these parameters on its own from training data (similar to the way the brain does it) was a superior approach. Sometimes we used a hybrid approach; that is, we primed the system with the intuition of human experts (for the initial settings of the parameters) and then had the AI system automatically refine these estimates using a learning process from real examples of speech.
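The hybrid approach can be sketched as a blend of an expert’s prior guess with observed statistics. The function name, the pseudo-observation weighting, and all the numbers below are illustrative assumptions, not the method actually used in the speech systems described.

```python
# Prime a parameter with an expert's guess, then refine it from examples.
def refine(expert_mean, expert_std, samples, prior_weight=5.0):
    """Blend an expert prior with sample statistics from training data."""
    n = len(samples)
    sample_mean = sum(samples) / n
    # Weighted blend: the prior acts like `prior_weight` pseudo-observations.
    mean = (prior_weight * expert_mean + n * sample_mean) / (prior_weight + n)
    var = sum((x - mean) ** 2 for x in samples) / n
    blended_var = (prior_weight * expert_std ** 2 + n * var) / (prior_weight + n)
    return mean, blended_var ** 0.5

# A linguist guesses the [E] vowel lasts ~150 ms; real speech says longer.
mean, std = refine(150.0, 30.0, [170, 185, 200, 175, 190])
print(round(mean, 1))  # 167.0 -- pulled from the guess toward the data
```

As more training examples accumulate, the data term dominates and the expert’s initial setting matters less, which is the point of the hybrid: a reasonable starting place, automatically corrected by real speech.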

What the pattern recognition module is doing is computing the probability (that is, the likelihood based on all of its previous experience) that the pattern that it is responsible for recognizing is in fact currently represented by its active inputs. Each particular input to the module is active if the corresponding lower-level pattern recognizer is firing (meaning that that lower-level pattern was recognized). Each input also encodes the observed size (on some appropriate dimension such as temporal duration or physical magnitude or some other continuum) so that the size can be compared (with the stored size parameters for each input) by the module in computing the overall probability of the pattern.
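A toy version of this computation might combine per-input evidence into one score. The weighted-product form, the penalty for a missing input, and every numeric value below are simplifying assumptions for illustration; they are not the actual recognition formula.

```python
import math

def input_likelihood(observed, expected, std):
    """How well one input's observed size matches its stored parameters."""
    z = (observed - expected) / std
    return math.exp(-0.5 * z * z)

def pattern_probability(inputs):
    """
    inputs: list of (active, observed_size, expected_size, std, importance).
    Combines per-input evidence into one score for the whole pattern.
    """
    log_score = 0.0
    for active, observed, expected, std, importance in inputs:
        if not active:
            # A missing important input counts strongly against the pattern.
            log_score += importance * math.log(0.05)
        else:
            like = max(input_likelihood(observed, expected, std), 1e-9)
            log_score += importance * math.log(like)
    return math.exp(log_score)

# "steep": [s][t][E][p], all firing with plausible durations (ms).
good = pattern_probability([
    (True, 110, 100, 40, 1.0),
    (True, 28, 30, 8, 1.0),
    (True, 175, 180, 50, 1.5),   # the long vowel carries extra importance
    (True, 32, 30, 8, 1.0),
])
# Same inputs, but the vowel is far too short (it sounds like "step").
bad = pattern_probability([
    (True, 110, 100, 40, 1.0),
    (True, 28, 30, 8, 1.0),
    (True, 70, 180, 50, 1.5),
    (True, 32, 30, 8, 1.0),
])
print(good > bad)  # True
```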

How does the brain (and how can an AI system) compute the overall probability that the pattern (that the module is responsible for recognizing) is present given (1) the inputs (each with an observed size), (2) the stored parameters on size (the expected size and the variability of size) for each input, and (3) the parameters of the importance of each input? In the 1980s and 1990s, I and others pioneered a mathematical method called hierarchical hidden Markov models for learning these parameters and then using them to recognize hierarchical patterns. We used this technique in the recognition of human speech as well as the understanding of natural language. I describe this approach further in chapter 7.
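Hierarchical hidden Markov models are beyond a short sketch, but their core computation is the ordinary HMM “forward” pass: the probability of an observation sequence, summed over all hidden state paths. The hierarchical version nests such models inside one another. All the states, observations, and probabilities below are toy values for illustration.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: P(observation sequence) under a simple HMM."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

states = ["short_vowel", "long_vowel"]
start_p = {"short_vowel": 0.5, "long_vowel": 0.5}
trans_p = {
    "short_vowel": {"short_vowel": 0.8, "long_vowel": 0.2},
    "long_vowel": {"short_vowel": 0.2, "long_vowel": 0.8},
}
# Emission: does an acoustic frame look "brief" or "sustained"?
emit_p = {
    "short_vowel": {"brief": 0.9, "sustained": 0.1},
    "long_vowel": {"brief": 0.2, "sustained": 0.8},
}

# A run of sustained frames is better explained by a long vowel.
p = forward(["sustained", "sustained", "sustained"], states, start_p, trans_p, emit_p)
print(round(p, 4))  # 0.1771
```

In a real speech system the transition and emission probabilities are exactly the kind of parameters learned from training data, as described above, rather than being set by hand.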

Getting back to the flow of recognition from one level of pattern recognizers to the next, in the above example we see the information flow up the conceptual hierarchy from basic letter features to letters to words. Recognitions will continue to flow up from there to phrases and then more complex language structures. If we go up several dozen more levels, we get to higher-level concepts like irony and envy. Even though every pattern recognizer is working simultaneously, it does take time for recognitions to move upward in this conceptual hierarchy. Traversing each level takes between a few hundredths and a few tenths of a second. Experiments have shown that a moderately high-level pattern such as a face takes at least a tenth of a second. It can take as long as an entire second if there are significant distortions. If the brain were sequential (like conventional computers) and were performing each pattern recognition in sequence, it would have to consider every possible low-level pattern before moving on to the next level. Thus it would take many millions of cycles just to go through each level. That is exactly what happens when we simulate these processes on a computer. Keep in mind, however, that computers process millions of times faster than our biological circuits.

A very important point to note here is that information flows down the conceptual hierarchy as well as up. If anything, this downward flow is even more significant. If, for example, we are reading from left to right and have already seen and recognized the letters “A,” “P,” “P,” and “L,” the “APPLE” recognizer will predict that it is likely to see an “E” in the next position. It will send a signal down to the “E” recognizer saying, in effect, “Please be aware that there is a high likelihood that you will see your ‘E’ pattern very soon, so be on the lookout for it.” The “E” recognizer then adjusts its threshold such that it is more likely to recognize an “E.” So if an image appears next that is vaguely like an “E,” but is perhaps smudged such that it would not have been recognized as an “E” under “normal” circumstances, the “E” recognizer may nonetheless indicate that it has indeed seen an “E,” since it was expected.
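The top-down priming in the “APPLE” example can be sketched as a threshold adjustment. The class, the threshold values, and the match-score scale are all illustrative assumptions; the point is only the mechanism: expectation lowers the bar for recognition.

```python
# Top-down priming: a higher-level recognizer ("APPLE"), having matched
# "A-P-P-L", tells the "E" recognizer to expect its pattern, which lowers
# that recognizer's firing threshold.
class LetterRecognizer:
    def __init__(self, letter, threshold=0.7):
        self.letter = letter
        self.threshold = threshold

    def expect(self, boost=0.25):
        """A higher-level recognizer predicts this pattern is coming."""
        self.threshold -= boost

    def recognize(self, match_score):
        """match_score in [0, 1]: how closely the image resembles the letter."""
        return match_score >= self.threshold

e = LetterRecognizer("E")
smudged = 0.55                  # too blurry to pass under normal circumstances
print(e.recognize(smudged))     # False: below the normal threshold
e.expect()                      # "APPLE" signals: an 'E' is likely next
print(e.recognize(smudged))     # True: expectation lowered the bar
```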

The neocortex is, therefore, predicting what it expects to encounter. Envisaging the future is one of the primary reasons we have a neocortex. At the highest conceptual level, we are continually making predictions—who is going to walk through the door next, what someone is likely to say next, what we expect to see when we turn the corner, the likely results of our own actions, and so on. These predictions are constantly occurring at every level of the neocortex hierarchy. We often misrecognize people and things and words because our threshold for confirming an expected pattern is too low.

In addition to positive signals, there are also negative or inhibitory signals which indicate that a certain pattern is less likely to exist. These can come from lower conceptual levels (for example, the recognition of a mustache will inhibit the likelihood that a person I see in the checkout line is my wife), or from a higher level (for example, I know that my wife is on a trip, so the person in the checkout line can’t be she). When a pattern recognizer receives an inhibitory signal, it raises the recognition threshold, but it is still possible for the pattern to fire (so if the person in line really is her, I may still recognize her).
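Inhibition is the mirror image of that priming, and can be sketched the same way: an inhibitory signal raises the threshold without making firing impossible. Again, the class and all numeric values are illustrative assumptions.

```python
# Inhibitory signals raise the recognition threshold; a sufficiently
# strong match can still fire despite the inhibition.
class PatternRecognizer:
    def __init__(self, name, threshold=0.6):
        self.name = name
        self.threshold = threshold

    def inhibit(self, penalty=0.2):
        """An inhibitory signal (from a higher or lower level) raises the bar."""
        self.threshold += penalty

    def recognize(self, match_score):
        return match_score >= self.threshold

wife = PatternRecognizer("my wife")
wife.inhibit()                # higher level reports: "she is away on a trip"
print(wife.recognize(0.7))    # False: 0.7 no longer clears the raised bar
print(wife.recognize(0.9))    # True: a very strong match still fires
```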

The Nature of the Data Flowing into a Neocortical Pattern Recognizer

