Daniel Gardner - The Science of Fear

The media, too, know the value of fear. The media are in the business of profit, and the information marketplace is growing ever more crowded, which means the competition for eyes and ears is steadily intensifying. Inevitably and increasingly, the media turn to fear to protect shrinking market shares, because a warning of mortal peril—“A story you can’t afford to miss!”—is an excellent way to get someone’s attention.

But this is far from a complete explanation. What about the serious risks we don’t pay much attention to? There’s often money to be made dealing with them, but still we are unmoved. And the media, to be fair, occasionally throw cold water on panics and unreasonable fears, while corporations, activists, and politicians sometimes find it in their interest to play down genuine concerns—as the British government tried and failed to do in the early 1990s, when there was growing evidence linking BSE (mad cow disease) in cattle to a variant of Creutzfeldt-Jakob disease in humans. The link was real. The government insisted it wasn’t. A cabinet minister even went so far as to hold a press conference at which he fed his four-year-old daughter a hamburger made of British beef.

Clearly, there’s much more than self-interest and marketing involved. There’s culture, for one. Whether we fear this risk or that—or dismiss another as no cause for concern—often depends on our cultural values. Marijuana is a perfect example. Since the days of Depression-era black jazz musicians, pot has been associated with a hipster counterculture. Today, the young backpacker wearing a T-shirt with the famous multi-leaf symbol on it isn’t expressing his love of horticulture—it’s a statement of cultural identity. Someone like that will have a very strong inclination to dismiss any claim that marijuana may cause harm as nothing more than old-fashioned reefer madness. The same is true in reverse: For social conservatives, that cluster of leaves is a symbol of the anarchic liberalism they despise, and they will consider any evidence that marijuana causes harm to be vindication—while downplaying or simply ignoring evidence to the contrary.

Psychologists call this confirmation bias. We all do it. Once a belief is in place, we screen what we see and hear in a biased way that ensures our beliefs are “proven” correct. Psychologists have also discovered that people are vulnerable to something called group polarization—which means that when people who share beliefs get together in groups, they become more convinced that their beliefs are right and they become more extreme in their views. Put confirmation bias, group polarization, and culture together, and we start to understand why people can come to completely different views about which risks are frightening and which aren’t worth a second thought.

But that’s not the end of psychology’s role in understanding risk. Far from it. The real starting point for understanding why we worry and why we don’t is the individual human brain.

Four decades ago, scientists knew little about how humans perceived risks, how we judged which risks to fear and which to ignore, and how we decided what to do about them. But in the 1960s, pioneers like Paul Slovic, today a professor at the University of Oregon, set to work. They made startling discoveries, and over the ensuing decades, a new body of science grew. The implications of this new science were enormous for a whole range of different fields. In 2002, one of the major figures in this research, Daniel Kahneman, won the Nobel Prize in economics, even though Kahneman is a psychologist who never took so much as a single class in economics.

What the psychologists discovered is that a very old idea is right. Every human brain has not one but two systems of thought. They called them System One and System Two. The ancient Greeks—who arrived at this conception of humanity a little earlier than scientists—personified the two systems in the form of the gods Dionysus and Apollo. We know them better as Feeling and Reason.

System Two is Reason. It works slowly. It examines evidence. It calculates and considers. When Reason makes a decision, it’s easy to put into words and explain.

System One—Feeling—is entirely different. Unlike Reason, it works without our conscious awareness and is as fast as lightning. Feeling is the source of the snap judgments that we experience as a hunch or an intuition or as emotions like unease, worry, or fear. A decision that comes from Feeling is hard or even impossible to explain in words. You don’t know why you feel the way you do; you just do.

System One works as quickly as it does because it uses built-in rules of thumb and automatic settings. Say you’re about to take a walk at midday in Los Angeles. You may think, “What’s the risk? Am I safe?” Instantly, your brain will seek to retrieve examples of other people being attacked, robbed, or murdered in similar circumstances. If it comes up with one or more examples easily, System One will sound the alarm: The risk is high! Be afraid! And you will be. You won’t know why, really, because System One’s operations are unconscious. You’ll just have an uneasy feeling that taking a walk is dangerous—a feeling you would have trouble explaining to someone else.

What System One did is apply a simple rule of thumb: If examples of something can be recalled easily, that thing must be common. Psychologists call this the “availability heuristic.”
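
To see how the Example Rule can go wrong, here is a minimal sketch in Python (purely illustrative, not from the book: the event names, rates, and “vividness” weights are all invented) showing how a recall-based risk estimate can part ways with actual frequencies:

```python
# Toy model of the Example Rule (availability heuristic).
# Illustrative only: the events, rates, and vividness weights are invented.

# (actual events per 100,000 people per year, media vividness multiplier)
EVENTS = {
    "bizarre violent crime": (0.5, 2000.0),  # rare, but saturates the news
    "fatal car crash":       (12.0, 10.0),   # far more common, rarely a headline
}

def ease_of_recall(frequency, vividness):
    """System One's input: how readily examples come to mind.
    Recall tracks exposure (frequency * vividness), not frequency itself."""
    return frequency * vividness

for name, (freq, vivid) in EVENTS.items():
    score = ease_of_recall(freq, vivid)
    print(f"{name}: actual rate {freq}/100k, recall score {score:,.0f}")

# bizarre violent crime: actual rate 0.5/100k, recall score 1,000
# fatal car crash: actual rate 12.0/100k, recall score 120
#
# The rule "easily recalled => common" ranks the rare crime as the bigger
# risk, even though it is 24 times less frequent than the car crash.
```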

Obviously, System One is both brilliant and flawed. It is brilliant because the simple rules of thumb System One uses allow it to assess a situation and render a judgment in an instant—which is exactly what you need when you see a shadow move at the back of an alley and you don’t have the latest crime statistics handy. But System One is also flawed because the same rules of thumb can generate irrational conclusions.

You may have just watched the evening news and seen a shocking report about someone like you being attacked in a quiet neighborhood at midday in Dallas. That crime may have been in another city in another state. It may have been a very unusual, even bizarre, crime—the very qualities that got it on the evening news across the country. And it may be that if you think about this a little—if you get System Two involved—you would agree that this example really doesn’t tell you much about your chance of being attacked, which, according to the statistics, is incredibly tiny. But none of that matters. All that System One knows is that the example was recalled easily. Based on that alone, it concludes the risk is high and it triggers the alarm—and you feel afraid when you really shouldn’t.

Scientists have discovered that this Example Rule is only one of many rules and automatic settings used by System One. These devices often function smoothly and efficiently. But sometimes they produce results that make no sense. Consider the terms “1 percent” and “1 in 100.” They mean exactly the same thing. But as Paul Slovic discovered, System One will lead people to judge a risk to be much higher if they are told it is “1 in 100” than if it is described as “1 percent.”
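
The two phrasings encode the identical quantity; only the presentation differs. A few lines of Python make the equivalence explicit (a minimal sketch of mine, not from the book; the helper functions are hypothetical):

```python
def as_percent(p):
    # Render a probability in "X percent" form.
    return f"{p * 100:g} percent"

def as_frequency(p):
    # Render the same probability in "1 in N" form.
    return f"1 in {round(1 / p):,}"

for p in (0.01, 0.001, 0.5):
    print(as_percent(p), "=", as_frequency(p))

# 1 percent = 1 in 100
# 0.1 percent = 1 in 1,000
# 50 percent = 1 in 2
```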

The problem is that System One wasn’t created for the world we live in. For almost the entire history of our species and those that came before, our ancestors lived in small nomadic bands that survived by hunting animals and gathering plants. It was in that long, long-ago era that evolution shaped and molded System One. Having been forged by that environment, System One works quite well in it.

But today, very few human beings spend their days stalking antelope and avoiding lions. We live in a world transformed by technology—a world in which risks are measured in microns and parts-per-million and we are bombarded with images and information from all over the planet.
