This distorting effect is one of the challenges posed by personalized filters. Like a lens, the filter bubble invisibly transforms the world we experience by controlling what we see and don’t see. It interferes with the interplay between our mental processes and our external environment. In some ways, it can act like a magnifying glass, helpfully expanding our view of a niche area of knowledge. But at the same time, personalized filters limit what we are exposed to and therefore affect the way we think and learn. They can upset the delicate cognitive balance that helps us make good decisions and come up with new ideas. And because creativity is also a result of this interplay between mind and environment, they can get in the way of innovation. If we want to know what the world really looks like, we have to understand how filters shape and skew our view of it.
It’s become fashionable to pick on the human brain. We’re “predictably irrational,” in the words of behavioral economist Dan Ariely’s bestselling book. Stumbling on Happiness author Dan Gilbert presents volumes of data to demonstrate that we’re terrible at figuring out what makes us happy. Like audience members at a magic show, we’re easily conned, manipulated, and misdirected.
All of this is true. But as Being Wrong author Kathryn Schulz points out, it’s only one part of the story. Human beings may be a walking bundle of miscalculations, contradictions, and irrationalities, but we’re built that way for a reason: The same cognitive processes that lead us down the road to error and tragedy are the root of our intelligence and our ability to cope with and survive in a changing world. We pay attention to our mental processes when they fail, but that distracts us from the fact that most of the time, our brains do amazingly well.
The mechanism for this is a cognitive balancing act. Without our ever thinking about it, our brains tread a tightrope between learning too much from the past and incorporating too much new information from the present. The ability to walk this line—to adjust to the demands of different environments and modalities—is one of human cognition’s most astonishing traits. Artificial intelligence has yet to come anywhere close.
In two important ways, personalized filters can upset this cognitive balance between strengthening our existing ideas and acquiring new ones. First, the filter bubble surrounds us with ideas with which we’re already familiar (and already agree), making us overconfident in our mental frameworks. Second, it removes from our environment some of the key prompts that make us want to learn. To understand how, we have to look at what’s being balanced in the first place, starting with how we acquire and store information.
Filtering isn’t a new phenomenon. It’s been around for millions of years—indeed, it was around before humans even existed. Even for animals with rudimentary senses, nearly all incoming information is meaningless, but a tiny sliver is important and sometimes life-preserving. One of the primary functions of the brain is to identify that sliver and decide what to do about it.
In humans, one of the first steps is to massively compress the data. As Nassim Nicholas Taleb says, “Information wants to be reduced,” and every second we reduce a lot of it—compressing most of what our eyes see and ears hear into concepts that capture the gist. Psychologists call these concepts schemata (one of them is a schema), and they’re beginning to be able to identify particular neurons or sets of neurons that correlate with each one—firing, for example, when you recognize a particular object, like a chair. Schemata ensure that we aren’t constantly seeing the world anew: Once we’ve identified something as a chair, we know how to use it.
We don’t do this only with objects; we do it with ideas as well. In a study of how people read the news, researcher Doris Graber found that stories were relatively quickly converted into schemata for the purposes of memorization. “Details that do not seem essential at the time and much of the context of a story are routinely pared,” she writes in her book Processing the News. “Such leveling and sharpening involves condensation of all features of a story.” Viewers of a news segment on a child killed by a stray bullet might remember the child’s appearance and tragic background, but not the reporting that overall crime rates are down.
Schemata can actually get in the way of our ability to directly observe what’s happening. In 1981, researcher Claudia Cohen asked subjects to watch a video of a woman celebrating her birthday. Some were told that she was a waitress, while others were told she was a librarian. Later, the groups were asked to reconstruct the scene. The people who had been told she was a waitress remembered her having a beer; those told she was a librarian remembered her wearing glasses and listening to classical music (the video showed her doing all three). The information that didn’t jibe with her profession was more often forgotten. In some cases, schemata are so powerful they can even lead to information being fabricated: Doris Graber, the news researcher, found that up to a third of her forty-eight subjects had added details to their memories of twelve television news stories shown to them, based on the schemata those stories activated.
Once we’ve acquired schemata, we’re predisposed to strengthen them. Psychological researchers call this confirmation bias—a tendency to believe things that reinforce our existing views, to see what we want to see.
One of the first and best studies of confirmation bias comes from the end of the college football season in 1951—Princeton versus Dartmouth. Princeton hadn’t lost a game all season. Its quarterback, Dick Kazmaier, had just been on the cover of Time . Things started off pretty rough, but after Kazmaier was sent off the field in the second quarter with a broken nose, the game got really dirty. In the ensuing melee, a Dartmouth player ended up with a broken leg.
Princeton won, but afterward there were recriminations in both colleges’ papers. Princetonians blamed Dartmouth for starting the low blows; Dartmouth thought Princeton had an ax to grind once its quarterback got hurt. Luckily, there were some psychologists on hand to make sense of the conflicting versions of events.
They asked groups of students from both schools who hadn’t seen the game to watch a film of it and count how many infractions each side made. Princeton students, on average, saw 9.8 infractions by Dartmouth; Dartmouth students thought their team was guilty of only 4.3. One Dartmouth alumnus who received a copy of the film complained that his version must be missing parts—he didn’t see any of the roughhousing he’d heard about. Boosters of each school saw what they wanted to see, not what was actually on the film.
Philip Tetlock, a political scientist, found similar results when he invited a variety of academics and pundits into his office and asked them to make predictions about the future in their areas of expertise. Would the Soviet Union fall in the next ten years? In what year would the U.S. economy start growing again? For ten years, Tetlock kept asking these questions. He asked them not only of experts, but also of folks he’d brought in off the street—plumbers and schoolteachers with no special expertise in politics or history. When he finally compiled the results, even he was surprised. It wasn’t just that the normal folks’ predictions beat the experts’. The experts’ predictions weren’t even close.
Why? Experts have a lot invested in the theories they’ve developed to explain the world. And after a few years of working on them, they tend to see them everywhere. For example, bullish stock analysts banking on rosy financial scenarios were unable to identify the housing bubble that nearly bankrupted the economy—even though the trends that drove it were pretty clear to anyone looking. It’s not just that experts are vulnerable to confirmation bias—it’s that they’re especially vulnerable to it.