For one thing, there’s the problem of the friendly world. Communications researcher George Gerbner was one of the first theorists to look into how media affect our political beliefs, and in the mid-1970s, he spent a lot of time thinking about shows like Starsky and Hutch. It was a pretty silly program, filled with the shared clichés of seventies cop TV—the bushy moustaches, the twanging soundtracks, the simplistic good-versus-evil plots. And it was hardly the only one—for every Charlie’s Angels or Hawaii Five-O that earned a place in cultural memory, there are dozens of shows, like The Rockford Files, Get Christie Love, and Adam-12, that are unlikely to be resuscitated for ironic twenty-first-century remakes.
But Gerbner, a World War II veteran–turned–communications theorist who became dean of the Annenberg School for Communication, took these shows seriously. Starting in 1969, he began a systematic study of the way TV programming affects how we think about the world. As it turned out, the Starsky and Hutch effect was significant. When you asked TV watchers to estimate the percentage of the adult workforce made up of cops, they vastly overestimated it relative to non–TV watchers with the same education and demographic background. Even more troubling, kids who saw a lot of TV violence were much more likely to be worried about real-world violence.
Gerbner called this the mean world syndrome: If you grow up in a home where there’s more than, say, three hours of television per day, for all practical purposes, you live in a meaner world—and act accordingly—than your next-door neighbor who lives in the same place but watches less television. “You know, who tells the stories of a culture really governs human behavior,” Gerbner later said.
Gerbner died in 2005, but he lived long enough to see the Internet begin to break television’s stranglehold on cultural storytelling. It must have been a relief: Although our online cultural storytellers are still quite consolidated, the Internet at least offers more choice. If you want to get your local news from a blogger rather than a local TV station that trumpets crime rates to get ratings, you can.
But if the mean world syndrome poses less of a risk these days, there’s a new problem on the horizon: We may now face what persuasion-profiling theorist Dean Eckles calls a friendly world syndrome, in which some of the biggest and most important problems fail to reach our view at all.
While the mean world on television arises from a cynical “if it bleeds, it leads” approach to programming, the friendly world generated by algorithmic filtering may not be as intentional. According to Facebook engineer Andrew Bosworth, the team that developed the Like button originally considered a number of options—from stars to a thumbs-up sign (which was ruled out because the gesture is obscene in Iran and Thailand). For a month in the summer of 2007, the button was known as the Awesome button. Eventually, however, the Facebook team gravitated toward Like, which is more universal.
That Facebook chose Like instead of, say, Important is a small design decision with far-reaching consequences: The stories that get the most attention on Facebook are the stories that get the most Likes, and the stories that get the most Likes are, well, more likable.
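The dynamic is simple enough to sketch in a few lines. The following is a minimal illustration, not Facebook’s actual ranking system: it assumes a feed ordered purely by Like count, with invented story titles and numbers, to show how likable stories crowd out important but unlikable ones.

```python
# Toy model of Like-driven ranking. All stories and counts are invented.
def rank_feed(stories):
    """Return stories sorted by Like count, most-liked first."""
    return sorted(stories, key=lambda s: s["likes"], reverse=True)

stories = [
    {"title": "Adorable rescue puppy finds a home", "likes": 4200},
    {"title": "City budget shortfall threatens shelters", "likes": 310},
    {"title": "Friend's vacation photos", "likes": 1800},
]

for story in rank_feed(stories):
    print(story["likes"], story["title"])
```

Under a sort like this, the shelter-budget story sinks to the bottom every time, no matter how consequential it is: nothing in the scoring function knows what “important” means.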
Facebook is hardly the only filtering service that will tend toward an antiseptically friendly world. As Eckles pointed out to me, even Twitter, which has a reputation for putting filtering in the hands of its users, has this tendency. Twitter users see most of the tweets of the folks they follow, but if my friend is having an exchange with someone I don’t follow, that conversation doesn’t show up in my stream. The intent is entirely innocuous: Twitter is trying not to inundate me with conversations I’m not interested in. But the result is that conversations between my friends (who will tend to be like me) are overrepresented, while conversations that could introduce me to new ideas are obscured.
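The rule Eckles describes can be stated precisely. Here is a sketch under the assumption that the policy is simply “show a reply only if the viewer follows both participants”; the function and the names are illustrative, not Twitter’s actual code.

```python
def reply_visible(viewer_follows, author, recipient):
    """A reply appears in the viewer's timeline only if the viewer
    follows both the author and the person being replied to."""
    return author in viewer_follows and recipient in viewer_follows

# Suppose I follow alice and bob, but not carol.
my_follows = {"alice", "bob"}

print(reply_visible(my_follows, "alice", "bob"))    # exchange between two friends: shown
print(reply_visible(my_follows, "alice", "carol"))  # exchange with an outsider: hidden
```

The asymmetry falls straight out of the rule: conversations inside my social circle always pass the test, while any exchange reaching outside it fails, so the outside voices never make it to my screen.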
Of course, friendly doesn’t describe all of the stories that pierce the filter bubble and shape our sense of the political world. As a progressive political news junkie, I get plenty of news about Sarah Palin and Glenn Beck. The valence of this news, however, is very predictable: People are posting it to signal their dismay with Beck’s and Palin’s rhetoric and to build a sense of solidarity with their friends, who presumably feel the same way. It’s rare that my assumptions about the world are shaken by what I see in my news feed.
Emotional stories are the ones that generally thrive in the filter bubble. The Wharton School study on the New York Times’s Most Forwarded List, discussed in chapter 2, found that stories that aroused strong feelings—awe, anxiety, anger, happiness—were much more likely to be shared. If television gives us a “mean world,” filter bubbles give us an “emotional world.”
One of the troubling side effects of the friendly world syndrome is that some important public problems will disappear. Few people seek out information about homelessness, or share it, for that matter. In general, dry, complex, slow-moving problems—a lot of the truly significant issues—won’t make the cut. And while we used to rely on human editors to spotlight these crucial problems, their influence is now waning.
Even advertising isn’t necessarily a foolproof way of alerting people to public problems, as the environmental group Oceana found out. In 2004, Oceana was running a campaign urging Royal Caribbean to stop dumping its raw sewage into the sea; as part of the campaign, it took out a Google ad that said “Help us protect the world’s oceans. Join the fight!” After two days, Google pulled the ads, citing “language advocating against the cruise line industry” that was in violation of its general guidelines about taste. Apparently, advertisers that implicated corporations in public issues weren’t welcome.
The filter bubble will often block out the things in our society that are important but complex or unpleasant. It renders them invisible. And it’s not just the issues that disappear. Increasingly, it’s the whole political process.
When George W. Bush came out of the 2000 election with far fewer votes than Karl Rove expected, Rove set in motion a series of experiments in microtargeted media in Georgia—looking at a wide range of consumer data (“Do you prefer beer or wine?”) to try to predict voting behavior and identify who was persuadable and who could be easily motivated to get to the polls. Though the findings are still secret, legend has it that the methods Rove discovered were at the heart of the GOP’s successful get-out-the-vote strategy in 2002 and 2004.
On the left, Catalist, a firm staffed by former Amazon engineers, has built a database of hundreds of millions of voter profiles. For a fee, organizing and activist groups (including MoveOn) query it to help determine which doors to knock on and to whom to run ads. And that’s just the start. In a memo for fellow progressives, Mark Steitz, one of the primary Democratic data gurus, recently wrote that “targeting too often returns to a bombing metaphor—dropping messages from planes. Yet the best data tools help build relationships based on observed contacts with people. Someone at the door finds out someone is interested in education; we get back to that person and others like him or her with more information. Amazon’s recommendation engine is the direction we need to head.” The trend is clear: We’re moving from swing states to swing people.
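The “swing people” logic amounts to a scoring pass over a voter file. The sketch below is purely illustrative: the field names, weights, and the scoring formula are all invented for this example, and real models like Catalist’s are far richer. It shows the core idea, though: rank individuals, not states, by how likely they are to both turn out and be persuadable.

```python
def persuadability_score(voter):
    """Toy score: frequent voters with mixed past leanings rank highest.
    turnout_rate is 0..1; partisanship runs from -1 (loyal left)
    to +1 (loyal right), with 0 meaning genuinely undecided."""
    return 0.5 * voter["turnout_rate"] + 0.5 * (1 - abs(voter["partisanship"]))

voters = [
    {"name": "A", "turnout_rate": 0.9, "partisanship": 0.1},    # swing person
    {"name": "B", "turnout_rate": 0.9, "partisanship": -0.95},  # loyal base
    {"name": "C", "turnout_rate": 0.2, "partisanship": 0.0},    # undecided, rarely votes
]

# Campaigns would knock on doors (or buy ads) in this order.
targets = sorted(voters, key=persuadability_score, reverse=True)
print([v["name"] for v in targets])
```

Voter A, who reliably votes but has no firm allegiance, tops the list; the loyal base voter B and the disengaged voter C get less attention, because spending on either changes fewer outcomes per dollar.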
Consider this scenario: It’s 2016, and the race is on for the presidency of the United States. Or is it?
It depends on who you are, really. If the data says you vote frequently and that you may have been a swing voter in the past, the race is a maelstrom. You’re besieged with ads, calls, and invitations from friends. If you vote intermittently, you get a lot of encouragement to get out to the polls.