At its best, media help mitigate present bias, mixing “should” stories with “want” stories and encouraging us to dig into the difficult but rewarding work of understanding complex problems. But the filter bubble tends to do the opposite: Because it’s our present self that’s doing all the clicking, the set of preferences it reflects is necessarily more “want” than “should.”
The one-identity problem isn’t a fundamental flaw. It’s more of a bug: Because Zuckerberg thinks you have one identity and you don’t, Facebook will do a worse job of personalizing your information environment. As John Battelle told me, “We’re so far away from the nuances of what it means to be human, as reflected in the nuances of the technology.” Given enough data and enough programmers, the context problem is solvable—and according to personalization engineer Jonathan McPhie, Google is working on it. We’ve seen the pendulum swing from the anonymity of the early Internet to the one-identity view currently in vogue; the future may look like something in between.
But the one-identity problem illustrates one of the dangers of turning over your most personal details to companies who have a skewed view of what identity is. Maintaining separate identity zones is a ritual that helps us deal with the demands of different roles and communities. And something’s lost when, at the end of the day, everything inside your filter bubble looks roughly the same. Your bacchanalian self comes knocking at work; your work anxieties plague you on a night out.
And when we’re aware that everything we do enters a permanent, pervasive online record, another problem emerges: The knowledge that what we do affects what we see and how companies see us can create a chilling effect. Genetic privacy expert Mark Rothstein describes how lax regulations around genetic data can actually reduce the number of people willing to be tested for certain diseases: If you might be discriminated against or denied insurance for having a gene linked to Parkinson’s disease, it’s not unreasonable just to skip the test and the “toxic knowledge” that might result.
In the same way, when our online actions are tallied and added to a record that companies use to make decisions, we might decide to be more cautious in our surfing. If we knew (or even suspected, for that matter) that purchasers of 101 Ways to Fix Your Credit Score tend to get offered lower-premium credit cards, we’d avoid buying the book. “If we thought that our every word and deed were public,” writes law professor Charles Fried, “fear of disapproval or more tangible retaliation might keep us from doing or saying things which we would do or say could we be sure of keeping them to ourselves.” As Google expert Siva Vaidhyanathan points out, “F. Scott Fitzgerald’s enigmatic Jay Gatsby could not exist today. The digital ghost of Jay Gatz would follow him everywhere.”
In theory, the one-identity, context-blind problem isn’t impossible to fix. Personalizers will undoubtedly get better at sensing context. They might even be able to better balance long-term and short-term interests. But when they do—when they are able to accurately gauge the workings of your psyche—things get even weirder.
Targeting Your Weak Spots
The logic of the filter bubble today is still fairly rudimentary: People who bought the Iron Man DVD are likely to buy Iron Man II; people who enjoy cookbooks will probably be interested in cookware. But for Dean Eckles, a doctoral student at Stanford and an adviser to Facebook, these simple recommendations are just the beginning. Eckles is interested in means, not ends: He cares less about what types of products you like than which kinds of arguments might cause you to choose one over another.
Eckles noticed that when buying products—say, a digital camera—different people respond to different pitches. Some people feel comforted by the fact that an expert or product review site will vouch for the camera. Others prefer to go with the product that’s most popular, or a money-saving deal, or a brand that they know and trust. Some people prefer what Eckles calls “high cognition” arguments—smart, subtle points that require some thinking to get. Others respond better to being hit over the head with a simple message.
And while most of us have preferred styles of argument and validation, there are also types of arguments that really turn us off. Some people rush for a deal; others think that the deal means the merchandise is subpar. Just by eliminating the persuasion styles that rub people the wrong way, Eckles found he could increase the effectiveness of marketing materials by 30 to 40 percent.
While it’s hard to “jump categories” in products—what clothing you prefer is only slightly related to what books you enjoy—“persuasion profiling” suggests that the kinds of arguments you respond to are highly transferable from one domain to another. A person who responds to a “get 20% off if you buy NOW” deal for a trip to Bermuda is much more likely than someone who doesn’t to respond to a similar deal for, say, a new laptop.
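To make the mechanics concrete, here is a minimal sketch of that bookkeeping in Python. Everything in it is hypothetical: the pitch styles, the PersuasionProfile class, and the cutoff are invented for illustration and are not Eckles’s actual model. The idea is simply to record how a person responds to each style of appeal, drop the styles that demonstrably turn them off, and reuse the same profile whether the product is a Bermuda trip or a laptop.

```python
from collections import defaultdict

# Hypothetical pitch styles, loosely following the categories described above.
PITCH_STYLES = ["expert_review", "popularity", "discount", "trusted_brand"]

class PersuasionProfile:
    def __init__(self):
        # Shown/converted counts per pitch style, pooled across all product domains.
        self.shown = defaultdict(int)
        self.converted = defaultdict(int)

    def record(self, style, converted):
        self.shown[style] += 1
        if converted:
            self.converted[style] += 1

    def response_rate(self, style):
        return self.converted[style] / self.shown[style] if self.shown[style] else 0.0

    def best_pitch(self, min_rate=0.05):
        # Exclude styles this person has demonstrably ignored, then pick the
        # style with the highest observed response rate.
        usable = [s for s in PITCH_STYLES
                  if self.shown[s] == 0 or self.response_rate(s) >= min_rate]
        return max(usable, key=self.response_rate, default=None)

# Because responses are pooled across domains, a shopper who keeps clicking
# "20% off if you buy NOW" offers for travel will also see the discount
# framing for laptops.
profile = PersuasionProfile()
profile.record("discount", converted=True)
profile.record("expert_review", converted=False)
print(profile.best_pitch())  # -> discount
```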
If Eckles is right—and research so far appears to be validating his theory—your “persuasion profile” would have a pretty significant financial value. It’s one thing to know how to pitch products to you in a specific domain; it’s another to be able to improve the hit rate anywhere you go. And once a company like Amazon has figured out your profile by offering you different kinds of deals over time and seeing which ones you responded to, there’s no reason it couldn’t then sell that information to other companies. (The field is so new that it’s not clear if there’s a correlation between persuasion styles and demographic traits, but obviously that could be a shortcut as well.)
There’s plenty of good that could emerge from persuasion profiling, Eckles believes. He points to DirectLife, a wearable coaching device by Philips that figures out which arguments get people eating more healthily and exercising more regularly. But he told me he’s troubled by some of the possibilities. Knowing what kinds of appeals specific people respond to gives you power to manipulate them on an individual basis.
With new methods of “sentiment analysis,” it’s now possible to guess what mood someone is in. People use substantially more positive words when they’re feeling up; by analyzing enough of your text messages, Facebook posts, and e-mails, it’s possible to tell good days from bad ones, sober messages from drunk ones (lots of typos, for a start). At best, this can be used to provide content that’s suited to your mood: On an awful day in the near future, Pandora might know to preload Pretty Hate Machine for you when you arrive. But it can also be used to take advantage of your psychology.
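The word-counting approach behind that kind of mood guessing can be illustrated with a toy example. The word lists and the function below are invented for the sketch; real sentiment-analysis systems rely on much larger lexicons or trained models, but the basic signal is the same: proportionally more positive words on good days, more negative ones on bad days.

```python
# A toy mood guesser in the spirit of the word-counting approach above.
# The word lists are invented; real systems use far larger lexicons or
# trained models, and would also weigh typos, timing, and punctuation.

POSITIVE = {"great", "happy", "love", "awesome", "excited"}
NEGATIVE = {"awful", "tired", "hate", "terrible", "stressed"}

def guess_mood(messages):
    words = [w.strip(".,!?").lower() for m in messages for w in m.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "up"
    if neg > pos:
        return "down"
    return "neutral"

print(guess_mood(["Awful day, so tired.", "I hate this commute."]))  # -> down
```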
Consider the implications, for example, of knowing that particular customers compulsively buy things when stressed or when they’re feeling bad about themselves, or even when they’re a bit tipsy. If persuasion profiling makes it possible for a coaching device to shout “you can do it” to people who like positive reinforcement, in theory it could also enable politicians to make appeals based on each voter’s targeted fears and weak spots.
Infomercials aren’t shown in the middle of the night only because airtime then is cheap. In the wee hours, most people are especially suggestible. They’ll spring for the slicer-dicer that they’d never purchase in the light of day. But the three A.M. rule is a rough one—presumably, there are times in all of our daily lives when we’re especially inclined to purchase whatever’s put in front of us. The same data that enables personalized content also lets marketers find and manipulate your personal weak spots. And this isn’t a hypothetical possibility: Privacy researcher Pam Dixon discovered that a data company called PK List Management offers a list of customers titled “Free to Me—Impulse Buyers”; those listed are described as being highly susceptible to pitches framed as sweepstakes.