Looking into the future, Coyne told me, you’ll have people walking around with augmented displays. He described a guy on a night out: You walk into a bar, and a camera immediately scans the faces in the room and matches them against OkCupid’s databases. “Your accessories can say, that girl over there is an eighty-eight percent match. That’s a dream come true!”
Vladimir Nabokov once commented that “reality” is “one of the few words that mean nothing without quotes.” Coyne’s vision may soon be our “reality.” There’s tremendous promise in this vision: Surgeons who never miss a suture, soldiers who never imperil civilians, and everywhere a more informed, information-dense world. But there’s also danger: Augmented reality represents the end of naive empiricism, of the world as we see it, and the beginning of something far more mutable and weird: a real-world filter bubble that will be increasingly difficult to escape.
There’s plenty to love about this ubiquitously personalized future.
Smart devices, from vacuum cleaners to lightbulbs to picture frames, offer the promise that our environments will be exactly the way we want them, wherever we are. In the near future, ambient-intelligence expert David Wright suggests, we might even carry our room-lighting preferences with us; when there are multiple people in a room, a consensus could be automatically reached by averaging preferences and weighting for who’s the host.
AugCog-enabled devices will help us track the data streams that we consider most important. In some situations—say, medical or fire alerts that find ways to escalate until they capture our attention—they could save lives. And while brainwave-reading AugCog is probably some way off for the masses, consumer variants of the basic concept are already being put into place. Google’s Gmail Priority Inbox, which screens e-mails and highlights the ones it assesses as more important, is an early riff on the theme. Meanwhile, augmented-reality filters offer the possibility of an annotated and hyperlinked reality, in which what we see is infused with information that allows us to work better, assimilate information more quickly, and make better decisions.
That’s the good side. But there’s always a bargain in personalization: In exchange for convenience, you hand over some privacy and control to the machine.
As personal data become more and more valuable, the behavioral data market described in chapter 1 is likely to explode. When a clothing company determines that knowing your favorite color produces a $5 increase in sales, it has an economic basis for pricing that data point—and for other Web sites to find reasons to ask you. (While OkCupid is mum about its business model, it likely rests on offering advertisers the ability to target its users based on the hundreds of personal questions they answer.)
While many of these data acquisitions will be legitimate, some won’t be. Data are uniquely suited to gray-market activities, because they need not carry any trace of where they have come from or where they have been along the way. Wright calls this data laundering, and it’s already well under way: Spyware and spam companies sell questionably derived data to middlemen, who then add it to the databases powering the marketing campaigns of major corporations.
Moreover, because the transformations applied to your data are often opaque, it’s not always clear exactly what decisions are being made on your behalf, by whom, or to what end. This matters plenty when we’re talking about information streams, but it matters even more when this power is infused into our sensory apparatus itself.
In 2000, Bill Joy, the Sun Microsystems cofounder, wrote a piece for Wired magazine titled “Why the Future Doesn’t Need Us.” “As society and the problems that face it become more and more complex and machines become more and more intelligent,” he wrote, “people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones.”
That may often be the case: Machine-driven systems do provide significant value. The whole promise of these technologies is that they give us more freedom and more control over our world—lights that respond to our whims and moods, screens and overlays that allow us to attend only to the people we want to, so that we don’t have to do the busywork of living. The irony is that they offer this freedom and control by taking it away. It’s one thing when a remote control’s array of buttons gets in the way of something as basic as flipping channels. It’s another thing when what the remote controls is our lives.
It’s fair to guess that the technology of the future will work about as well as the technology of the past—which is to say, well enough, but not perfectly. There will be bugs. There will be dislocations and annoyances. There will be breakdowns that cause us to question whether the whole system was worth it in the first place. And we’ll live with the threat that systems made to support us will be turned against us—that a clever hacker who cracks the baby monitor now has a surveillance device, that someone who can interfere with what we see can expose us to danger. The more power we have over our own environments, the more power someone who assumes the controls has over us.
That is why it’s worth keeping the basic logic of these systems in mind: You don’t get to create your world on your own. You live in an equilibrium between your own desires and what the market will bear. And while in many cases this provides for healthier, happier lives, it also provides for the commercialization of everything—even of our sensory apparatus itself. There are few things uglier to contemplate than AugCog-enabled ads that escalate until they seize control of your attention.
We’re compelled to return to Jaron Lanier’s question: For whom do these technologies work? If history is any guide, we may not be the primary customer. And as technology gets better and better at directing our attention, we need to watch closely what it is directing our attention toward.
8
Escape from the City of Ghettos
In order to find his own self, [a person] also needs to live in a milieu where the possibility of many different value systems is explicitly recognized and honored. More specifically, he needs a great variety of choices so that he is not misled about the nature of his own person.
—Christopher Alexander et al.,
A Pattern Language
In theory, there’s never been a structure more capable of allowing all of us to shoulder the responsibility for understanding and managing our world than the Internet. But in practice, the Internet is headed in a different direction. Sir Tim Berners-Lee, the creator of the World Wide Web, captured the gravity of this threat in a recent call to arms in the pages of Scientific American titled “Long Live the Web.” “The Web as we know it,” he wrote, “is being threatened…. Some of its most successful inhabitants have begun to chip away at its principles. Large social-networking sites are walling off information posted by their users from the rest of the Web…. Governments—totalitarian and democratic alike—are monitoring people’s online habits, endangering important human rights. If we, the Web’s users, allow these and other trends to proceed unchecked, the Web could be broken into fragmented islands.”
In this book, I’ve argued that the rise of pervasive, embedded filtering is changing the way we experience the Internet and ultimately the world. At the center of this transformation is the fact that for the first time it’s possible for a medium to figure out who you are, what you like, and what you want. Even if the personalizing code isn’t always spot-on, it’s accurate enough to be profitable, not just by delivering better ads but also by adjusting the substance of what we read, see, and hear.