By the mid-1990s, there were hundreds of channels streaming out live programming twenty-four hours a day, seven days a week. Most of the programming was horrendous and boring: infomercials for new kitchen gadgets, music videos for the latest one-hit-wonder band, cartoons, and celebrity news. For any given viewer, only a tiny percentage of it was likely to be interesting.
As the number of channels increased, the standard method of surfing through them was getting more and more hopeless. It’s one thing to search through five channels. It’s another to search through five hundred. And when the number hits five thousand—well, the method’s useless.
But Negroponte wasn’t worried. All was not lost: in fact, a solution was just around the corner. “The key to the future of television,” he wrote, “is to stop thinking about television as television,” and to start thinking about it as a device with embedded intelligence. What consumers needed was a remote control that controls itself, an intelligent automated helper that would learn what each viewer watches and capture the programs relevant to him or her. “Today’s TV set lets you control brightness, volume, and channel,” Negroponte typed. “Tomorrow’s will allow you to vary sex, violence, and political leaning.”
And why stop there? Negroponte imagined a future swarming with intelligent agents to help with problems like the TV one. Like a personal butler at a door, the agents would let in only your favorite shows and topics. “Imagine a future,” Negroponte wrote, “in which your interface agent can read every newswire and newspaper and catch every TV and radio broadcast on the planet, and then construct a personalized summary. This kind of newspaper is printed in an edition of one…. Call it the Daily Me.”
The more he thought about it, the more sense it made. The solution to the information overflow of the digital age was smart, personalized, embedded editors. In fact, these agents didn’t have to be limited to television; as he suggested to the editor of the new tech magazine Wired, “Intelligent agents are the unequivocal future of computing.”
In San Francisco, Jaron Lanier responded to this argument with dismay. Lanier was one of the creators of virtual reality; since the eighties, he’d been tinkering with how to bring computers and people together. But the talk of agents struck him as crazy. “What’s got into all of you?” he wrote in a missive to the “Wired-style community” on his Web site. “The idea of ‘intelligent agents’ is both wrong and evil…. The agent question looms as a deciding factor in whether [the Net] will be much better than TV, or much worse.”
Lanier was convinced that, because they’re not actually people, agents would force actual humans to interact with them in awkward and pixelated ways. “An agent’s model of what you are interested in will be a cartoon model, and you will see a cartoon version of the world through the agent’s eyes,” he wrote.
And there was another problem: The perfect agent would presumably screen out most or all advertising. But since online commerce was driven by advertising, it seemed unlikely that the companies building these agents would roll out software that would do such violence to their bottom line. It was more likely, Lanier wrote, that these agents would come with double loyalties—bribable agents. “It’s not clear who they’re working for.”
It was a clear and plangent plea. But though it stirred up some chatter in online newsgroups, it didn’t persuade the software giants of this early Internet era. They were convinced by Negroponte’s logic: The company that figured out how to sift through the digital haystack for the nuggets of gold would win the future. They could see the attention crash coming, as the information options available to each person rose toward infinity. If you wanted to cash in, you needed to get people to tune in. And in an attention-scarce world, the best way to do that was to provide content that really spoke to each person’s idiosyncratic interests, desires, and needs. In the hallways and data centers of Silicon Valley, there was a new watchword: relevance.
Everyone was rushing to roll out an “intelligent” product. In Redmond, Microsoft released Bob—an entire interface for Windows built around the agent concept, anchored by a strange cartoonish avatar with an uncanny resemblance to Bill Gates. In Cupertino, more than a decade before the iPhone, Apple introduced the Newton, a “personal digital assistant” whose core selling point was the agent lurking dutifully just under its beige surface.
As it turned out, the new intelligent products bombed. In chat groups and on e-mail lists, there was practically an industry of snark about Bob. Users couldn’t stand it. PC World named it one of the twenty-five worst tech products of all time. And the Apple Newton didn’t do much better: Though the company had invested over $100 million in developing the product, it sold poorly in the first six months of its existence. When you interacted with the intelligent agents of the midnineties, the problem quickly became evident: They just weren’t that smart.
Now, a decade and change later, intelligent agents are still nowhere to be seen. It looks as though Negroponte’s intelligent-agent revolution failed. We don’t wake up and brief an e-butler on our plans and desires for the day.
But that doesn’t mean they don’t exist. They’re just hidden. Personal intelligent agents lie under the surface of every Web site we go to. Every day, they’re getting smarter and more powerful, accumulating more information about who we are and what we’re interested in. As Lanier predicted, the agents don’t work only for us: They also work for software giants like Google, dispatching ads as well as content. Though they may lack Bob’s cartoon face, they steer an increasing proportion of our online activity.
In 1995 the race to provide personal relevance was just beginning. More than perhaps any other factor, it’s this quest that has shaped the Internet we know today.
Jeff Bezos, the CEO of Amazon.com, was one of the first people to realize that you could harness the power of relevance to make a few billion dollars. From the start in 1994, his vision was to transport online bookselling “back to the days of the small bookseller who got to know you very well and would say things like, ‘I know you like John Irving, and guess what, here’s this new author, I think he’s a lot like John Irving,’” he told a biographer. But how to do that on a mass scale? To Bezos, Amazon needed to be “a sort of a small Artificial Intelligence company,” powered by algorithms capable of instantly matching customers and books.
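It is worth pausing on what “matching customers and books” means in practice. The sketch below is purely illustrative, not Amazon’s actual system: the customer names and purchase histories are invented, and it uses nothing fancier than co-purchase counts to automate the small bookseller’s “here’s an author a lot like John Irving” suggestion.

```python
# Illustrative sketch only -- a toy version of "customers who bought X also
# bought Y" matching, using invented purchase histories.
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: customer -> set of books bought.
purchases = {
    "alice": {"The World According to Garp", "A Prayer for Owen Meany", "Beloved"},
    "bob":   {"The World According to Garp", "A Prayer for Owen Meany"},
    "carol": {"A Prayer for Owen Meany", "The Cider House Rules"},
    "dave":  {"Beloved", "Song of Solomon"},
}

# Count how often each pair of books appears in the same customer's history.
co_bought = defaultdict(int)
for books in purchases.values():
    for a, b in combinations(sorted(books), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def recommend(book, top_n=3):
    """Return the books most often bought alongside `book`."""
    scores = {other: n for (b, other), n in co_bought.items() if b == book}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("A Prayer for Owen Meany"))
# e.g. ['The World According to Garp', 'Beloved', 'The Cider House Rules']
```

Real recommenders weight and normalize these counts and run at vastly larger scale, but the core move is the same: let other customers’ overlapping tastes stand in for the small bookseller’s memory of what you like.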
In 1994, as a young computer scientist working for Wall Street firms, Bezos had been hired by a venture capitalist to come up with business ideas for the burgeoning Web space. He worked methodically, making a list of twenty products the team could theoretically sell online—music, clothing, electronics—and then digging into the dynamics of each industry. Books started at the bottom of his list, but when he drew up his final results, he was surprised to find them at the top.
Books were ideal for a few reasons. For starters, the book industry was decentralized; the biggest publisher, Random House, controlled only 10 percent of the market. If one publisher wouldn’t sell to him, there would be plenty of others who would. And people wouldn’t need as much time to get comfortable with buying books online as they might with other products—a majority of book sales already happened outside of traditional bookstores, and unlike clothes, you didn’t need to try them on. But the main reason books seemed attractive was simply the fact that there were so many of them—three million active titles in 1994, versus three hundred thousand active CDs. A physical bookstore would never be able to inventory all those books, but an online bookstore could.