As Google’s protobrain increases in sophistication, it’ll open up remarkable new possibilities. Researchers in Indonesia can benefit from the latest papers from Stanford (and vice versa) without waiting for translation delays. Within a few years, it may be possible to have an automatically translated voice conversation with someone speaking a different language, opening up whole new channels of cross-cultural communication and understanding.
But as these systems become increasingly “intelligent,” they also become harder to control and understand. It’s not quite right to say they take on a life of their own—ultimately, they’re still just code. But they reach a level of complexity at which even their programmers can’t fully explain any given output.
This is already true to a degree with Google’s search algorithm. Even to its engineers, the workings of the algorithm are somewhat mysterious. “If they opened up the mechanics,” says search expert Danny Sullivan, “you still wouldn’t understand it. Google could tell you all two hundred signals it uses and what the code is and you wouldn’t know what to do with them.” The core software engine of Google search is hundreds of thousands of lines of code. According to one Google employee I talked to who had spoken to the search team, “The team tweaks and tunes, they don’t really know what works or why it works, they just look at the result.”
Google promises that it doesn’t stack the deck in favor of its own products. But the more complex and “intelligent” the system gets, the harder it’ll be to tell. Pinpointing where bias or error exists in a human brain is difficult or impossible—there are just too many neurons and connections to narrow it down to a single malfunctioning chunk of tissue. And as we rely on intelligent systems like Google’s more, their opacity could cause real problems—like the still-mysterious machine-driven “flash crash” that caused the Dow to drop 600 points in a few minutes on May 6, 2010.
In a provocative article in Wired, editor-in-chief Chris Anderson argued that huge databases render scientific theory itself obsolete. Why spend time formulating human-language hypotheses, after all, when you can quickly analyze trillions of bits of data and find the clusters and correlations? He quotes Peter Norvig, Google’s research director: “All models are wrong, and increasingly you can succeed without them.” There’s plenty to be said for this approach, but it’s worth remembering the downside: Machines may be able to see results without models, but humans can’t understand without them. There’s value in making the processes that run our lives comprehensible to the humans who, at least in theory, are their beneficiaries.
Supercomputer inventor Danny Hillis once said that the greatest achievement of human technology is tools that allow us to create more than we understand. That’s true, but the same trait is also the source of our greatest disasters. The more the code driving personalization comes to resemble the complexity of human cognition, the harder it’ll be to understand why or how it makes the decisions it does. A simple coded rule that bars people of one group or class from certain kinds of access is easy to spot, but when the same action is the result of a swirling mass of correlations in a global supercomputer, it’s a trickier problem. And the result is that it’s harder to hold these systems and their tenders accountable for their actions.
No Such Thing as a Free Virtual Lunch
In January 2009, if you were listening to one of twenty-five radio stations in Mexico, you might have heard the accordion ballad “El más grande enemigo.” Though the tune is polka-ish and cheery, the lyrics depict a tragedy: a migrant seeks to illegally cross the border, is betrayed by his handler, and is left in the blistering desert sun to die. Another song from the Migra corridos album tells a different piece of the same sad tale:
To cross the border
I got in the back of a trailer
There I shared my sorrows
With forty other immigrants
I was never told
That this was a trip to hell.
If the lyrics aren’t exactly subtle about the dangers of crossing the border, that’s the point. Migra corridos was produced by a contractor working for the U.S. Border Patrol, as part of a campaign to stem the tide of immigrants along the border. The song is a prime example of a growing trend in what marketers delicately call “advertiser-funded media,” or AFM.
Product placement has been in vogue for decades, and AFM is its natural next step. Advertisers love product placement because in a media environment in which it’s harder and harder to get people to pay attention to anything—especially ads—it provides a kind of loophole. You can’t fast-forward past product placement. You can’t miss it without missing some of the actual content. AFM simply takes that logic one step further: Media have always been vehicles for selling products, the argument goes, so why not cut out the middleman and have product makers produce the content themselves?
In 2010, Walmart and Procter & Gamble announced a partnership to produce Secrets of the Mountain and The Jensen Project, family movies that will feature characters using the companies’ products throughout. Michael Bay, the director of Transformers, has started a new company called the Institute, whose tagline is “Where Brand Science Meets Great Storytelling.” Hansel and Gretel in 3-D, its first feature production, will be specially crafted to provide product-placement hooks throughout.
Now that the video-game industry is far more profitable than the movie industry, it provides a huge opportunity for in-game advertising and product placement as well. Massive Incorporated, a game advertising platform reportedly acquired by Microsoft for between $200 million and $400 million, has placed ads on in-game billboards and city walls for companies like Cingular and McDonald’s, and has the capacity to track which individual users saw which advertisements for how long. Splinter Cell, a game by Ubisoft, works placements for products like Axe deodorant into the architecture of the cityscape that characters travel through.
Even books aren’t immune. Cathy’s Book, a young-adult title published in September 2006, has its heroine applying “a killer coat of Lipslicks in ‘Daring.’” That’s not a coincidence—Cathy’s Book was published by Procter & Gamble, the corporate owner of Lipslicks.
If the product placement and advertiser-funded media industries continue to grow, personalization will offer whole new vistas of possibility. Why name-drop Lipslicks when your reader is more likely to buy Cover Girl? Why have a video-game chase scene through Macy’s when the guy holding the controller is more of an Old Navy type? When software engineers talk about architecture, they’re usually talking metaphorically. But as people spend more of their time in virtual, personalizable places, there’s no reason that these worlds can’t change to suit users’ preferences. Or, for that matter, a corporate sponsor’s.
The enriched psychological models and new data flows measuring everything from heart rate to music choices open up new frontiers for online personalization, in which what changes isn’t just a choice of products or news clips, but the look and feel of the site on which they’re displayed.
Why should Web sites look the same to every viewer or customer? Different people don’t respond only to different products—they respond to different design sensibilities, different colors, even different types of product descriptions. It’s easy enough to imagine a Walmart Web site with softened edges and warm pastels for some customers and a hard-edged, minimalist design for others. And once that capacity exists, why stick with just one design per customer? Maybe it’s best to show me one side of the Walmart brand when I’m angry and another when I’m happy.