EVGENY MOROZOV
Contributing editor, The New Republic; syndicated columnist; author, To Save Everything, Click Here: The Folly of Technological Solutionism
I worry that as the problem-solving power of our technologies increases, our ability to distinguish between important and trivial or even nonexistent problems diminishes. Just because we have “smart” solutions to fix every single problem under the sun doesn’t mean that all of those problems deserve our attention. In fact, some of them may not be problems at all; that certain social and individual situations are awkward, imperfect, noisy, opaque, or risky might be by design. Or, as the geeks like to say, some bugs are not bugs; some bugs are features.
I find myself preoccupied with the invisible costs of “smart” solutions in part because Silicon Valley mavericks are not lying to us: Technologies are becoming not only more powerful but also more ubiquitous. We used to think that, somehow, digital technologies lived in a national reserve of some kind; first we called this imaginary place “cyberspace,” and then we switched to the more neutral label of “Internet.” It’s only in the last few years, with the proliferation of geolocational services, self-driving cars, and smart glasses, that we have grasped that such national reserves were perhaps a myth and that digital technologies would be everywhere: in our fridges, on our belts, in our books, in our trash bins.
All this smart awesomeness will make our environment more plastic and more programmable. It will also tempt us to design out all imperfections—just because we can!—from our interactions, social institutions, politics. Why have an expensive law-enforcement system if we can design smart environments where no crimes are committed, simply because those people deemed “risky”—based, no doubt, on their online profiles—are barred from access and thus unable to commit crimes in the first place? So we are faced with a dilemma: Do we want some crime or no crime? What would we lose—as a democracy—in a world without crime? Would our debate suffer, with no legal cases left for the media and the courts to review? This is an important question that I’m afraid Silicon Valley, with its penchant for efficiency and optimization, might not get right.
Or take another example: If, through the right combination of reminders, nudges, and virtual badges, we can get people to be “perfect citizens”—recycle, show up for elections, care about urban infrastructure—should we take advantage of the possibilities offered by smart technologies? Or should we, perhaps, accept that slacking off and idleness, in small doses, are productive in that they create spaces and openings where citizens can still be reached by deliberation and moral argument, not just the promise of a better shopping discount courtesy of their smartphone app?
If problem solvers can get you to recycle via a game, would they even bother with the less effective path of engaging you in moral reasoning? The difference is that those people earning points in a game might end up not knowing anything about the “problem” they were solving, while those who had been through the argument would have a tiny chance of grasping the issue’s complexity and doing something that would matter in the years to come, not just today.
Alas, smart solutions don’t translate into smart problem solvers. In fact, the opposite might be true: Blinded by the awesomeness of our tools, we might forget that some problems and imperfections are just the normal costs of accepting the social contract of living with other human beings, treating them with dignity, and ensuring that, in our recent pursuit of a perfect society, we do not shut the door to change. Change usually happens in rambunctious, chaotic, and imperfectly designed environments; sterile environments, where everyone is content, are not known for innovation, of either the technological or the social variety. When it comes to smart technologies, there’s such a thing as too “smart,” and it isn’t pretty.
THE STIFLING OF TECHNOLOGICAL PROGRESS
DAVID PIZARRO
Associate professor of psychology, Cornell University
It is increasingly clear that human intuitions—particularly our social and moral intuitions—are ill equipped to deal with the rapid pace of technological innovation. We should be worried that this will hamper the adoption of technologies that might otherwise be of practical benefit to individuals and great benefit to society. Here’s an example: My e-mail provider has long been able to generate targeted advertisements based on the content of my e-mail. But it can now also suggest a calendar entry for an upcoming appointment mentioned in an e-mail, track my location as the appointment approaches, alert me when it’s time to leave, and pull up driving directions to get me there on time.
It feels natural to say that Google “reads my e-mail” and that it “knows where I have to be.” We can’t help but interpret this automated information through the lens of our social intuitions, and we end up perceiving agency and intentionality where there is none. So even if we know that no human eyes have seen our e-mails, it can still feel, well, creepy—as if we’re not quite sure that there isn’t someone going through our stuff, following us around, and possibly talking about us behind our backs. Unsurprisingly, many view these services as a violation of privacy, even when there’s no agent doing the “violating.” The adoption of such technologies has suffered for these reasons.
These social intuitions interfere with the adoption of technologies offering more than mere convenience. For instance, the technology for self-driving cars exists now and promises to save thousands of lives each year by reducing traffic collisions. But the technology depends fundamentally on the ability to track one’s precise location at all times. This is just creepy enough that many people will likely avoid the technology and opt for the riskier option of driving themselves.
Of course, we’re not necessarily at the whims of our psychological intuitions. Given enough time we can (and do) learn to set them aside when necessary. However, I doubt we can do so quickly enough to match the current speed of technological innovation.
THE RISE OF ANTI-INTELLECTUALISM AND THE END OF PROGRESS
TIM O’REILLY
Founder and CEO of O’Reilly Media
For many in the techno-elite, even those who don’t entirely subscribe to the unlimited optimism of the Singularity, the notion of perpetual progress and economic growth is somehow taken for granted. As a former classicist turned technologist, I’ve lived with the shadow of the fall of Rome, the failure of its intellectual culture, and the stasis that gripped the Western world for the better part of 1,000 years. What I fear most is that we will lack the will and foresight to face the world’s problems squarely and will instead retreat from them into superstition and ignorance.
Consider how in A.D. 375, after a dream in which he was whipped for being a “Ciceronian” rather than a Christian, St. Jerome resolved to abandon the classical authors and restrict himself to Christian texts, and how in A.D. 415 the Christians of Alexandria murdered the philosopher and mathematician Hypatia—and realize that, at least in part, the Dark Ages were not something imposed from without, a breakdown of civilization due to barbarian invasions, but a choice, a turning away from knowledge and discovery into a kind of religious fundamentalism. Now consider how conservative elements in American religion and politics refuse to accept scientific knowledge and deride their opponents for being “reality-based,” and ask yourself, “Could that ideology come to rule the most powerful nation on Earth? And if it did, what would be the consequences for the world?”