I have little doubt that this can happen: Our brains are a bunch of particles obeying the laws of physics, and there’s no physical law precluding particles from being arranged in ways that can perform even more advanced computations.
But will it happen anytime soon? Many experts are skeptical, while others, such as Ray Kurzweil, predict it will happen by 2030. What I think is quite clear, however, is that if it happens, the effects will be explosive. As the late Oxford mathematician Irving J. Good realized in 1965 (“Speculations Concerning the First Ultraintelligent Machine”), machines with superhuman intelligence could rapidly design even better machines. In 1993, mathematician and science-fiction author Vernor Vinge called the resulting intelligence explosion “The Singularity,” arguing that it was a point beyond which it was impossible for us to make reliable predictions. After this, life on Earth would never be the same, either objectively or subjectively.
Objectively, whoever or whatever controls this technology would rapidly become the world’s wealthiest and most powerful entity, outsmarting all financial markets, outinventing and outpatenting all human researchers, and outmanipulating all human leaders. Even if we humans nominally merge with such machines, we might have no guarantees about the ultimate outcome, making it feel less like a merger and more like a hostile corporate takeover.
Subjectively, these machines wouldn’t feel as we do. Would they feel anything at all? I believe that consciousness is the way information feels when being processed. I therefore think it’s likely that they, too, would feel self-aware and should be viewed not as mere lifeless machines but as conscious beings like us—but with a consciousness that subjectively feels quite different from ours.
For example, they would probably lack our human fear of death. As long as they’ve backed themselves up, all they stand to lose are the memories they’ve accumulated since their latest backup. The ability to readily copy information and software between AIs would probably reduce the strong sense of individuality so characteristic of human consciousness: There would be less of a distinction between you and me if we could trivially share and copy all our memories and abilities. So a group of nearby AIs may feel more like a single organism with a hive mind.
In summary, will there be a Singularity within our lifetime? And is this something we should work for or against? On the one hand, it might solve most of our problems, even mortality. It could also open up space, the final frontier. Unshackled by the limitations of our human bodies, such advanced life could rise up and eventually make much of our observable universe come alive. On the other hand, it could destroy life as we know it and everything we care about.
We’re nowhere near consensus on either of these two questions, but that doesn’t mean it’s rational for us to do nothing about the issue. It could be the best or worst thing ever to happen to life as we know it, so if there’s even a 1-percent chance that there will be a Singularity in our lifetime, a reasonable precaution would be to spend at least 1 percent of our GDP studying the issue and deciding what to do about it. Yet we largely ignore it and are curiously complacent about life as we know it getting transformed. What we should be worried about is that we’re not worried.
“THE SINGULARITY”: THERE’S NO THERE THERE
BRUCE STERLING
Futurist, science fiction author, journalist, critic; author, Love Is Strange (A Paranormal Romance)
Twenty years have passed since Vernor Vinge wrote his remarkably interesting essay about the Singularity.
This aging sci-fi notion has lost its conceptual teeth. Plus, its chief evangelist, visionary Ray Kurzweil, recently got a straight engineering job with Google. Despite its weird fondness for AR goggles and self-driving cars, Google is not going to finance any eschatological cataclysm in which superhuman intelligence abruptly ends the human era. Google is a firmly commercial enterprise.
It’s just not happening. All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We’re no closer to self-aware machines than we were in the remote 1960s. Modern wireless devices in a modern cloud are an entirely different cyberparadigm than imaginary 1990s “minds on nonbiological substrates” that might allegedly have the “computational power of a human brain.” A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there’s no there there.
So, as a pope once remarked, “Be not afraid.” We’re getting what Vinge predicted would happen without a Singularity, which is “a glut of technical riches never properly absorbed.” There’s all kinds of mayhem in that junkyard, but the AI Rapture isn’t lurking in there. It’s no more to be fretted about than a landing of Martian tripods.
CHARLES SEIFE
Professor of journalism, NYU; former staff writer, Science; author, Proofiness: The Dark Arts of Mathematical Deception
On April 5, 2010, deep in the Upper Big Branch mine in West Virginia, a spark ignited a huge explosion that rumbled through the tunnels and killed twenty-nine miners—the worst mining disaster in the United States in forty years. Two weeks later, the Deepwater Horizon, a drilling rig in the Gulf of Mexico, went up in flames, killing eleven workers and creating the largest marine oil spill in history. Though these two disasters seem completely unrelated, they had the same underlying cause: capture.
Federal agencies that regulate industry are supposed to prevent such disasters. Agencies like the Mine Safety and Health Administration (which sets the rules for mines) and the Minerals Management Service (which set the rules for offshore drilling) are supposed to constrain businesses—and to act as watchdogs—to force everyone to play by the rules. That’s the ideal, anyhow. The reality is a bit messier. More often than not, the agencies are reluctant to enforce the regulations they create. When a business gets caught breaking the rules, the regulatory agencies tend to impose penalties amounting to no more than a slap on the wrist. Companies like Massey Energy (which ran Upper Big Branch) and BP (which ran the Deepwater Horizon) flout the rules, and when disaster strikes, everybody wonders why regulators failed to take action despite numerous warning signs and repeated violations of regulations.
In the 1970s, economists, led by future Nobel laureate George Stigler, began to realize that this was the rule, not the exception. Over time, regulatory agencies are systematically drained of their ability to check the power of industry. Even more striking, they’re gradually drawn into the orbit of the businesses they’re charged with regulating. Instead of acting in the public interest, the regulators wind up as tools of the industry they’re supposed to keep watch over. This process, known as “regulatory capture,” turns regulators from watchdogs into lapdogs.
You don’t have to look far to see regulatory capture in action. Securities and Exchange Commission officials are often accused of ignoring warnings about fraud, stifling investigations, even helping miscreants avoid paying big fines or going to jail. Look at the Nuclear Regulatory Commission’s enforcement reports to see how capable it is of preventing energy companies from violating nuclear power plant safety rules again and again. Regulatory capture isn’t limited to the U.S. What caused the Fukushima disaster? Ultimately it was a “breakdown of the regulatory system” caused by “reversal of the positions between the regulator and the regulated,” at least according to a report prepared by the Japanese parliament. The regulator had become the regulated.