Giulio and his collaborators have measured a simplified version of Φ by using EEG to measure the brain's response to magnetic stimulation. Their "consciousness detector" works really well: it determined that patients were conscious when they were awake or dreaming, but unconscious when they were anesthetized or in deep sleep. It even discovered consciousness in two patients suffering from "locked-in" syndrome, who couldn't move or communicate in any normal way.19 This is emerging as a promising technology for helping doctors determine whether unresponsive patients are conscious.
Anchoring Consciousness in Physics
IIT is defined only for discrete systems that can be in a finite number of states, for example bits in a computer memory or oversimplified neurons that can be either on or off. This unfortunately means that IIT isn't defined for most traditional physical systems, which can change continuously—for example, the position of a particle or the strength of a magnetic field can take any of an infinite number of values.20 If you try to apply the IIT formula to such systems, you'll typically get the unhelpful result that Φ is infinite. Quantum-mechanical systems can be discrete, but the original IIT isn't defined for quantum systems.
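To make this concrete, here's a minimal sketch in Python (my own toy illustration, not the official IIT algorithm, which requires minimizing over all possible partitions of a system). For the simplest case of two binary units, a Φ-like integration measure for the single possible bipartition reduces to the mutual information between the two halves: it's 1 bit when the halves are perfectly correlated and 0 bits when the system splits into independent parts.

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information (in bits) across the bipartition of a two-unit
    system, given its 2x2 joint distribution p(x1, x2). For this simplest
    case, it serves as a crude stand-in for integrated information."""
    p1 = p_joint.sum(axis=1)  # marginal distribution of unit 1
    p2 = p_joint.sum(axis=0)  # marginal distribution of unit 2
    mask = p_joint > 0        # skip impossible states to avoid log(0)
    return (p_joint[mask] * np.log2(p_joint[mask] / np.outer(p1, p2)[mask])).sum()

# Two perfectly correlated bits: only the states 00 and 11 ever occur.
integrated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])

# Two independent fair bits: all four states equally likely.
independent = np.full((2, 2), 0.25)

print(mutual_information(integrated))   # 1.0 bit: the halves share information
print(mutual_information(independent))  # 0.0 bits: the "whole" is just two parts
```

The discreteness matters here: a continuous variable, discretized ever more finely, can store unboundedly many bits, which is why applying such measures directly to continuous systems yields infinities.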
So how can we anchor IIT and other information-based consciousness theories on a solid physical foundation? We can do this by building on what we learned in chapter 2 about how clumps of matter can have emergent properties related to information. We saw that for something to be usable as a memory device that can store information, it needs to have many long-lived states. We also saw that computronium, a substance that can perform computations, additionally requires complex dynamics: the laws of physics need to make it change in ways complicated enough to implement arbitrary information processing. Finally, we saw how a neural network, for example, is a powerful substrate for learning because, simply by obeying the laws of physics, it can rearrange itself to get better and better at implementing desired computations. Now we're asking an additional question: What makes a blob of matter able to have a subjective experience? In other words, under what conditions will a blob of matter be able to do these four things?
1. remember
2. compute
3. learn
4. experience
We explored the first three in chapter 2, and are now tackling the fourth. Just as Margolus and Toffoli coined the term computronium for a substance that can perform arbitrary computations, I like to use the term sentronium for the most general substance that has subjective experience (is sentient).*5
But how can consciousness feel so non-physical if it’s in fact a physical phenomenon? How can it feel so independent of its physical substrate? I think it’s because it is rather independent of its physical substrate, the stuff in which it is a pattern! We encountered many beautiful examples of substrate-independent patterns in chapter 2, including waves, memories and computations. We saw how they weren’t merely more than their parts (emergent), but rather independent of their parts, taking on a life of their own. For example, we saw how a future simulated mind or computer-game character would have no way of knowing whether it ran on Windows, Mac OS, an Android phone or some other operating system, because it would be substrate-independent. Nor could it tell whether the logic gates of its computer were made of transistors, optical circuits or other hardware. Or what the fundamental laws of physics are—they could be anything as long as they allow the construction of universal computers.
In summary, I think that consciousness is a physical phenomenon that feels non-physical because it's like waves and computations: it has properties independent of its specific physical substrate. This follows logically from the consciousness-as-information idea, and it leads to a radical conclusion that I really like: if consciousness is the way that information feels when it's processed in certain ways, then it must be substrate-independent; it's only the structure of the information processing that matters, not the structure of the matter doing the information processing. In other words, consciousness is substrate-independent twice over!
As we’ve seen, physics describes patterns in spacetime that correspond to particles moving around. If the particle arrangements obey certain principles, they give rise to emergent phenomena that are pretty independent of the particle substrate, and have a totally different feel to them. A great example of this is information processing, in computronium. But we’ve now taken this idea to another level: If the information processing itself obeys certain principles, it can give rise to the higher-level emergent phenomenon that we call consciousness. This places your conscious experience not one but two levels up from the matter. No wonder your mind feels non-physical!
This raises a question: What are these principles that information processing needs to obey to be conscious? I don't pretend to know what conditions are sufficient to guarantee consciousness, but here are four necessary conditions that I'd bet on and have explored in my research:

Information principle: A conscious system has substantial information-storage capacity.
Dynamics principle: A conscious system has substantial information-processing capacity.
Independence principle: A conscious system has substantial independence from the rest of the world.
Integration principle: A conscious system cannot consist of nearly independent parts.
As I said, I think that consciousness is the way information feels when being processed in certain ways. This means that to be conscious, a system needs to be able to store and process information, implying the first two principles. Note that the memory doesn’t need to last long: I recommend watching this touching video of Clive Wearing, who appears perfectly conscious even though his memories last less than a minute.21 I think that a conscious system also needs to be fairly independent from the rest of the world, because otherwise it wouldn’t subjectively feel that it had any independent existence whatsoever. Finally, I think that the conscious system needs to be integrated into a unified whole, as Giulio Tononi argued, because if it consisted of two independent parts, then they would feel like two separate conscious entities, rather than one. The first three principles imply autonomy: that the system is able to retain and process information without much outside interference, hence determining its own future. All four principles together mean that a system is autonomous but its parts aren’t.
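As a toy numerical illustration of the independence and autonomy claims (again my own sketch, with invented names and parameters, not anything from the IIT literature), consider a "system" bit that each time step either keeps its own previous state or gets overwritten by a randomly flipping "environment" bit. Comparing how well the system's next state can be predicted from its own past versus from the environment's past quantifies how much it determines its own future:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate(coupling, steps=50_000):
    """Toy world: a 'system' bit coupled to a randomly flipping 'environment'
    bit. Each step, the system copies the environment's previous state with
    probability `coupling`; otherwise it keeps its own previous state."""
    env = rng.integers(0, 2, size=steps)
    system = np.zeros(steps, dtype=int)
    for t in range(1, steps):
        system[t] = env[t - 1] if rng.random() < coupling else system[t - 1]
    return system, env

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two binary sequences."""
    joint = np.zeros((2, 2))
    np.add.at(joint, (x, y), 1)   # count joint occurrences of (x_t, y_t)
    joint /= joint.sum()          # normalize counts to a probability table
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return (joint[mask] * np.log2(joint[mask] / np.outer(px, py)[mask])).sum()

for coupling in (0.01, 0.99):
    system, env = simulate(coupling)
    own = mutual_information(system[:-1], system[1:])  # own past -> own future
    ext = mutual_information(env[:-1], system[1:])     # env past -> own future
    print(f"coupling={coupling}: I(own past)={own:.2f} bits, I(env past)={ext:.2f} bits")
```

With weak coupling, the system's future is determined almost entirely by its own past; with strong coupling it's driven from outside, and by the independence principle it would be a poor candidate for a separate conscious entity.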
If these four principles are correct, then we have our work cut out for us: we need to look for mathematically rigorous theories that embody them and test them experimentally. We also need to determine whether additional principles are needed. Regardless of whether IIT is correct or not, researchers should try to develop competing theories and test all available theories with ever better experiments.
Controversies of Consciousness
We’ve already discussed the perennial controversy about whether consciousness research is unscientific nonsense and a pointless waste of time. In addition, there are recent controversies at the cutting edge of consciousness research—let’s explore the ones that I find most enlightening.
Giulio Tononi’s IIT has lately drawn not merely praise but also criticism, some of which has been scathing. Scott Aaronson recently had this to say on his blog: “In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy and malleable that they can only aspire to wrongness.”22 To the credit of both Scott and Giulio, they never came to blows when I watched them debate IIT at a recent New York University workshop, and they politely listened to each other’s arguments. Aaronson showed that certain simple networks of logic gates had extremely high integrated information (Φ) and argued that since they clearly weren’t conscious, IIT was wrong. Giulio countered that if they were built, they would be conscious, and that Scott’s assumption to the contrary was anthropocentrically biased, much as if a slaughterhouse owner claimed that animals couldn’t be conscious just because they couldn’t talk and were very different from humans. My analysis, with which they both agreed, was that they were at odds about whether integration was merely a necessary condition for consciousness (which Scott was OK with) or also a sufficient condition (which Giulio claimed). The latter is clearly a stronger and more contentious claim, which I hope we can soon test experimentally.23