PAT: This reminds me of a Taoist story I read. It goes something like this. Two sages were standing on a bridge over a stream. One said to the other, “I wish I were a fish. They are so happy!” The second replied, “How do you know whether fish are happy or not? You’re not a fish.” The first said, “But you’re not me, so how do you know whether I know how fish feel?”
SANDY: Beautiful! Talking about consciousness really does call for a certain amount of restraint. Otherwise you might as well just jump on the solipsism bandwagon—“I am the only conscious being in the universe!”—or the panpsychism bandwagon—“Everything in the universe is conscious!”
PAT: Well, how do you know? Maybe everything is conscious.
SANDY: If you’re going to join those who claim that stones, and even particles like electrons, have some sort of consciousness, then I guess we part company here. That’s a kind of mysticism I can’t fathom. As for chess programs, I happen to know how they work, and I can tell you for sure that they aren’t conscious! No way!
PAT: Why not?
SANDY: They incorporate only the barest knowledge about the goals of chess. The notion of “playing” is turned into the mechanical act of comparing a lot of numbers and choosing the biggest one over and over again. A chess program has no sense of shame about losing or pride in winning. Its self-model is very crude. It gets away with doing the least it can, just enough to play a game of chess and nothing more. Yet, interestingly enough, we still tend to talk about the “desires” of a chess-playing computer. We say, “It wants to keep its king behind a row of pawns,” or “It likes to get its rooks out early,” or “It thinks I don’t see that hidden fork.”
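(A minimal Python sketch of the “comparing a lot of numbers and choosing the biggest one” loop Sandy describes. Everything in it, the piece values and the move representation, is invented for illustration; a real chess engine searches many moves ahead, but the core mechanical act is the same.)

```python
def evaluate(position):
    """Hypothetical static evaluation: count our material, nothing more."""
    piece_values = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
    return sum(piece_values.get(piece, 0) for piece in position)

def choose_move(moves_and_results):
    """Pick the move whose resulting position scores highest."""
    best_move, best_score = None, float("-inf")
    for move, resulting_position in moves_and_results:
        score = evaluate(resulting_position)
        if score > best_score:  # "choosing the biggest one over and over"
            best_move, best_score = move, score
    return best_move  # no shame, pride, or desire anywhere in the loop

# Toy usage: two candidate moves, each paired with the position it leads to.
print(choose_move([("Nxe5", ["Q", "R", "N", "P"]),
                   ("Qd3",  ["Q", "R", "N"])]))   # -> "Nxe5"
```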
PAT: Well, we do the same thing with insects. We spot a lonely ant somewhere and say, “It’s trying to get back home” or “It wants to drag that dead bee back to the colony.” In fact, with any animal we use terms that indicate emotions, but we don’t know for sure how much the animal feels. I have no trouble talking about dogs and cats being happy or sad, having desires and beliefs and so on, but of course I don’t think their sadness is as deep or complex as human sadness is.
SANDY: But you wouldn’t call it “simulated sadness,” would you?
PAT: No, of course not. I think it’s real.
SANDY: It’s hard to avoid use of such teleological or mentalistic terms. I believe they’re quite justified, although they shouldn’t be carried too far. They simply don’t have the same richness of meaning when applied to present-day chess programs as when applied to people.
CHRIS: I still can’t see that intelligence has to involve emotions. Why couldn’t you imagine an intelligence that simply calculates and has no feelings?
SANDY: A couple of answers here! Number one, any intelligence has to have motivations. It’s simply not the case, whatever many people may think, that machines could think any more “objectively” than people do. When machines look at a scene, they will have to focus and filter it down into some preconceived categories, just as a person does, and that means giving more weight to some things than to others. This happens on every level of processing.
PAT: What do you mean?
SANDY: Take me right now, for instance. You might think that I’m just making some intellectual points, and I wouldn’t need emotions to do that. But what makes me care about these points? Why did I stress the word “care” so heavily? Because I’m emotionally involved in this conversation! People talk to each other out of conviction, not out of hollow, mechanical reflexes. Even the most intellectual conversation is driven by underlying passions. There’s an emotional undercurrent to every conversation—it’s the fact that the speakers want to be listened to, understood, and respected for what they are saying.
PAT: It sounds to me as if all you’re saying is that people need to be interested in what they’re saying, otherwise a conversation dies.
SANDY: Right! I wouldn’t bother to talk to anyone if I weren’t motivated by interest. And interest is just another name for a whole constellation of subconscious biases. When I talk, all my biases work together, and what you perceive on the surface level is my style, my personality. But that style arises from an immense number of tiny priorities, biases, leanings. When you add up a million of these interacting together, you get something that amounts to a lot of desires. It just all adds up! And that brings me to the other point, about feelingless calculation. Sure, that exists—in a cash register, a pocket calculator. I’d even say it’s true of all today’s computer programs. But eventually, when you put enough feelingless calculations together in a huge coordinated organization, you’ll see something that has properties on another level. You can see it—in fact, you have to see it—not as a bunch of little calculations, but as a system of tendencies and desires and beliefs and so on. When things get complicated enough, you’re forced to change your level of description. To some extent that’s already happening, which is why we use words such as “want,” “think,” “try,” and “hope” to describe chess programs and other attempts at mechanical thought. Dennett calls that kind of level switch by the observer “adopting the intentional stance.” The really interesting things in AI will only begin to happen, I’d guess, when the program itself adopts the intentional stance toward itself!
CHRIS: That would be a very strange sort of level-crossing feedback loop.
SANDY: It certainly would. Of course, it’s highly premature for anyone to adopt the intentional stance, in the full force of the term, toward today’s programs. At least that’s my opinion.
CHRIS: For me an important related question is: To what extent is it valid to adopt the intentional stance toward beings other than humans?
PAT: I would certainly adopt the intentional stance toward mammals.
SANDY: I vote for that.
CHRIS: That’s interesting! How can that be, Sandy? Surely you wouldn’t claim that a dog or cat can pass the Turing test? Yet don’t you think that the Turing test is the only way to test for the presence of thought? How can you hold both of those beliefs at once?
SANDY: Hmm.... All right. I guess I’m forced to admit that the Turing test works only above a certain level of consciousness. There can be thinking beings that could fail the test—but on the other hand, anything that passes it, in my opinion, would be a genuinely conscious thinking being.
PAT: How can you think of a computer as a conscious being? I apologize if this sounds like a stereotype, but when I think of conscious beings, I just can’t connect that thought with machines. To me consciousness is connected with soft, warm bodies, silly though that may seem.
CHRIS: That does sound odd, coming from a biologist. Don’t you deal with life in terms of chemistry and physics enough for all that magic to seem to vanish?
PAT: Not really. Sometimes the chemistry and physics just increase the feeling that there’s something magical going on down there! Anyway, I can’t always integrate my scientific knowledge with my gut-level feelings.
CHRIS: I guess I share that trait.
PAT: So how do you deal with rigid preconceptions like mine?
SANDY: I’d try to dig down under the surface of your concept of “machines” and get at the intuitive connotations that lurk there, out of sight, but deeply influencing your opinions. I think that we all have a holdover image from the Industrial Revolution that sees machines as clunky iron contraptions gawkily moving under the pressure of some loudly chugging engine. Possibly that’s even how the computer inventor Charles Babbage viewed people! After all, he called his magnificent many-geared computer the Analytical Engine.