“Wait,” Craine said. “You spoke of ‘handshakes.’ What’s that?”
“Entrance code, that’s all. Every big computer has a code you have to know to get into it. You give the computer the secret handshake and it’s willing to talk to you.”
“And it’s possible to figure these things out?”
“To some extent. It all depends. Mostly you get the code from some person who knows it — an officer of the company who’s a friend of yours, for instance. You’d be surprised how careless people are about codes. Mostly, I suppose, they have so little understanding of the computers, they’re unaware of the risk.”
“What are the risks?”
“Theft, sabotage. A good computer freak might get into the IRS computer and erase its whole file on him, or change it to gibberish, or assign it to Richard Nixon. Or he might add new features to the central computer’s program — little subroutines that amuse him or somehow benefit him. For instance, in one of the more elegant so-called computer crimes, someone as yet unidentified got into one of the big electric company computers and persuaded it that every time it rounded off to the nearest cent, it should drop the remainder in his bank account. Three million half pennies a month — that’s not bad pay for maybe twenty minutes’ work.”
“They happen often, these ‘so-called computer crimes’?”
“Nobody really knows. According to the FBI, about one percent get reported; I imagine that’s just about right.”
“And they pay pretty well, you say?”
“I read somewhere a while ago that in the average burglary, the take is $42.50, and with the average bank robbery the take is about $3,500. In the average computer crime — this is just in the one percent reported, within which one percent almost nobody gets caught — the take is $500,000.”
“That makes it very tempting. You ever thought of it yourself, Professor?”
“Naturally. Show me a first-rate computer man who tells you he hasn’t and I’ll show you a liar. I worked as a teller in a bank, years ago. We used to talk all through lunch about ways of stealing money — tellers, bookkeepers, even junior officers. We thought of some really foolproof schemes, but none of us ever took a nickel, so far as I know. It’s a matter of personality, motivation — satisfaction with your work, how your personal life’s going …”
“How much would I have to know to commit a computer crime?”
“That’s hard to say. It’s as much a matter of native intelligence as it is your knowledge of handshakes or math or computer languages. I can tell you this: everyone down here except a few of the programmers could handle it.”
“Could Ira Katz?”
“I think he’d have to have help. That’s just a guess.”
“I assume you’re granting him native intelligence.”
“No question. But I think he worked with others, mainly. More a concept man than a hacker.”
“Mmm. A minute ago you said—” Craine paused, studied his pad. “I may have gotten lost, but let me ask you this anyway. A minute ago you said there are two ways computers can mess up reality. One of them you’ve talked about, how computers can change things that happen in the world — how in fact they can become so integral to what happens that they can no longer be, you might say, factored out.”
“Exactly. In the new world they’ve helped create, they’re a vital organ. Shut them down and you shut down the civilization.”
“I understand that, I think. Tell me the second point — how computers intercede, I think you said, between human beings and the world.”
“Something like this. It’s oversimplified, but it will give you the idea. What people think, generally, is that the computer does what the programmer tells it to, and since it’s locked in to effective procedures, it can never go wrong. That’s not exactly true. The truth is more nearly that the man at the console has very little notion of what’s going on in the mind of the computer. He sees lights flash on and off, and he knows it’s thinking something, but he has no idea what; in fact vast hunks of the computer’s thinking go on between blinks, not in the central routine of the computer but somewhere in the miles and miles of shadow.”
“I’m not following.”
“No, right. Look. I mentioned routines. Say we have a standard routine — that is, a set of algorithmic instructions — for adding numbers. Now say one of the numbers to be added is √25. You can’t add square roots in with ordinary numbers, so when we get to √25 we have to stop adding — step out of the main routine, so to speak — and move to a different routine, call it a subroutine, which is designed to do nothing but figure out square roots. The subroutine rumbles along, off by itself, until it figures out that √25 = 5, at which point we ‘leave’ the subroutine and reenter the routine. This detour has taken us, on a slow computer, maybe a millionth of a second. So we’re clear now on routines and subroutines, right?
“All right. All these subroutines you keep in the computer — they’re part of its methodological memory, one of many kinds of memory. In a really complicated mathematical problem you might leave the routine and enter some sub or sub-sub or sub-sub-sub routine a hundred, two hundred, a thousand times. How does one man, in a single lifetime, program them all in, you ask me? The answer is, he doesn’t — and therein lies a tale.
“It’s a community effort, like the evolution of the universe. One programmer puts in the routine for square roots. Another, another day, puts in the routine for quadratic equations. Still another, another day — and so on and so on, generation on generation. The computer’s gifts and capabilities grow. Not only mathematicians make use of it but also demographers, physicists, psychologists, chess players. The computer begins to make decisions for itself — decisions we’re not even aware that it’s making. For example: I program in the play, at random intervals, of two games simultaneously — chess and pinochle. Sooner or later the call for a chess move and a pinochle move will coincide, and the computer will have to decide, if it can, which move to make first, chess or pinochle. Does the computer jam? go crazy? As it happens, it does not. Some sociologist happens to have left in it — maybe years ago — a formula stating that chess is a game of the upper class, pinochle a game of the lower class, and another formula, or symbolic statement to be more precise, maintaining that the lower class tends to imitate the upper: so the computer plays the chess piece first.
“Wonderful, you may say. So all the people in AI, as it’s called — artificial intelligence — are quick to yell. But it seems a little odd that we should be so quick to embrace an intelligence utterly different from our own — exclusively left-hemisphere intelligence, if you will — and an intelligence we have no way to check on. Nearly all our existing programs, and especially the largest and most important ones, are patchworks of the kind I’ve somewhat metaphorically described. They’re heuristic in the sense that their construction is based on rules of thumb — stratagems that appear to ‘work’ under most foreseen circumstances — and ad hoc mechanisms patched in from time to time. The gigantic programs that run business and industry and, above all, government have almost all been put together — one can’t even say ‘designed’ — by teams of programmers whose work has been spread over many years. By the time these systems are put on line, most of the original programmers have left or turned their attentions to other pursuits. A man named Marvin Minsky’s found a very good way of expressing it: a large computer program, he says, is like an intricately connected network of courts of law, that is, of subroutines, to which evidence is submitted by other subroutines. These courts weigh (evaluate) the data given them and then transmit their judgments to still other courts. The verdicts rendered by these courts may — indeed, often do — involve decisions about what court has ‘jurisdiction’ over the intermediate results then being manipulated. The programmer thus cannot even know the path of decision making within his own program, let alone what intermediate or final results it will produce.”