In the best case the commander’s intent is known and understood by all sailors or soldiers so that as the plan deteriorates in battle, individuals can use adaptive behavior to advance the mission. “The commander’s intent,” suggests Chuck, “should be embodied, embedded, in the warriors.” That means that Command and Control has to be involved in the design of military Evolvabots, because intelligence and intent, as we’ve seen throughout this book, are part of not just the programmable nervous system but also the type, arrangement, and quality of sensors, motors, and chassis.
After talking to Chuck I was wondering if the commander’s intent (CI) itself could serve as the fitness function for military Evolvabots. Whatever the CI—cause maximum damage to target X or guard squadron Y or rescue fleet Z—the ongoing performance of individual Evolvabots can be judged relative to it. Because the performance of each individual is compared to that of others in its population, the feedback about what works is automatically relevant. It turns out that engineers working for the US Army have already tried, in digital simulation, the idea of using the CI as the fitness function: “Evolution continues until the system produces a final population of high-performance plans which achieve the commander’s intent for the mission.” [214] R. H. Kewley and M. J. Embrechts, “Computational Military Tactical Planning System,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 32, no. 2 (2002): 161–171.
Did you get that? Tactical plans, which are extremely complicated themselves, can be evolved using genetic algorithms that use the CI as the fitness function.
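To make the idea concrete, here is a minimal sketch of a genetic algorithm in which a commander’s-intent scoring function plays the role of fitness. This is not the Kewley–Embrechts system; the “plan” encoding (a sequence of moves along a line), the hazard and target, and every number here are invented for illustration.

```python
import random

TARGET, HAZARD, PLAN_LEN = 10, 4, 12
MOVES = [-1, 0, 1, 2]  # a plan is a sequence of moves; +2 can "jump" the hazard

def commander_intent(plan):
    """Fitness = the CI: end near the target, never occupy the hazard."""
    position, score = 0, 0.0
    for step in plan:
        position += step
        if position == HAZARD:
            score -= 2.0                 # penalty for violating the intent
    score -= abs(TARGET - position)      # reward ending close to the target
    return score

def evolve(pop_size=60, generations=120, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(MOVES) for _ in range(PLAN_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=commander_intent, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, PLAN_LEN)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(PLAN_LEN):            # point mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.choice(MOVES)
            children.append(child)
        pop = parents + children
    return max(pop, key=commander_intent)

best = evolve()
print(commander_intent(best))
```

The key design point is that the CI appears nowhere except in the scoring function: selection, crossover, and mutation are generic, so swapping in a different intent (guard, rescue, damage) means swapping only `commander_intent`.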
The fundamental communication issue on the battlefield is that small groups and isolated individuals have to make decisions without checking with their commanders. As Lieutenant Colonel Lawrence G. Shattuck, professor of Behavioral Sciences and Leadership at the US Military Academy, West Point, has written, the pace of events on the battlefield often precludes direct contact with superiors even if communication channels are open. [215] L. G. Shattuck, “Communicating Intent and Imparting Presence,” Military Review 80, pt. 2 (March–April 2000): 66–72.
Once communication ceases, for whatever reason, soldiers need to know the CI to help frame their decisions; they must, as Shattuck puts it, get inside the commander’s head to know how she would make the decision.
EVOLVABOTS GET A CONSCIENCE
The central technical challenge is this: get autonomous robots working, communicating, self-repairing, reproducing, and evolving in the wild, without help from humans. Proximal challenges, in addition to the ones already discussed in this chapter, include the following:
* How do we embed the fitness function in a population of freely roaming Evolvabots?
* How do we have that fitness function, which is imposed by humans, be an automatic part of the world in which the Evolvabots are working?
* Or do we let the fitness function be unspecified but emerge from the survival of the robots in the world?
* In any of these scenarios, how do we monitor and control Evolvabots in the wild?
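The third option above, letting a fitness function emerge from survival rather than imposing one, can be sketched in a few lines. This is a toy artificial-life model, not anyone’s fielded system: each agent carries a single invented “efficiency” gene, harvests energy in proportion to it, reproduces when energy is plentiful, and dies when energy runs out. No human-written score is ever computed, yet selection happens anyway.

```python
import random

def run(generations=200, seed=1):
    rng = random.Random(seed)
    # agent = [efficiency_gene, energy]; genes start uniform in [0, 1]
    agents = [[rng.random(), 8.0] for _ in range(40)]
    for _ in range(generations):
        next_gen = []
        for gene, energy in agents:
            energy += gene * rng.uniform(1.0, 3.0)  # harvest scales with gene
            energy -= 1.0                           # metabolic cost
            if energy <= 0:
                continue                            # death: implicit selection
            if energy > 10.0:                       # reproduce with mutation,
                child_gene = min(1.0, max(0.0, gene + rng.gauss(0, 0.05)))
                next_gen.append([child_gene, energy / 2])
                energy /= 2                         # ...splitting the energy
            next_gen.append([gene, energy])
        agents = next_gen[:200]                     # cap the population
        if not agents:
            break
    return agents

agents = run()
mean_gene = sum(g for g, _ in agents) / len(agents)
print(round(mean_gene, 2))
```

Running this drives the mean gene value upward without any explicit fitness function, which is exactly what makes the monitoring question in the last bullet so hard: there is no scoring function a human can inspect or adjust.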
If these issues can be solved, then everything in this chapter is feasible.
But wait. Just because we can do something, does that mean that we want to—or should? The central ethical challenge, framed by Ronald Arkin in his research on the topic for the US Army Research Office, is this: get autonomous robots to behave “within the bounds prescribed by the Laws of War and Rules of Engagement.” [216] Ronald Arkin, an expert robotics engineer, is the leader in considering both the practical and philosophical aspects of the ethics of using robots in war. His first paper on the subject is a good place to start: “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture—Part 1: Motivation and Philosophy,” Proceedings of Human-Robot Interaction 2008, Amsterdam, Netherlands, 2008.
“The advent of autonomous robotics in the battlefield,” writes Arkin, “as with any new technology, is primarily concerned with Jus in Bello [acceptable limits to conduct in war], that is, defining what constitutes the ethical use of these systems during conflict, given military necessity.” [217] Ronald Arkin, Governing Lethal Behavior in Autonomous Robots (Boca Raton, FL: Chapman & Hall/CRC, 2009), 2.
Arkin’s goal—which I support wholeheartedly—is to have our military robots outperform our human soldiers in terms of ethical conduct. Evolving robots need a conscience.
FIGURE 8.4. So many possible paths, so little retro-futuristic time. The author points in the conceptual direction that he predicts the field of evolutionary biorobotics will take. He is correct 100 percent of the time.
CLOSING AND OPENING REMARKS—RETROFUTURISM
Predicting the future is even easier than understanding the past. That is the fundamental tenet (tenet number 1), as I see it, of the art movement created by Lloyd Dunn, known as Retrofuturism (Figure 8.4). As you’ve seen in this chapter, I’ve been able to predict, with virtually no data to support my arguments, a scenario in which we are at the beginning of a new kind of military arms race. Evolving robots, I claim, will alter the way we fight wars and defend ourselves. For retrofuturistic completeness I should also predict the exact opposite (tenet number 2), namely that evolving robots are a trivial sideshow in the growing field of robotics and have nothing to tell us about the future of warfare.
I don’t really think that the second prediction is true. Too bad. In all seriousness, this chapter has been a bummer, right? Who wants to talk about war and autonomous killing machines when we can talk about studying the evolution of the first vertebrates instead? I don’t. But the reality is that evolving robots are and will be created for academic, industrial, and military purposes. This means that we should all become students of robots of any kind, whether they be evolving robots, nonevolving autonomous robots, or semiautonomous and remotely controlled military robots. We need to understand robots so we can proceed with due caution and deliberation. No secrets. No surprises.
Now for an apology. In this book I’ve covered just a tiny sliver of the world of robotics: evolving robots. And I haven’t even done that little bit justice. I’m sorry. For example, I’ve talked mostly about the work done by myself and my collaborators, referring just occasionally and superficially to the great researchers who have inspired us: Ronald Arkin, who is creating the field of robot ethics after unifying behavior-based robotics; Barbara Webb, who created biorobotics; Stefano Nolfi and Dario Floreano, who created evolutionary robotics; and Valentino Braitenberg and Rodney Brooks, who cocreated the field of behavior-based robotics that jump-started all of the above. To help overcome my guilt for giving all these masters short shrift, I’ll tell you that they all have written great books on their subjects, and you should read them.
I should make a parallel apology for the world of evolutionary biology. Because evolutionary biology has a head start of over a hundred years on robotics, my omissions of names there are even more difficult to characterize and recognize. I can tell you that I’ve largely ignored the fascinating world of EvoDevo, the evolution of ontogenies and the constraints and possibilities that developmental systems give to the species they construct. Sean Carroll is the place to start reading there. Vertebrate paleontologists like David Raup, Steven Stanley, Robert Carroll, and Michael Benton ought to feel slighted because they have carried the torch and blazed the trail with their excellent textbooks. The great biomechanicists, McNeill Alexander, Tom Daniel, John Gosline, Mark Denny, Paul Webb, and Andy Biewener, get nary a mention. You have to leave out a lot, I’ve learned, when you write a book.