We’ve had enough theory and practice of evolution in action in this book that you can probably guess what I’m about to say. Here we go: any military that evolves robots on the battlefield will likely do so using the following principles:
* Robots are expendable. Given Pell’s Principle (see page 223), the only way to get large numbers of robots on the battlefield is to make them expendable, and that usually means inexpensive. Large numbers also ensure that sufficient variation is in place for selection to act on the population, and they serve the tactical advantage explained earlier: overwhelming the adversary with a swarm of simple autonomous agents.
* Robots are simple. This, too, follows from Pell’s Principle. The way to make robots expendable is to make them simple. Simple also usually means inexpensive to make and quick to produce. Employ the KISS principle in your design. Find the minimum brain, body, and behavior needed to seed your population. Choose which characters evolve.
* Robots evolve quickly. Given that generation time is one factor limiting the rate of evolution, make the generation time short. A short generation time will minimize the lag between a change on the battlefield and the corresponding change in the behavior and hardware of the population of robots.
* Robots evolve in small cohorts (small and genetically isolated subpopulations). Given that evolutionary change occurs rapidly in small, isolated populations, create many small companies of robots rather than a single large battalion. Keep in mind that random influences will dominate if a population is too small. Note that this may at first seem to run counter to principle 1, which calls for large numbers; you can satisfy both by keeping multiple populations, or companies, in simultaneous operation.
* Robots diversify in generational time. Given that evolutionary change will be rapid under principles (3) and (4), allow the companies of robots to speciate, to evolve along different evolutionary trajectories. Diversification will allow more and different kinds of adaptation to occur simultaneously, thus increasing the chances of both tracking changes in the environment and finding the best solution at a given time and place. (A sketch of how these five principles fit together follows this list.)
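For readers who like to see the moving parts, here is a minimal sketch, in Python, of how these five principles might be wired together as an island-model evolutionary loop. Everything in it, from the genome encoding to the stand-in fitness function, cohort sizes, and mutation rates, is an illustrative assumption, not a description of any real Evolvabot system.

```python
import random

# Minimal sketch of principles 1-5 as an island-model evolutionary loop.
# All names, sizes, and rates below are illustrative assumptions.

GENOME_LEN = 8        # principle 2: keep the evolving "brain/body" simple
COHORT_SIZE = 12      # principle 4: small, genetically isolated subpopulations
N_COHORTS = 5         # principle 1: many expendable robots overall
GENERATIONS = 50      # principle 3: short generation time, many generations

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome, battlefield):
    # Stand-in for performance in the field: here, match a moving target vector.
    return -sum((g - b) ** 2 for g, b in zip(genome, battlefield))

def mutate(genome, rate=0.2, scale=0.1):
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in genome]

def next_generation(cohort, battlefield):
    # Selection: the fitter half reproduces; offspring are mutated copies.
    ranked = sorted(cohort, key=lambda g: fitness(g, battlefield), reverse=True)
    parents = ranked[: len(ranked) // 2]
    return [mutate(random.choice(parents)) for _ in range(len(cohort))]

# Principle 5: cohorts evolve independently, so they can diversify while
# tracking a battlefield that changes over time.
cohorts = [[random_genome() for _ in range(COHORT_SIZE)] for _ in range(N_COHORTS)]
for gen in range(GENERATIONS):
    battlefield = [0.5 * (gen / GENERATIONS)] * GENOME_LEN  # slowly shifting conditions
    cohorts = [next_generation(c, battlefield) for c in cohorts]

best = max((max(c, key=lambda g: fitness(g, battlefield)) for c in cohorts),
           key=lambda g: fitness(g, battlefield))
print("best genome after evolution:", [round(g, 2) for g in best])
```

Because each cohort evolves in isolation, chance differences in which robots reproduce send the cohorts down different trajectories, which is exactly the diversification that principle 5 counts on.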
That’s the offensive view of military Evolvabots. What about defense? How would you stop an army, navy, or air force of evolving robots? Keep in mind that even if it’s your own militia of Evolvabots, you will need ways to constrain and control them too. This is starting to sound like nearly every story and movie about robots, in which they rise up and break the shackles of their creators to take over the world. Although we aren’t making a movie, let’s run with the plot line anyway. To avoid military defeat or robotic overthrow, here’s what you do:
* Limit initial production prior to battle. Control the numbers and types of robots. Constrain raw materials. Limit, reduce, or eliminate energy sources. If the population is yours, you may want to design in hard production or run-time limits to prevent the enemy from co-opting the group.
* Limit reproduction on the battlefield. Because reproduction is key to the evolutionary process and usually involves some vulnerable moments, like finding mates and creating offspring, you should target these situations. Also, target the machine-making machines, because they need to be on the battlefield to keep generation times short. Limits to production (see item #1, above) can also be employed on the battlefield by cutting supply lines.
* Limit repair on the battlefield. Injury provides another vulnerable situation. While robots are repairing themselves, their function will be impaired, so capture low-performance agents preferentially. If the injured robots are being repaired by other agents, target the repair teams.
* Evolve predatory robots. If you or the enemy employ Pell’s Principle, you’ll need to be prepared to capture or destroy swarms. For starters, you’ll need to give your evolving predators the capacity and capability of filter feeders like baleen whales. Consider behavioral adaptation first in your predators, because the shorter generation time of the prey will limit opportunities for hardware evolution in the predators.
* Make complicated robots. That’s right. Want to control your own robots? Make them complicated. Because complicated usually means expensive, you are likely to have only a few of them. You will also hesitate to send them into harm’s way, as Chuck was predicting. Furthermore, complicated robots will never take over because the laws of probability virtually guarantee their failure. If every component has, let’s say, a 99 percent chance of not failing on a given mission, that sounds pretty good, right? But what if you have two such components in your robot? By the law of independent probabilities, we take the product of the two: 0.99 x 0.99 ≈ 0.98. Not bad. A 98 percent chance of the system, composed of those two components, not failing. Now give your system two thousand components, not unreasonable for some of the more sophisticated fish robots I’ve seen. That’s 0.99 raised to the 2,000th power, which is about 0.000000002, roughly two chances in a billion. You’ve got no chance; your robot will fail! The way we keep complicated machines like airliners and space shuttles in business is by building components with lower failure rates (0.99999), engineering redundancy into the machine’s critical systems, and inspecting and replacing parts before they fail. Bottom line: to ensure control of your Evolvabots, make them out of many crappy components.
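To make the arithmetic concrete, here is a minimal sketch of the reliability calculation, assuming independent component failures. The component counts and reliabilities are the illustrative numbers from the paragraph above, and the redundant_pair function is a hypothetical helper showing the redundancy trick mentioned for airliners and space shuttles.

```python
# Series-system reliability: every component must survive, so reliabilities multiply.
# Assumes independent failures; counts and per-component reliabilities are illustrative.

def mission_reliability(n_components: int, p_component: float) -> float:
    """Probability that all n independent components survive the mission."""
    return p_component ** n_components

def redundant_pair(p_component: float) -> float:
    """Reliability of two identical components in parallel: the system
    works if at least one of the pair survives (the redundancy trick)."""
    return 1.0 - (1.0 - p_component) ** 2

print(mission_reliability(2, 0.99))        # ~0.98: two decent components
print(mission_reliability(2000, 0.99))     # ~2e-9: the complicated robot is doomed
print(mission_reliability(2000, 0.99999))  # ~0.98: same robot, far better parts
print(mission_reliability(2000, redundant_pair(0.99)))  # ~0.82: duplicate every part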
COMMAND AND CONTROL
My Hollywood alarmism about evolving robots on the battlefield may have you thinking that all of this warfare stuff is just fantasy nonsense. Maybe it is. But let’s pretend that you are an admiral and you have to make that call: are military Evolvabots nonsense or good sense? Will some other military surprise us with Evolvabots in battle? This matters because you, as admiral, need to make practical decisions that have long-term consequences. Do you put your limited resources into an offensive Evolvabot development project? Or do you put resources into Evolvabot defensive countermeasures? Keep in mind that if you do use resources on Evolvabots, you have to cut the budget of other tactical systems. How can you be sure that Evolvabots will ever be a serious risk and worth the cost of development or countermeasures?
You can take DARPA’s tack and examine feasibility. Presented with the idea of commanding a fleet of evolving robotic fish, for example, you might want to assess one of the most important aspects of any battle: communication. If no one can figure out how to communicate with a swarm of underwater robots and adjust plans as the battle commences, then you probably don’t have much to worry about.
Communication before and during battle is paramount for the simple reason that, in the words of Helmuth von Moltke the Elder, chief of staff of the Prussian Army for thirty years, “No plan survives contact with the enemy.” Any battlefield is chaotic swarm intelligence in action. For a battle plan to adapt, each agent has to know the purpose of the mission, understand their part in it, have the ability to communicate on the battlefield to update their tactical knowledge of the enemy, coordinate with other agents and adjacent units about their positions and disposition, and make decisions quickly as information deteriorates and change accelerates.
The first element of communication and decision making on the battlefield starts prior to engagement, and it’s called the commander’s intent:
The commander’s intent describes the desired endstate. It is a concise statement of the purpose of the operation and must be understood two levels below the level of the issuing commander. It must clearly state the purpose of the mission. It is the single unifying focus for all subordinate elements. It is not a summary of the concept of the operation. Its purpose is to focus subordinates on what has to be accomplished in order to achieve success, even when the plan and concept no longer apply, and to discipline their efforts toward that end. [213] US Army Field Manual (FM) 100-5, Operations (Washington, DC: Government Printing Office [GPO], 1993), 6.