The Normative Knowledge source experiments with the extension of norms. As the system progresses, clear standards emerge that point toward likely optimal solutions. If a new, better solution is found that falls outside those standards, the knowledge source readjusts its constraints to encompass it.
In the CAT System, Normative Knowledge is represented by restricted ranges within the available ranges of the inputs. These ranges are adjusted over successive steps, with the less optimal subsections at the edges of the constraints discarded; the intersection of these ranges is where all of the predictions from the Normative Knowledge source will be found. In ConesWorld, this can be visualized as two intersecting rectangular subsections of the topography, each subsection representing a single variable of the domain, within which agents will be placed.
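The following is a minimal sketch of how such a restricted range for a single variable might be maintained; the class name, the edge-scoring rule, and the sampling method are illustrative assumptions rather than the CAT System's own code.

```python
import random

class NormativeRange:
    """One restricted range for a single input variable (maximization assumed)."""

    def __init__(self, lower, upper):
        self.lower = lower                    # current restricted lower bound
        self.upper = upper                    # current restricted upper bound
        self.lower_score = float("-inf")      # fitness credited to the lower edge
        self.upper_score = float("-inf")      # fitness credited to the upper edge

    def update(self, value, fitness):
        # Widen the range whenever a better solution lies outside it, and
        # tighten an edge when a better score is found nearer the interior,
        # discarding the less optimal subsection at that edge.
        if value <= self.lower or fitness > self.lower_score:
            self.lower, self.lower_score = value, fitness
        if value >= self.upper or fitness > self.upper_score:
            self.upper, self.upper_score = value, fitness

    def sample(self):
        # An agent influenced by Normative Knowledge is placed inside the
        # current restricted range; across variables, predictions fall in
        # the intersection of these ranges.
        return random.uniform(min(self.lower, self.upper),
                              max(self.lower, self.upper))
```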
The Domain Knowledge is reflective of the world in which the problem itself exists. For example, a stock market program would be aware of seasonal trends, a system that works to determine optimal speed around a track would need to understand feasible speed around a curve, and a system that analyzes ConesWorld would need to understand the nature of the sloping topography. It examines the relationship between the objects that exist in a problem domain and how their interactions can lead to solutions.
In the CAT System, the Domain Knowledge of which the system is aware is the nature of a cone. Using a collection of agents and the slope at each agent's location, the knowledge source attempts to calculate where these different slopes place a possible peak to be sought and a valley to be avoided. The more widely spread the agents are, the less accurate the prediction; the more tightly packed the agents, the more likely they are to become trapped in a suboptimal local maximum.
To move the agents, the knowledge source influences them to advance gradually by a set step size, either in the direction indicated by the slope at each agent's current location or along the direction suggested by the group as a whole, based on how many agents' slope directions point toward a common point. This is repeated for each dimension in the problem domain, with a small element of randomization added to the step size.
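A minimal sketch of this slope-following step is given below; the step size, the noise level, and the function name are assumptions made for illustration.

```python
import random

def domain_step(position, local_slope, step_size=0.5, noise=0.1):
    """Move an agent one step uphill along the slope at its location.

    position:    list of coordinates, one per problem dimension
    local_slope: list of slope components at that position (same length)
    """
    new_position = []
    for coord, slope in zip(position, local_slope):
        # Step toward the direction the local slope indicates.
        direction = 1.0 if slope > 0 else -1.0 if slope < 0 else 0.0
        # Small randomization of the step size, repeated per dimension.
        jitter = random.uniform(-noise, noise)
        new_position.append(coord + direction * (step_size + jitter))
    return new_position
```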
When the agents are spread out, the Domain Knowledge acts in an explorative manner through the combined efforts of all of its agents. When they converge on a given point, it acts in an exploitative manner and agents can even become trapped if they all converge on a suboptimal point with a slope of zero at the peak of a cone.
If this is a suboptimal point, then the actions of another knowledge source can rescue the trapped agents. When the rank of a knowledge source drops in comparison to the other knowledge sources, then agents previously under its influence can come under the influence of a different knowledge source. This will move them away from the suboptimal trap, at which point it is possible that Domain Knowledge will begin to exert influence again.
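The sketch below illustrates one way such rank-driven reassignment could work; the ranking input, the probability rule, and the function name are assumptions used only to show lower-ranked knowledge sources losing agents to higher-ranked ones.

```python
import random

def reassign_agents(agent_sources, ranks):
    """agent_sources: dict mapping agent id -> knowledge source name.
    ranks: dict mapping knowledge source name -> positive rank score
    (higher is better)."""
    names = list(ranks)
    total = sum(ranks.values())
    weights = [ranks[name] / total for name in names]
    for agent_id, current in agent_sources.items():
        # Agents following a poorly ranked source are likely to be drawn
        # away by a better-ranked one, freeing them from a suboptimal trap.
        chosen = random.choices(names, weights=weights, k=1)[0]
        if ranks[chosen] > ranks[current]:
            agent_sources[agent_id] = chosen
    return agent_sources
```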
For this reason, the Domain Knowledge acts as a buffer between the Exploitative and Explorative Knowledge sources. It adapts to the nature of the problem domain via exploration, focuses in via exploitation, can become trapped by suboptimal peaks, and then returns to its explorative stage when another knowledge source provides it with fresh knowledge. It both explores the possibilities of the system's governing rules and exploits them.
Historical (or Temporal) Knowledge exploits past knowledge to guide agents in future predictions. By analyzing successful agents of the past and making cautious refinements to their variables, it is possible to predict a new optimal solution near a high-ranking solution of the past.
To achieve this in the CAT System, a population of successful agents is recorded, along with a drift variable. The drift is the tolerance within which an agent must deviate from the recorded past agent for the attempt to be deemed worthwhile. Even though this knowledge source exploits past successes, its drift ensures that it will not simply repeat the exact scenario of a past success for any given agent.
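A minimal sketch of this drift-based variation is shown below; the record structure and the uniform drift distribution are illustrative assumptions.

```python
import random

def historical_prediction(history, drift):
    """history: list of recorded positions (lists of coordinates) of past
    successful agents.
    drift: maximum amount each variable may deviate from the recorded value."""
    past = random.choice(history)
    # Vary every variable within the drift tolerance so the prediction is
    # near, but never identical to, a past success.
    return [coord + random.uniform(-drift, drift) for coord in past]
```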
Because the CAT System also supports dynamic updates to the landscape, the Historical Knowledge source is able to update itself in this situation as well. In a typical run, the historical record is kept and pruned as some past achievements are outpaced by a significant degree. The drift of the Historical Knowledge source allows it to predict a newly created optimum on the dynamic landscape when the new optimal location is a slight variation of a previous historical scenario.
The most exploitative of all of the knowledge sources, Situational Knowledge finds an agent with a highly scored combination of variables and uses it as an exemplar for all other agents to follow in all subsequent scenarios.
Situational Knowledge is highly vulnerable to false positives, and in a large enough landscape it can become effectively impossible for it to detect other possible maxima on its own. However, it is excellent at refining local solutions, often varying a variable by only the smallest feasible change in its search for an improvement to the best known result.
In the CAT System, this is achieved by a tightly clustered local search of areas that deviate only slightly from the position of the best-known agent. While Situational Knowledge focuses on the position of the agent with the highest fitness, it uses a voting system that weighs competing agents against their scores. For example, if a newly discovered agent lies a tremendous distance from the position the Situational Knowledge source is currently focused on, and the increase in performance is minimal, the Situational Knowledge may vote to remain where it is and examine the immediate area around its currently selected agent more closely.
This influence function is carried out using a roulette wheel whose wedges are composed of subsets of elite agents. With regard to the distances between agents described above, agents near possible optimal solutions are subdivided into elite groups, and the roulette wheel's wedges are built from the best agents of the best subsets. Each agent then spins the wheel and moves to the location of the selected elite agent, plus a randomized positional offset of the smallest variation possible in the scenario.
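The following sketch shows one way the roulette-wheel move could be carried out; grouping the wedges by the best agent of each elite subset and weighting wedges by non-negative fitness are assumptions made for illustration.

```python
import random

def situational_move(elite_agents, smallest_variation):
    """elite_agents: list of (position, fitness) pairs, one per elite subset,
    with non-negative fitness values.
    smallest_variation: finest feasible change for any single variable."""
    total = sum(fitness for _, fitness in elite_agents)
    # Spin the wheel: each wedge is proportional to its elite agent's fitness.
    spin = random.uniform(0, total)
    cumulative = 0.0
    chosen = elite_agents[-1][0]
    for position, fitness in elite_agents:
        cumulative += fitness
        if spin <= cumulative:
            chosen = position
            break
    # Move to the chosen location plus the smallest feasible random variation.
    return [coord + random.uniform(-smallest_variation, smallest_variation)
            for coord in chosen]
```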
Each problem available within the CAT System is contained within a Fitness Function, which not only dictates the domain of the problem but also the means by which it can possibly be updated, the metrics by which it can be judged, and any possible cutoff that could be reached by discovering a sought‐after goal.
First, basic initialization is performed, in which the domain of the problem is established: any necessary variables or placeholders, predefined boundaries or caps, or even a maximum possible fitness that the system is to work towards. There are also two functions, CalculateFitness and CalculateFunctionValue. In some cases, calculating the function itself directly yields the fitness of a given fitness function; in other cases, the fitness is separate from the function value calculated by the fitness function. Both Fitness and FunctionValue are calculated by being passed a point in N-dimensional space containing the given values for the variables that can alter the results of the system.
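A minimal sketch of this interface is given below; the base class, its constructor parameters, and the toy sphere example are illustrative assumptions, not the CAT System's implementation, although the CalculateFitness and CalculateFunctionValue names follow the text above.

```python
class FitnessFunction:
    """Sketch of a problem container: domain setup plus the two calculations."""

    def __init__(self, dimensions, bounds, max_fitness=None):
        self.dimensions = dimensions      # size of the N-dimensional input point
        self.bounds = bounds              # (lower, upper) cap applied to each variable
        self.max_fitness = max_fitness    # optional cutoff/goal fitness, if any

    def CalculateFunctionValue(self, point):
        raise NotImplementedError

    def CalculateFitness(self, point):
        # In the simplest case the fitness is just the function value itself;
        # other problems may score the function value separately.
        return self.CalculateFunctionValue(point)


class SphereFunction(FitnessFunction):
    """Toy example where fitness and function value coincide."""

    def CalculateFunctionValue(self, point):
        return -sum(x * x for x in point)   # single peak of 0 at the origin
```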
If a fitness function has an input of fewer than three dimensions (that is, one- or two-dimensional inputs), then the system can display it in the manner used by the ConesWorld Fitness Function, where the two components of the input represent the X and Z coordinates and the fitness at that location forms the Y coordinate (Figure 2.3).