In a visual pattern recognition system, a two-dimensional array of numbers representing the pixels of an image; or
In an auditory (e.g., speech) recognition system, a two-dimensional array of numbers representing a sound, in which the first dimension represents parameters of the sound (e.g., frequency components) and the second dimension represents different points in time; or
In an arbitrary pattern recognition system, an n-dimensional array of numbers representing the input pattern.
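To make these encodings concrete, here is a minimal Python sketch. The array sizes (a 28×28 image, 64 frequency bins over 100 time steps) are illustrative assumptions, and random values stand in for real data:

```python
import random

# Visual input: a 2-D array of numbers, one per pixel (28x28 is assumed).
image_input = [[random.random() for _ in range(28)] for _ in range(28)]

# Auditory input: rows = frequency components, columns = points in time
# (64 frequency bins over 100 time steps, both assumed sizes).
sound_input = [[random.random() for _ in range(100)] for _ in range(64)]

# Arbitrary pattern: an n-dimensional array, here flattened to a plain list.
pattern_input = [pixel for row in image_input for pixel in row]
```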
Defining the Topology
To set up the neural net, define the architecture of each neuron, which consists of:
Multiple inputs, each of which is “connected” to either the output of another neuron or one of the input numbers.
Generally, a single output, which is connected to either the input of another neuron (which is usually in a higher layer) or the final output.
Set Up the First Layer of Neurons
Create N₀ neurons in the first layer. For each of these neurons, “connect” each of the multiple inputs of the neuron to “points” (i.e., numbers) in the problem input. These connections can be determined randomly or using an evolutionary algorithm (see below).
Assign an initial “synaptic strength” to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).
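A minimal sketch of this first-layer setup, assuming random wiring and random initial strengths; N0, FAN_IN, and INPUT_SIZE are illustrative sizes, not prescribed values:

```python
import random
random.seed(0)

N0 = 16            # number of first-layer neurons (assumed)
FAN_IN = 8         # number of inputs per neuron (assumed)
INPUT_SIZE = 784   # e.g., a flattened 28x28 image (assumed)

# "Connect" each input of each first-layer neuron to a random point
# (i.e., number) in the problem input.
connections_0 = [[random.randrange(INPUT_SIZE) for _ in range(FAN_IN)]
                 for _ in range(N0)]

# Assign an initial synaptic strength to each connection created;
# random values are one of the options named above.
weights_0 = [[random.uniform(-0.1, 0.1) for _ in range(FAN_IN)]
             for _ in range(N0)]
```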
Set Up the Additional Layers of Neurons
Set up a total of M layers of neurons. For each layer, set up the neurons in that layer.
For layer i:
Create Nᵢ neurons in layer i. For each of these neurons, “connect” each of the multiple inputs of the neuron to the outputs of the neurons in layer i−1 (see variations below).
Assign an initial “synaptic strength” to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).
The outputs of the neurons in layer M are the outputs of the neural net (see variations below).
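Extending the sketch to all M layers, again under assumed sizes and random wiring; the fixed firing threshold of 0.5 per neuron is likewise an assumption:

```python
import random
random.seed(0)

LAYER_SIZES = [16, 12, 4]   # N_i for each layer; len(LAYER_SIZES) = M (assumed)
FAN_IN = 8                  # inputs per neuron (assumed)
INPUT_SIZE = 784            # size of the problem input (assumed)

connections, weights, thresholds = [], [], []
prev_size = INPUT_SIZE      # the first layer connects to the problem input
for n_i in LAYER_SIZES:
    # Each neuron's inputs connect to outputs of the layer below (layer i-1).
    connections.append([[random.randrange(prev_size) for _ in range(FAN_IN)]
                        for _ in range(n_i)])
    weights.append([[random.uniform(-0.1, 0.1) for _ in range(FAN_IN)]
                    for _ in range(n_i)])
    thresholds.append([0.5 for _ in range(n_i)])  # one firing threshold each
    prev_size = n_i
# The outputs of the last (topmost) layer are the outputs of the net.
```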
The Recognition Trials
How Each Neuron Works
Once the neuron is set up, it does the following for each recognition trial:
Each weighted input to the neuron is computed by multiplying the output of the other neuron (or initial input) to which this input is connected by the synaptic strength of that connection.
All of these weighted inputs to the neuron are summed.
If this sum is greater than the firing threshold of this neuron, then this neuron is considered to fire and its output is 1. Otherwise, its output is 0 (see variations below).
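The behavior of a single neuron is small enough to state directly in code. A minimal sketch; the example values and threshold are assumptions:

```python
def neuron_output(connected_outputs, strengths, threshold):
    # Weighted input = the connected neuron's output (or initial input)
    # multiplied by the synaptic strength of that connection; sum them all.
    total = sum(x * w for x, w in zip(connected_outputs, strengths))
    # Fire (output 1) if the sum exceeds the firing threshold; otherwise 0.
    return 1 if total > threshold else 0

# Example with three connections: 1*0.4 + 0*0.9 + 1*0.3 = 0.7 > 0.5, so it fires.
print(neuron_output([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.5))  # prints 1
```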
Do the Following for Each Recognition Trial
For each layer, from layer 0 to layer M:
For each neuron in the layer:
Sum its weighted inputs (each weighted input = the output of the other neuron [or initial input] to which this input is connected, multiplied by the synaptic strength of that connection).
If this sum of weighted inputs is greater than the firing threshold for this neuron, set the output of this neuron to 1; otherwise, set it to 0.
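Putting the pieces together, here is a sketch of one synchronous recognition trial over the structures built above; the data layout (parallel nested lists of connections, weights, and thresholds) carries over from the earlier sketches and is itself an assumption:

```python
def recognition_trial(problem_input, connections, weights, thresholds):
    """One recognition trial: compute each layer in turn, bottom to top."""
    prev_outputs = problem_input          # the first layer reads the problem input
    for layer_conns, layer_wts, layer_thr in zip(connections, weights,
                                                 thresholds):
        outputs = []
        for conns, wts, thr in zip(layer_conns, layer_wts, layer_thr):
            # Sum this neuron's weighted inputs...
            total = sum(prev_outputs[j] * w for j, w in zip(conns, wts))
            # ...and fire (1) only if the sum exceeds its threshold.
            outputs.append(1 if total > thr else 0)
        prev_outputs = outputs            # these outputs feed the next layer up
    return prev_outputs                   # the top layer's outputs are the net's output
```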
To Train the Neural Net
Run repeated recognition trials on sample problems.
After each trial, adjust the synaptic strengths of all the interneuronal connections to improve the performance of the neural net on this trial (see the discussion below on how to do this).
Continue this training until the accuracy rate of the neural net is no longer improving (i.e., reaches an asymptote).
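A sketch of this training loop, building on the recognition_trial function above. The bundling of the net into a tuple, the `adjust` callback, and the epoch cap are all assumptions for illustration; the actual strength-update rule is a design decision discussed below:

```python
def train(net, samples, adjust, max_epochs=100):
    # `net` bundles (connections, weights, thresholds) as built above;
    # `samples` is a list of (problem_input, correct_output) pairs, where
    # correct_output is a list of 0s and 1s matching the top layer;
    # `adjust` is the designer's chosen strength-update rule (see below).
    best_accuracy = -1.0
    for _ in range(max_epochs):
        correct = 0
        for problem_input, correct_output in samples:
            output = recognition_trial(problem_input, *net)
            if output == correct_output:
                correct += 1
            # After each trial, adjust the synaptic strengths.
            adjust(net, problem_input, correct_output)
        accuracy = correct / len(samples)
        if accuracy <= best_accuracy:   # accuracy is no longer improving:
            break                       # training has reached its asymptote
        best_accuracy = accuracy
```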
Key Design Decisions
In the simple schema above, the designer of this neural net algorithm needs to determine at the outset:
What the input numbers represent.
The number of layers of neurons.
The number of neurons in each layer. (Each layer does not necessarily need to have the same number of neurons.)
The number of inputs to each neuron in each layer. The number of inputs (i.e., interneuronal connections) can also vary from neuron to neuron and from layer to layer.
The actual “wiring” (i.e., the connections). For each neuron in each layer, this consists of a list of other neurons, the outputs of which constitute the inputs to this neuron. This represents a key design area. There are a number of possible ways to do this:
(1) Wire the neural net randomly; or
(2) Use an evolutionary algorithm (see below) to determine an optimal wiring; or
(3) Use the system designer’s best judgment in determining the wiring.
The initial synaptic strengths (i.e., weights) of each connection. There are a number of possible ways to do this:
(1) Set the synaptic strengths to the same value; or
(2) Set the synaptic strengths to different random values; or
(3) Use an evolutionary algorithm to determine an optimal set of initial values; or
(4) Use the system designer’s best judgment in determining the initial values.
The firing threshold of each neuron.
Determine the output. The output can be:
(1) the outputs of the layer M neurons; or
(2) the output of a single output neuron, the inputs of which are the outputs of the neurons in layer M; or
(3) a function of (e.g., a sum of) the outputs of the neurons in layer M; or
(4) another function of neuron outputs in multiple layers.
Determine how the synaptic strengths of all the connections are adjusted during the training of this neural net. This is a key design decision and is the subject of a great deal of research and discussion. There are a number of possible ways to do this:
(1) For each recognition trial, increment or decrement each synaptic strength by a (generally small) fixed amount so that the neural net’s output more closely matches the correct answer. One way to do this is to try both incrementing and decrementing and see which has the more desirable effect. This can be time-consuming, so other methods exist for making local decisions on whether to increment or decrement each synaptic strength. (A sketch of this approach appears after this list.)
(2) Other statistical methods exist for modifying the synaptic strengths after each recognition trial so that the performance of the neural net on that trial more closely matches the correct answer.
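Here is a sketch of option (1), the try-both-directions approach, using the recognition_trial function and data layout from the earlier sketches. The squared-error measure and the step size are assumptions; as noted above, re-running a trial for every weight is time-consuming:

```python
def adjust_by_perturbation(net, problem_input, correct_output, step=0.01):
    connections, weights, thresholds = net

    def error():  # how far the net's output is from the correct answer
        output = recognition_trial(problem_input, connections,
                                   weights, thresholds)
        return sum((o - t) ** 2 for o, t in zip(output, correct_output))

    for layer in weights:
        for neuron_weights in layer:
            for k in range(len(neuron_weights)):
                base = neuron_weights[k]
                best_err, best_w = error(), base
                for delta in (step, -step):   # try both incrementing and
                    neuron_weights[k] = base + delta   # decrementing...
                    e = error()
                    if e < best_err:          # ...keep the more desirable effect
                        best_err, best_w = e, base + delta
                neuron_weights[k] = best_w
```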
Note that neural net training will work even if the answers to the training trials are not all correct. This allows using real-world training data that may have an inherent error rate. One key to the success of a neural net–based recognition system is the amount of data used for training. Usually a very substantial amount is needed to obtain satisfactory results. As with human students, the amount of time that a neural net spends learning its lessons is a key factor in its performance.
Variations
Many variations of the above are feasible. For example:
There are different ways of determining the topology. In particular, the interneuronal wiring can be set either randomly or using an evolutionary algorithm.
There are different ways of setting the initial synaptic strengths.
The inputs to the neurons in layer i do not necessarily need to come from the outputs of the neurons in layer i−1. Alternatively, the inputs to the neurons in each layer can come from any lower layer, or from any layer at all.
There are different ways to determine the final output.
The method described above results in an “all or nothing” (1 or 0) firing called a nonlinearity. There are other nonlinear functions that can be used. Commonly a function is used that goes from 0 to 1 rapidly but smoothly rather than as an abrupt step. Also, the outputs can be numbers other than 0 and 1. (A sketch of such a function appears after this list.)
The different methods for adjusting the synaptic strengths during training represent key design decisions.
The above schema describes a “synchronous” neural net, in which each recognition trial proceeds by computing the outputs of each layer in turn, from layer 0 through layer M. In a true parallel system, in which each neuron operates independently of the others, the neurons can operate “asynchronously” (i.e., independently). In an asynchronous approach, each neuron constantly scans its inputs and fires whenever the sum of its weighted inputs exceeds its threshold (or whatever its output function specifies).
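Returning to the smoother nonlinearity mentioned above: a sigmoid is one common choice. A minimal sketch, where the steepness parameter (how rapidly the output rises as the weighted sum crosses the threshold) is an assumption:

```python
import math

def smooth_output(total, threshold, steepness=4.0):
    # A sigmoid replacing all-or-nothing firing: the output rises from 0
    # to 1 rapidly but smoothly as the weighted sum crosses the threshold.
    return 1.0 / (1.0 + math.exp(-steepness * (total - threshold)))

# Near the threshold the output changes rapidly; far from it, it saturates.
for s in (0.0, 0.5, 1.0):
    print(round(smooth_output(s, threshold=0.5), 3))  # 0.119, 0.5, 0.881
```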