Neural Networks Module

The module below can be covered in a three-week period in an Introduction to AI course.


This module on Neural Networks was written by Ingrid Russell of the University of Hartford. It is printed with permission from the Collegiate Microcomputer Journal.

If you have any comments or suggestions, please send email to irussell@mail.hartford.edu.


Definition of a Neural Network

Neural networks appeal strongly to many researchers because their structure resembles that of the brain, a characteristic not shared by more traditional systems.

By analogy to the brain, an entity made up of interconnected neurons, neural networks are made up of interconnected processing elements called units, which respond in parallel to the set of input signals given to each. The unit is the equivalent of its brain counterpart, the neuron.

A neural network consists of four main parts:

1. Processing units {uj}, where each uj has a certain activation level aj(t) at any point in time t.

2. Weighted interconnections between the various processing units, which determine how the activation of one unit leads to input for another unit.

3. An activation rule which acts on the set of input signals at a unit to produce a new output signal, or activation.

4. Optionally, a learning rule that specifies how to adjust the weights for a given input/output pair.

A processing unit uj takes a number of input signals, say a1j, a2j, ..., anj, with corresponding weights w1j, w2j, ..., wnj, respectively. The net input to uj is given by:

netj = SUM over i of (wij * aij)

The new state of activation of uj is given by:

aj(t+1) = F(aj(t),netj),

where F is the activation rule and aj(t) is the activation of uj at time t. The output signal oj of unit uj is a function of the new state of activation of uj:

oj(t+1) = fj(aj(t+1)).
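The computation above can be sketched in a few lines of Python. The particular choices of F and fj below (a sigmoid that ignores the previous activation, and the identity for fj) are illustrative assumptions; the module deliberately leaves the activation rule abstract.

```python
import math

def net_input(weights, inputs):
    # netj = SUM over i of (wij * aij): weighted sum of the input signals.
    return sum(w * a for w, a in zip(weights, inputs))

def activation(prev_activation, net):
    # F(aj(t), netj): the activation rule.  As an illustrative choice
    # (not fixed by the text), ignore the previous activation and
    # squash the net input with a sigmoid.
    return 1.0 / (1.0 + math.exp(-net))

def output(new_activation):
    # oj(t+1) = fj(aj(t+1)); here fj is taken to be the identity.
    return new_activation

# One unit with three weighted input signals:
weights = [0.5, -0.3, 0.8]
inputs = [1.0, 0.5, 0.25]
net = net_input(weights, inputs)   # 0.5 - 0.15 + 0.2 = 0.55
a_new = activation(0.0, net)
o = output(a_new)
```

In a full network the output oj would in turn serve, through a weighted interconnection, as one of the input signals to other units.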

One of the most important features of a neural network is its ability to adapt to new environments. Therefore, learning algorithms are critical to the study of neural networks.
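The module does not commit to a particular learning rule, but one common example is the Widrow-Hoff (delta) rule, which adjusts each weight in proportion to the output error and the corresponding input signal. A minimal sketch, with the learning rate lr chosen arbitrarily:

```python
def delta_rule_update(weights, inputs, target, actual, lr=0.1):
    # Delta rule, shown as one illustrative learning rule (the text
    # leaves the rule unspecified):
    #   wij <- wij + lr * (target - actual) * aij
    error = target - actual
    return [w + lr * error * a for w, a in zip(weights, inputs)]

# Nudge the weights toward producing the target for this input/output pair:
w = delta_rule_update([0.5, -0.3], [1.0, 0.5], target=1.0, actual=0.2)
```

Repeating such updates over many input/output pairs is what lets the network adapt to a new environment.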





Copyright 1996 by Ingrid Russell.