activation

The time-varying value that is the output of a neuron.

backpropagation (generalised deltarule)

A name given to the process by which a multilayer perceptron neural network is "trained" to produce good responses to a set of input patterns. In light of this, the multilayer perceptron network is sometimes called a "backprop" network.

bias

A quantity added to the weighted sum of inputs into a neuron to form the neuron's net input. Intuitively, the bias (or threshold) sets the level that the incoming neural activations must exceed in order for the neuron to fire.

connectivity

The amount of interaction in a system, the structure of the weights in a neural network, or the relative number of edges in a graph.

pattern recognition

The act of identifying patterns within previously learned data. This can be carried out by a neural network even in the presence of noise or when some data is missing.

epoch

One complete presentation of the training set to the network during training.

input layer

Neurons whose inputs are fed from the outside world.

learning algorithms (supervised, unsupervised)

An adaptation process whereby synapses, the weights of a neural network, classifier strengths, or some other set of adjustable parameters is automatically modified so that some objective is more readily achieved. The backpropagation and bucket brigade algorithms are two types of learning procedures.

learning rule

The algorithm used for modifying the connection strengths, or weights, in response to training patterns while training is being carried out.

layer

A group of neurons that have a specific function and are processed as a whole. The most common example is in a feedforward network that has an input layer, an output layer and one or more hidden layers.

Monte Carlo method

The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing statistical sampling experiments on a computer.
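A classic illustration is estimating π by sampling random points in the unit square and counting how many fall inside the quarter circle. The following sketch is illustrative (the function name, sample count, and seed are not part of this glossary):

```python
import random

def monte_carlo_pi(samples=100_000, seed=0):
    """Estimate pi by statistical sampling: the fraction of random
    points in the unit square that land inside the quarter circle
    approximates pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples
```

More samples give a better approximation; the error shrinks roughly as the inverse square root of the sample count.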

multilayer perceptron (MLP)

A type of feedforward neural network that is an extension of the perceptron in that it has at least one hidden layer of neurons. Layers are updated by starting at the inputs and ending with the outputs. Each neuron computes a weighted sum of the incoming signals, to yield a net input, and passes this value through its sigmoidal activation function to yield the neuron's activation value. Unlike the perceptron, an MLP can solve linearly inseparable problems.
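The layer-by-layer update described above (weighted sum, net input, sigmoidal activation) can be sketched in Python; the function names and the tiny 2-2-1 network in the test of the sketch are illustrative assumptions:

```python
import math

def sigmoid(x):
    """Sigmoidal activation function mapping net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """One layer's update: each neuron takes a weighted sum of the
    incoming signals plus its bias (the net input) and passes it
    through the sigmoid."""
    return [sigmoid(sum(w * a for w, a in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def mlp_forward(inputs, layers):
    """Update layers in order, starting at the inputs and ending
    with the outputs."""
    activations = inputs
    for weights, biases in layers:
        activations = layer_forward(activations, weights, biases)
    return activations
```

Each element of `layers` is a (weight matrix, bias vector) pair for one layer beyond the inputs.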

neural network (NN)

A network of neurons that are connected through synapses or weights. Each neuron performs a simple calculation that is a function of the activations of the neurons that are connected to it. Through feedback mechanisms and/or the nonlinear output response of neurons, the network as a whole is capable of performing extremely complicated tasks, including universal computation and universal approximation. Three different classes of neural networks are feedforward, feedback, and recurrent neural networks, which differ in the degree and type of connectivity that they possess.

neuron

A simple computational unit that performs a weighted sum on incoming signals, adds a threshold or bias term to this value to yield a net input, and maps this last value through an activation function to compute its own activation. Some neurons, such as those found in feedback or Hopfield networks, will retain a portion of their previous activation.
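The computation described above can be written out directly. This sketch uses a hard-threshold activation for illustration; the names are assumptions, not part of this glossary:

```python
def step(net):
    """Hard-threshold activation: fire (1) if net input is non-negative."""
    return 1 if net >= 0 else 0

def neuron(inputs, weights, bias, activation=step):
    """A weighted sum of the incoming signals plus the bias gives the
    net input, which is mapped through the activation function to
    produce the neuron's activation."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(net)
```

With weights [1, 1] and bias -1.5, this neuron computes the logical AND of two binary inputs.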

output neuron

A neuron within a neural network whose outputs are the result of the network.

perceptron

An artificial neural network capable of simple pattern recognition and classification tasks. In its simplest form it is composed of an input layer and an output layer, with signals passing forward directly from the input nodes to the output nodes. There are no hidden layers and no connections within a layer, and the network can only solve linearly separable problems.
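A minimal sketch of a perceptron and its learning rule, trained here on the (linearly separable) AND function; the names, learning rate, and epoch count are illustrative assumptions:

```python
def predict(weights, bias, inputs):
    """Perceptron output: 1 if the net input reaches the threshold."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if net >= 0 else 0

def train_perceptron(data, lr=0.1, epochs=20):
    """Perceptron learning rule: after each pattern, nudge the weights
    and bias in proportion to the error (target minus output)."""
    n = len(data[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in data:
            error = target - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias
```

Because AND is linearly separable, training converges within a few epochs; no perceptron can learn a linearly inseparable function such as XOR.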

sigmoid function

An S-shaped function that is often used as an activation function in a neural network.
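The most common choice is the logistic function, sigma(x) = 1 / (1 + e^(-x)), which maps any real number into the interval (0, 1). A minimal sketch:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: an S-shaped map from the reals into (0, 1),
    with sigmoid(0) = 0.5."""
    return 1.0 / (1.0 + math.exp(-x))
```

Its smoothness (and simple derivative, sigma'(x) = sigma(x)(1 - sigma(x))) is what makes it convenient for gradient-based training such as backpropagation.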

threshold

A quantity added to (or subtracted from) the weighted sum of inputs into a neuron, which forms the neuron's net input. Intuitively, the threshold (or bias) corresponds to the amount that the incoming neural activations must exceed in order for a neuron to fire.

training set

A neural network is trained using a training set. A training set comprises input patterns that describe the problem to be solved and, in supervised learning, the target output for each pattern. In some computing systems the training set is called the "facts" file.

weight

In a neural network, the strength of a synapse (or connection) between two neurons. Weights may be positive (excitatory) or negative (inhibitory). The thresholds of a neuron are also considered weights, since they undergo adaptation by a learning algorithm.
