Neural Networks - Glossary


Artificial neural networks:
Computers whose architecture is modeled on the brain. They contain idealized neurons, called nodes, connected together in a network. Two types of such network are considered in this module: the Hopfield network and the Perceptron network. The former can model the memory recall process in the brain; the latter can perform simple pattern recognition tasks.
Associative memory:
Also called `content-addressable' memory. This type of memory is not stored on any individual neuron but is a property of the whole network. A memory is recalled by presenting the network with part of that memory. This is very different from conventional computer memory, where a given memory (or piece of data) is assigned a unique address which is needed to recall it.
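A toy sketch of the addressing contrast only; in a real network the memory is stored across the connection weights rather than as a list of items, and the example strings are purely illustrative:

    memories = ["the cat sat on the mat", "a stitch in time saves nine"]

    # Conventional memory: recall needs the exact address (here, the list index).
    print(memories[1])

    # Content-addressable memory: a fragment of the content retrieves the whole item.
    def recall(fragment):
        return next(m for m in memories if fragment in m)

    print(recall("stitch"))
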
Back-propagation:
A name given to the process by which the Perceptron neural network is `trained' to produce good responses to a set of input patterns. In light of this the Perceptron network is sometimes called a `back-prop' network.
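A minimal sketch of the training idea, assuming a small fully connected network with sigmoid node activations learning the XOR patterns; the layer sizes, learning rate, and epoch count are illustrative choices rather than values from this module:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # input patterns
    T = np.array([[0], [1], [1], [0]], dtype=float)               # target outputs (XOR)
    Xb = np.hstack([X, np.ones((4, 1))])                          # bias input fixed at 1

    W1 = rng.normal(size=(4, 3))   # input layer (2 nodes + bias) -> 4 hidden nodes
    W2 = rng.normal(size=(1, 5))   # hidden layer (4 nodes + bias) -> 1 output node
    lr = 0.5

    for _ in range(20000):
        # Forward pass: signals flow from the input layer to the hidden and output layers.
        H = sigmoid(Xb @ W1.T)
        Hb = np.hstack([H, np.ones((4, 1))])
        Y = sigmoid(Hb @ W2.T)
        # Backward pass: the output error is propagated back to adjust the weights.
        err_out = (Y - T) * Y * (1 - Y)
        err_hid = (err_out @ W2[:, :4]) * H * (1 - H)
        W2 -= lr * err_out.T @ Hb
        W1 -= lr * err_hid.T @ Xb

    print(Y.round(2))   # after training, typically close to the targets 0, 1, 1, 0
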
CPU:
Central processing unit. The `heart' of a traditional computer. The CPU coordinates all activity in the machine by following a precise set of instructions - the software.
Encode network:
A Perceptron network designed to illustrate that the hidden-layer nodes play a crucial role in allowing the network to learn about special features in the input patterns. Once it has learnt the `generalized' features of the training pattern set, it can respond usefully in new situations.
Generalization:
A measure of how well a network can respond to new images on which it has not been trained but which are related in some way to the training patterns. An ability to generalize is crucial to the decision making ability of the network.
Hopfield network:
A particular example of an artificial neural network capable of storing and recalling memories or patterns. All nodes in the network feed signals to all others.
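A minimal sketch, assuming +1/-1 node states, Hebbian (outer-product) weight setting, and synchronous updates of all nodes; the eight-node pattern is an arbitrary illustration:

    import numpy as np

    def train_hopfield(patterns):
        # Build the symmetric weight matrix from +1/-1 patterns (no self-connections).
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W / n

    def recall(W, probe, steps=10):
        # Every node feeds signals to every other; update all states until they settle.
        state = probe.copy()
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)
        return state

    stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])   # one memory of 8 node states
    W = train_hopfield(stored)
    noisy = stored[0].copy()
    noisy[:2] *= -1                                     # corrupt part of the memory
    print(recall(W, noisy))                             # settles back to the stored pattern
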
NetTalk:
A Perceptron-type network capable of reading aloud English text with the aid of a voice synthesizer.
Node state:
A node can be excited into firing signals at different levels of activity. The state of a node describes how actively it is firing.
Pattern recognition:
The ability to recognize a given sub-pattern within a much larger pattern. Alternatively, a machine capable of pattern recognition can be trained to extract certain features from a set of input patterns.
Perceptron:
An artificial neural network capable of simple pattern recognition and classification tasks. It is composed of three layers where signals only pass forward from nodes in the input layer to nodes in the hidden layer and finally out to the output layer. There are no connections within a layer.
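A minimal sketch of the forward flow of signals, assuming sigmoid node activations; the layer sizes (4 input, 3 hidden, 2 output nodes) and random weights are arbitrary illustrations:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x, W_hidden, W_output):
        # Signals pass only forward: input layer -> hidden layer -> output layer.
        hidden = sigmoid(W_hidden @ x)
        return sigmoid(W_output @ hidden)

    rng = np.random.default_rng(0)
    x = rng.random(4)                    # states of the 4 input nodes
    W_hidden = rng.normal(size=(3, 4))   # connections from input to hidden layer
    W_output = rng.normal(size=(2, 3))   # connections from hidden to output layer
    print(forward(x, W_hidden, W_output))
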
Self-organizing:
A network is called self-organizing if it is capable of changing its connections so as to produce useful responses for input patterns without the instruction of a smart teacher.

Brought to you by the Neural Transmitters behind The Mind and Machine Module.


 Copyright 1995 - 2013 Intelegen Inc. All rights reserved