Building a Human Computer

Of course, it is entirely possible that the processes involved in brain function are so complex as to make an effective understanding of them impossible in any practical sense. Then AI is rendered a practical impossibility. At the other end of the spectrum a few scientists hold that there is nothing all that special about consciousness and that any machine packed with enough intelligence will automatically acquire consciousness along the way.

Our investigations seem to indicate that existing neural networks exhibit many promising features. They do not require sophisticated rule sets to be programmed into them but function by a dynamic pattern-matching mechanism much closer, at least in spirit, to the firing activity found in the brain. But at present the models are somewhat stupid - they can only learn by a slow procedure which requires constant supervision from an external `teacher'. Their decision-making ability is strictly limited. While this has not stopped them becoming a very useful tool in many areas, it effectively prevents them from tackling more complicated problems and ultimately from acquiring any degree of true intelligence.

What is really needed is a way of allowing the internode connections to change with time, not according to some scheme determined by the external teacher, but in response to the node firings activated by input patterns. After all, this is essentially what happens in the brain - the neurons and their connections self-organize into a structure which, considered as a whole, is capable of very sophisticated functions. The network must select its own output patterns and connection strengths dynamically. Furthermore, the association between input and output must be useful - the network must be able to make decisions as a consequence of its firings. If it is to store memories, it must first be able to tell whether a new input is close to an old memory or genuinely new. If the latter, it must be able to store the new pattern without destroying the old. It must be able to focus only on certain types of pattern and screen out the rest in order to perform useful tasks. Ultimately, it must be able to make complex decisions by a succession of hierarchical pattern-association steps.
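The oldest scheme in this spirit is Hebb's rule - strengthen the connections between nodes that fire together - which lets the weights change in response to the firings themselves rather than to an external teacher. The sketch below is a minimal NumPy illustration of that idea, not anything proposed in this article; the network size, learning rate and normalization are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 input nodes feeding 3 nodes, with the connection
# strengths held as rows of a weight matrix (unit length to start).
weights = rng.uniform(size=(3, 4))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)

def hebbian_step(w, pattern, rate=0.1):
    """Strengthen connections between co-active nodes (Hebb's rule),
    then renormalize so the weights cannot grow without bound."""
    activity = w @ pattern                      # node firings driven by the input
    w = w + rate * np.outer(activity, pattern)  # co-activity strengthens links
    return w / np.linalg.norm(w, axis=1, keepdims=True)

pattern = np.array([1.0, 0.0, 1.0, 0.0])
initial_response = weights @ pattern
for _ in range(20):
    weights = hebbian_step(weights, pattern)
final_response = weights @ pattern  # every node now fires harder to this input
```

Repeated presentation makes every node respond more strongly to the pattern. Note that nothing here makes different nodes specialize in different patterns - that is what the competitive schemes discussed next add.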

Rather surprisingly, there are new neural network models (for example the Kohonen network and Stephen Grossberg's 1987 ART network) which attempt with some success to satisfy some of these criteria. These networks learn by a `competitive' process in which nodes on the hidden layers compete to represent the input image, in such a way that the final representation of the input pattern is localized on a single winning unit. The way this happens is that when an image is presented to the network, some node on the hidden layer will respond most strongly to the image. The connections to this node are then progressively strengthened so as to increase the node's response to this input pattern, whilst the connections of all the other nodes are adjusted to minimize their response. Thus a given type of image can be made to excite only one hidden layer neuron. A new image can then be made to activate another node, and so on. In this way different generic features of an input pattern can be handled by different hidden layer nodes. Furthermore, the hidden layer nodes can also have connections between each other, which can be arranged in such a way that nodes that are strongly connected within the layer respond to similar images. For example, after such a competitive learning process one hidden layer node might respond to ellipses of a certain size, whilst one of its `neighbors' (those to which it has a strong intra-layer connection) might respond to ellipses of the same size but rotated through some angle.
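The winner-take-all step at the heart of this can be sketched in a few lines. This is an illustrative toy of my own, not the actual Kohonen or ART algorithm - both add considerably more machinery (neighborhood functions, vigilance tests and so on) - and the layer sizes and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hidden layer: 3 nodes, 4 inputs, unit-length weight vectors.
weights = rng.uniform(size=(3, 4))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)

def competitive_step(w, pattern, rate=0.5):
    """Winner-take-all learning: the node responding most strongly
    wins, and only its connections move toward the input pattern."""
    winner = int(np.argmax(w @ pattern))
    w[winner] += rate * (pattern - w[winner])
    w[winner] /= np.linalg.norm(w[winner])
    return winner

# Present one normalized input pattern repeatedly.
a = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)
for _ in range(10):
    winner_a = competitive_step(weights, a)

# The winning node now responds almost perfectly to this pattern,
# while an orthogonal pattern excites a different, unclaimed node.
b = np.array([0.0, 0.0, 1.0, 1.0]) / np.sqrt(2.0)
winner_b = int(np.argmax(weights @ b))
```

Because only the winner's weights move, the representation of each input type ends up localized on a single unit, exactly as described above.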

This method of learning requires no `teacher' and is typically much faster than the supervised methods we discussed earlier. It also bears some resemblance to the learning mechanisms exhibited by certain types of neuron. It can also be more powerful in its classification capabilities - to use our old example, it may be capable of spotting a triangle whatever its size, orientation and position in the input pattern plane.

Such models are still in their infancy; however, the fields of science on which they are based - neuroscience and complex systems - are advancing very rapidly, and it is almost certain that great strides will be made in the near future in our understanding of artificial neural networks. Whether that increased understanding will be sufficient to build an intelligent machine - a computer with a mind - is an impossible question to answer at this time. Very many hurdles will have to be overcome, and it is possible that we shall only ever achieve very crude results. But it's a pretty good bet that the next decades will prove to be an exciting time for the subject of AI and the possibility of a machine mind.




 Copyright 1995 - 2013 Intelegen Inc. All rights reserved