Memory - the Hopfield Net

Discussion of Memory in Humans

One of the most important functions of our brain is the laying down and recall of memories. It is difficult to imagine how we could function without both short-term and long-term memory. The absence of short-term memory would render most tasks extremely difficult if not impossible - life would be reduced to a series of disconnected one-time images with no logical thread between them. Equally, without any means of long-term memory we could not learn from past experience. Indeed, much of our sense of self depends on remembering our past history.

Our memories function in what is called an associative or content-addressable fashion. That is, a memory does not exist in isolation, located in one particular set of neurons. All memories are in some sense strings of associations - you remember someone in a variety of ways: by the color of their hair or eyes, the shape of their nose, their height, the sound of their voice, or perhaps by the smell of a favorite perfume. Thus memories are stored in association with one another. These different sensory cues are processed in completely separate parts of the brain, so it is clear that the memory of the person must be distributed throughout the brain in some fashion. Indeed, PET scans reveal that memory recall produces a pattern of activity in many widely separated parts of the brain.

Notice also that it is possible to access the full memory (all aspects of the person's description, for example) by initially recalling just one or two of these characteristic features. We access the memory by its contents, not by where it is stored in the neural pathways of the brain. This is very powerful: given even a poor photograph of a person, we are quite good at reconstructing that person's face accurately. This is very different from a traditional computer, where specific facts are stored at specific addresses in memory. If the address is known only partially, the fact or memory cannot be recalled at all.

A Description of the Hopfield Network

The Hopfield neural network is a simple artificial network which is able to store certain memories or patterns in a manner rather similar to the brain - the full pattern can be recovered if the network is presented with only partial information. Furthermore there is a degree of stability in the system - if just a few of the connections between nodes (neurons) are severed, the recalled memory is not too badly corrupted - the network can respond with a "best guess". Of course, a similar phenomenon is observed with the brain - during an average lifetime many neurons will die but we do not suffer a catastrophic loss of individual memories - our brains are quite robust in this respect (by the time we die we may have lost 20 percent of our original neurons).

The nodes in the network are vast simplifications of real neurons - they can only exist in one of two possible "states" - firing or not firing. Every node is connected to every other node with some strength (weight). At any instant of time a node will change its state (i.e., start or stop firing) depending on the inputs it receives from the other nodes.
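In symbols (a standard formulation of this rule, not spelled out in the original text), give each node i a state s_i that is +1 when firing and -1 when not, and let w_{ij} be the strength of the connection between nodes i and j. A node updates by taking the sign of its total input:

s_i \leftarrow \mathrm{sign}\Big( \sum_{j \neq i} w_{ij}\, s_j \Big)

If the weighted sum of signals arriving from the other nodes is positive, the node fires; otherwise it stops firing.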

If we start the system off with any general pattern of firing and non-firing nodes, then this pattern will in general change with time. To see this, think of starting the network with just one firing node. This will send a signal to all the other nodes via its connections, so that a short time later some of those nodes will fire as well. These newly firing nodes will then excite others after a further short interval, and a whole cascade of different firing patterns will occur. One might imagine that the firing pattern of the network would change in a complicated, perhaps random, way with time. The crucial property of the Hopfield network which renders it useful for simulating memory recall is the following: we are guaranteed that, after a long enough time, the pattern will settle down to some fixed pattern in which certain nodes are always "on" and others always "off". Furthermore, it is possible to arrange that these stable firing patterns of the network correspond to the desired memories we wish to store!

The reason for this is somewhat technical but we can proceed by analogy. Imagine a ball rolling on some bumpy surface. We imagine the position of the ball at any instant to represent the activity of the nodes in the network. Memories will be represented by special patterns of node activity corresponding to wells in the surface. Thus, if the ball is let go, it will execute some complicated motion but we are certain that eventually it will end up in one of the wells of the surface. We can think of the height of the surface as representing the energy of the ball. We know that the ball will seek to minimize its energy by seeking out the lowest spots on the surface -- the wells.
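This analogy can be made precise. For a Hopfield network with symmetric connection strengths, the "height of the surface" is the standard energy function

E = -\frac{1}{2} \sum_{i \neq j} w_{ij}\, s_i\, s_j

Each node update can only lower this energy or leave it unchanged, so the network, like the ball, must eventually come to rest in one of the wells.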

Furthermore, the well it ends up in will usually be the one it started off closest to. In the language of memory recall, if we start the network off with a pattern of firing which approximates one of the "stable firing patterns" (memories) it will "under its own steam" end up in the nearby well in the energy surface thereby recalling the original perfect memory.

The smart thing about the Hopfield network is that there exists a rather simple way of setting up the connections between nodes so that any desired set of patterns becomes a set of "stable firing patterns". Thus any set of memories can be burned into the network at the beginning. Then, if we kick the network off with any old set of node activity, we are guaranteed that a "memory" will be recalled. Not too surprisingly, the memory that is recalled is the one "closest" to the starting pattern. In other words, we can give the network a corrupted image or memory and the network will "all by itself" try to reconstruct the perfect image. Of course, if the input image is sufficiently poor, it may recall the wrong memory - the network can become "confused" - just like the human brain. We know that when we try to remember someone's telephone number we will sometimes produce the wrong one! Notice also that the network is reasonably robust: if we change a few connection strengths just a little, the recalled images are "roughly right" - we don't lose any of the images completely.
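The simple recipe referred to here is the Hebbian, or outer-product, rule. To store P patterns x^1, ..., x^P, each a list of +1/-1 states over the N nodes, one sets

w_{ij} = \frac{1}{N} \sum_{\mu=1}^{P} x_i^{\mu}\, x_j^{\mu}, \qquad w_{ii} = 0

so that two nodes which are active together in many stored patterns become strongly (positively) connected - which is exactly what makes those patterns stable.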

Hopfield Network - A Simulation

As a simple example of this, we might design a neural network to store images of the numbers zero through nine, represented as patterns on a two-dimensional grid. To be concrete, let there be 256 nodes arranged on a 16-by-16 grid. Think of the nodes as "light bulbs" which are on or off according to their firing state. First, we must train the network to remember these images. To do this, we present the patterns to the network one after another and, by comparing the network output to the target output, adjust the connections so that the output of the network resembles the target output more and more closely.

It is often inconvenient to build such an artificial neural computer every time we want to experiment with a new network design or store a new set of memories. Instead, what is often done is to simulate the network on a conventional computer: we write a program which emulates exactly what the true neural computer would do. Although this program takes much longer to execute than dedicated neural hardware, it can be made to produce the same answers, and it is then easy to modify it to store a new set of memory patterns or to perform some new task. We have written such a program to simulate the action of a Hopfield neural network.
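The original program is not reproduced here, but a minimal sketch in Python (using NumPy, with illustrative names of our own choosing) shows how little is needed: Hebbian storage to set the weights, then repeated asynchronous updates until the firing pattern stops changing.

```python
import numpy as np

class HopfieldNetwork:
    """A minimal Hopfield network: states are +1/-1, weights are symmetric."""

    def __init__(self, n_nodes):
        self.n = n_nodes
        self.w = np.zeros((n_nodes, n_nodes))

    def store(self, patterns):
        """Set the connection strengths with the Hebbian outer-product rule."""
        pats = np.asarray(patterns, dtype=float)   # shape (P, N), entries +/-1
        self.w = pats.T @ pats / self.n
        np.fill_diagonal(self.w, 0.0)              # no self-connections

    def recall(self, state, max_sweeps=20):
        """Update nodes one at a time until a stable firing pattern is reached."""
        s = np.asarray(state, dtype=float).copy()
        for _ in range(max_sweeps):
            changed = False
            for i in np.random.permutation(self.n):
                new = 1.0 if self.w[i] @ s >= 0 else -1.0
                if new != s[i]:
                    s[i] = new
                    changed = True
            if not changed:                        # settled into a well
                break
        return s
```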

Typically, after an initial period of "training" the network is able to recall all ten input patterns or memories perfectly. Now it is possible to add a random amount of noise to any given input pattern (by randomly switching some firing units from "on" to "off" and vice versa) and present it to the network. For small amounts of noise (perhaps ten percent of the input pattern corrupted), the network recalls the stored image perfectly. For high levels of noise, however, it can become confused and recall the wrong number. Notice also that, for this network, it is sometimes possible at high noise levels to recover not an input pattern but its complement - a pattern obtained by reversing all firing states, "on" to "off" and "off" to "on". This is a peculiar but well-understood feature of the Hopfield network: whenever we store a given image, we automatically store its complement also.
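Continuing the sketch above, a noise experiment might look like the following (the 256-node "digit" here is just a random placeholder pattern; a real demonstration would use the digit bitmaps):

```python
rng = np.random.default_rng(0)

# Placeholder for one 16-by-16 digit image, flattened to 256 +1/-1 states.
digit = rng.choice([-1.0, 1.0], size=256)

net = HopfieldNetwork(256)
net.store([digit])

# Corrupt about ten percent of the pattern by flipping randomly chosen states.
noisy = digit.copy()
flipped = rng.choice(256, size=26, replace=False)
noisy[flipped] *= -1

recalled = net.recall(noisy)
print("recovered original:  ", np.array_equal(recalled, digit))
print("recovered complement:", np.array_equal(recalled, -digit))
```

With ten percent noise the original is recovered; a starting pattern with more than half its states flipped lies closer to the complementary well, and the ball rolls there instead.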

It is also possible to break a few connections while still recovering a given pattern at least approximately. This can be illustrated by cutting a couple of connections between nodes and again presenting numbers to the network. The output is rather close to the original memory - the system is said to be robust.

We might ask the question: how many memories can be stored in such a network? How big (how many nodes) does the network need to be to store a given number of stable firing patterns? For the Hopfield network the answer is known (for other networks it generally is not!): one needs to add roughly ten nodes to store one more image. But be careful - as stressed before, the new memory is not stored in those ten new nodes. The stored memories are properties of the entire network; they are encoded in the strengths of all the internode connections. It is these connections, or artificial neural pathways, which determine the activity and hence the function of the neural network.
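This rule of thumb agrees in order of magnitude with the standard theoretical result for Hebbian storage: the number of patterns P that can be recalled with at most a small fraction of wrong bits grows in proportion to the number of nodes N,

P_{\max} \approx 0.138\, N

while demanding essentially error-free recall of every pattern tightens the bound to roughly N / (2 \ln N).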

Thus we have seen that the simple Hopfield neural network can perform some of the functions of memory recall in a manner analogous to the way we believe the brain functions. But there is a major difference: for the artificial network we must be clever in setting up the connection strengths in just the right way to store a predetermined set of patterns. A smart "teacher" is needed to teach the network what to remember; once that is done, the network can be left to itself to handle the pattern-recall process. In the brain there is no "teacher" that tells the neurons how to link up in order to store useful information - that part of the process is also automatic. The system is said to be self-organizing. In Artificial Intelligence we will describe recent efforts to design artificial networks which self-organize.

