“Certainly, one of the most relevant and obvious characteristics of a present moment is that it goes away, and that characteristic must be represented internally.”
Stated plainly[1], the principle behind multitemporal synapses is that we maintain the blunt “residue” of past lessons in long-term connections, while everything else is quickly forgotten and learned over again in the instant. In other words, we re-learn the detailed parts of our responses as we confront each new situation.[2]
One of the primary benefits of applying this principle, in the form of multitemporal synapses, is a neural network construct that is free of catastrophic forgetting. Eliminating catastrophic forgetting makes it practical to build networks that learn continuously from their surroundings, just like their natural counterparts.
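To make the two timescales concrete, here is a minimal sketch of a synapse whose effective strength is the sum of a slow, long-lasting component and a fast component that decays between experiences. The class name, learning rates, and decay constant below are illustrative assumptions, not the construct described here:

```python
class MultitemporalSynapse:
    """Illustrative synapse with a slow, long-term weight (the durable
    "residue" of past lessons) and a fast weight that is re-learned in
    the moment and quickly forgotten."""

    def __init__(self, slow_lr=0.001, fast_lr=0.5, fast_decay=0.9):
        self.w_slow = 0.0      # changes very little per experience; persists
        self.w_fast = 0.0      # changes a lot per experience; decays away
        self.slow_lr = slow_lr
        self.fast_lr = fast_lr
        self.fast_decay = fast_decay

    def weight(self):
        # Effective connection strength combines both timescales.
        return self.w_slow + self.w_fast

    def update(self, error_signal):
        # The fast component absorbs most of the in-the-moment detail...
        self.w_fast += self.fast_lr * error_signal
        # ...while only a blunt trace accumulates in the slow component.
        self.w_slow += self.slow_lr * error_signal

    def step(self):
        # Between experiences, the fast component fades toward zero,
        # leaving the long-term residue intact.
        self.w_fast *= self.fast_decay
```

Calling update() during an experience and step() between experiences leaves w_slow carrying only the accumulated residue of many past lessons, while w_fast holds only what was just re-learned.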
One major challenge with conventional neural network models has been how to maintain connections that store enough intricate, in-the-moment response detail to deal with any contingency the system may encounter. Conventionally, such details would overwhelm the long-term lessons stored in permanent connection weights. This characteristic of conventional neural network models is known as the Stability-Plasticity Problem, and it is the underlying cause of “catastrophic forgetting.”
When an artificial neural network that has already learned a training set of responses then encounters a new response to be learned, the result is usually catastrophic forgetting of all the earlier learning. Training on the new detail alters connections that the network maintains in a holistic (global) fashion, so the change is almost certain to radically alter the outputs that were desired for the original training set.
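The effect is easy to reproduce in a toy setting. The sketch below is only a hypothetical illustration with made-up tasks and numbers, not an example from this text: plain gradient descent on a single linear layer trains one shared weight vector on task A, then on task B, and task A's error climbs sharply once B has been learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "training sets" with conflicting underlying rules:
# task A is generated by one weight vector, task B by a different one.
w_true_a = np.array([1.0, -2.0, 0.5, 3.0])
w_true_b = np.array([-3.0, 0.5, 2.0, -1.0])
X_a = rng.normal(size=(50, 4))
y_a = X_a @ w_true_a
X_b = rng.normal(size=(50, 4))
y_b = X_b @ w_true_b

w = np.zeros(4)  # one shared (global) set of connection weights

def mse(X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(X, y, lr=0.05, epochs=400):
    """Plain gradient descent on the squared error: every update
    touches the same global weights, whatever was learned before."""
    global w
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad

train(X_a, y_a)
print("task A error after learning A:", round(mse(X_a, y_a), 4))  # near zero

train(X_b, y_b)
print("task A error after learning B:", round(mse(X_a, y_a), 4))  # large
print("task B error after learning B:", round(mse(X_b, y_b), 4))  # near zero
```

Because every gradient step touches the same global weights, fitting task B necessarily moves them away from the values task A required, which is the catastrophic forgetting described above.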