About:
Exploring new approaches to machine-hosted
neural-network simulation, and the science
behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically guided
neural network designs. Author of
an introductory book on the subject titled:
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
A neural-network innovation described in the book Netlab Loligo has been awarded a patent (#7,904,398). It is the second of the book's innovations to receive letters patent (so far). The patent is titled:
“Artificial Synapse Component Using Multiple Distinct Learning Means With Distinct Predetermined Learning Acquisition Times”
Patent titles serve mainly as an aid for future patent searchers. The patented innovation, along with the underlying concepts and principles that led to it, is described and discussed in the book, where it is simply referred to as “Multitemporal Synapses.”
The primary advantage imparted by the innovation is that it gives adaptive systems a present moment in time. This allows them to adapt quickly and intricately to the detailed response needs of their present situation, without cluttering up long-term memories with the minute details of those responses.
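To make the idea concrete, here is a minimal sketch of a synapse whose effective weight combines two learning components with distinct acquisition times: one fast and decaying, one slow and persistent. The class, parameter names, and update rules below are illustrative assumptions on my part, not the patented mechanism itself.

    # Hypothetical sketch of a "multitemporal" synapse: the effective
    # weight is the sum of two learning components with distinct
    # acquisition times. All names and update rules are illustrative
    # assumptions, not the patented method.

    class MultitemporalSynapse:
        def __init__(self, fast_rate=0.5, slow_rate=0.01, fast_decay=0.2):
            self.w_fast = 0.0   # short-term component: learns and decays quickly
            self.w_slow = 0.0   # long-term component: learns slowly, persists
            self.fast_rate = fast_rate
            self.slow_rate = slow_rate
            self.fast_decay = fast_decay

        def weight(self):
            # The transmitted signal is scaled by the combined weight.
            return self.w_fast + self.w_slow

        def learn(self, learning_factor):
            # Both components see the same learning factor, but acquire
            # it on very different time scales.
            self.w_fast += self.fast_rate * learning_factor
            self.w_slow += self.slow_rate * learning_factor

        def tick(self):
            # Between learning events the fast component relaxes toward
            # zero, so momentary adaptations never accumulate in the
            # long-term component.
            self.w_fast *= (1.0 - self.fast_decay)

On this reading, the slow component is left holding only what was acquired gradually, while the fast component carries the details of the present situation and then lets them go.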
Multitemporal Synapses
This blog entry attempts to describe Multitemporal Synapses. When time permits, I will provide a new blog entry with a clearer explanation using book excerpts (P.S. see the entry above), geared specifically to laymen. If you are interested, please subscribe to the feed.
Influence Learning Gets A Patent
Influence-Based Learning was the first of Netlab's innovations to be granted a patent. This latest patent makes two (and counting; stay tuned).
The Netlab development effort has led to a new method and device that produces learning factors for pre-synaptic neurons. The need to provide learning factors for pre-synaptic neurons was first addressed by backpropagation (Werbos, 1974). The new method differs from backpropagation in that its use is not restricted to feed-forward-only networks. This new learning algorithm and method, called Influence Learning, is described here and in other entries in this blog (see the Resources section below).
Influence Learning is based on a simple conjecture. It assumes that those forward neurons that are exercising the most influence over responses to the immediate situation will be more attractive to pre-synaptic neurons. That is, for the purpose of forming or strengthening connections, active pre-synaptic neurons will be most attracted to forward neurons that are exercising the most influence.
Perhaps the most relevant thing to understand about this process is that these determinations are based entirely on activities taking place while signals (stimuli) are propagating through the network. Unlike backpropagation, it needs no externally generated error signal to be pushed backwards through the network in ever-diminishing magnitudes.
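As a rough illustration of the conjecture, the sketch below derives a learning factor for each forward neuron during the forward pass, from its current activity and its outgoing weights, and uses it to strengthen connections from active pre-synaptic neurons. The influence measure and update rule are my own illustrative assumptions, not the patented algorithm.

    import numpy as np

    # Hypothetical sketch of the influence-learning conjecture: active
    # pre-synaptic neurons strengthen connections to the forward neurons
    # currently exercising the most influence. The influence measure and
    # update rule are illustrative assumptions, not the patented algorithm.

    def influence_step(pre_act, post_act, W, W_out, rate=0.01):
        """One learning step taken during forward signal propagation.

        pre_act  -- activations of the pre-synaptic layer, shape (n_pre,)
        post_act -- activations of the forward layer, shape (n_post,)
        W        -- weights from pre to forward layer, shape (n_pre, n_post)
        W_out    -- weights from forward layer onward, shape (n_post, n_next)
        """
        # Assumed influence measure: how strongly each forward neuron is
        # driving its own post-synaptic targets at this moment.
        influence = np.abs(post_act) * np.abs(W_out).sum(axis=1)

        # Active pre-synaptic neurons are drawn toward influential forward
        # neurons; nothing is propagated backwards through the network.
        W += rate * np.outer(pre_act, influence)
        return W

Note that everything this update needs is available while stimuli are propagating forward; there is no backward sweep of error signals.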
Support In Biological Observations
While influence learning in artificial neural network simulations is new, it is based on biological observations from discoveries made over twenty years ago. One of the observations that led to the above speculation about attraction to the exercise of influence was discussed briefly in the book The Neuron: Cell and Molecular Biology.
An experiment described in that book shows what happens when you cut (or pharmacologically block) the axon of a target neuron. In that experiment, the pre-synaptic connections to the target neuron began to retract after its axon was cut. That is, the axons making pre-synaptic connections to the modified neuron went away once it no longer made synaptic connections to its own post-synaptic neurons.
The book also described how, when the target neuron's axon was unblocked (or grew back), the axons from pre-synaptic neurons immediately began to reform and re-establish connections with the target. Based on these observations, the following possibility was asserted.
"...Maintenance of presynaptic inputs may depend on a post-synaptic factor that is transported from the terminal back toward the soma."
The following diagram depicts these observations schematically.