About:
Exploring new approaches to machine-hosted
neural-network simulation, and the science
behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically-guided
neural network designs. Author of
an introductory book on the subject titled:
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
Any claim that asserts credibility based on peer review should be held to the same level of trustworthiness as claims made by cheerleaders in support of their team.
“On a hot summer day a dog relieves itself on the lawn outside Pavlov's lab. In a few days, all but the hardiest blades of grass have turned brown and died. Learning has occurred in Pavlov's lawn. The dogs have their revenge.”
Scientists have developed methods for observing spike-timing-dependent associative memory formation that are much more precise than previous measurement techniques. In the process, they have found that a fairly established proposition of STDP (spike-timing-dependent plasticity) theory may not always be correct.
Specifically, it has long been held that a spike preceding a spike on a related synapse would strengthen the association, while the same spike trailing a spike on the related synapse would cause the association to become weaker (i.e., tend to extinction). The experimenters, however, found that the connections in a specific class of excitatory neurons were strengthened, regardless of the firing order (leading or trailing) of the two connections.
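For reference, here is a minimal sketch of the long-held proposition, written as the textbook pair-based STDP rule in the usual pre-synaptic/post-synaptic framing. The parameter names and values are illustrative assumptions, not numbers from the study being described.

```python
import math

# Illustrative parameters for the textbook pair-based STDP window (assumed values).
A_PLUS = 0.01    # maximum strengthening per spike pair
A_MINUS = 0.012  # maximum weakening per spike pair
TAU_MS = 20.0    # width of the STDP window, in milliseconds

def stdp_delta_w(t_pre_ms, t_post_ms):
    """Weight change for one pre/post spike pair under the classic rule.

    Pre-synaptic spike leading the post-synaptic spike -> strengthen (LTP).
    Pre-synaptic spike trailing the post-synaptic spike -> weaken (LTD).
    """
    dt = t_post_ms - t_pre_ms          # positive when the pre-side spike leads
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_MS)
    if dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MS)
    return 0.0

print(stdp_delta_w(0.0, 5.0))   # pre leads by 5 ms -> positive (strengthen)
print(stdp_delta_w(5.0, 0.0))   # pre trails by 5 ms -> negative (weaken)
```

The finding described above would amount to dropping the weakening branch for that particular class of excitatory neurons: both orderings strengthen the connection.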
While we're on the subject of Chalmers interviews, here are ten minutes of him talking about his "hard problem" of consciousness. His description of the hard problem has finally moved us forward, off the obfuscatory kludge that was Turing's "test." Turing merely tested the ability of an algorithm to fool a person on the other side of the screen into thinking it was conscious. That test has led to ever more complex implementations of programs like ELIZA, which, at their core, are the antithesis of consciousness.
Here, presented for your enjoyment, is David Chalmers:
One of the primary problems that traditional connectionists, and their neural networks, have been unable to solve can be stated like this:
How do biological learners deal with the dichotomy of needing to provide extremely detailed responses to any given situation while possessing only finite resources with which to hold all those details?
Netlab's take on this problem has been quite different from the traditionally espoused theories and solutions[Note].
In a nutshell, Netlab recognizes that natural neural networks don't try to hold every single detail ever experienced. Instead, like all biological solutions, they act as the ultimate realists and make the best of their real circumstances. Since they can't keep every single detail they've ever learned about how to respond, they adapt to each new situation as it arrives. How they do that can be summed up in three steps:
1. The present moment requires a blank slate every time we encounter it. That is, it requires memories (connection strengths) that form very quickly in response, and then decay very quickly when we no longer need them. The only alternative is to keep every tiny detail of every response to past "present moments" we've ever experienced.
2. In order for blank short-term weights to adapt and respond quickly to new experiences, we must draw upon longer-term learning. We have found, however, that a new response can't simply be inserted into the existing responses represented in the strengths of connections between neurons. Each learned response maintained in long-term connections must be taught by being interleaved with other responses, where each presentation can have only a slight effect on the long-term connections.
3. This problem is solved by breaking up the speed of learning, and of decay, into two (or likely more) different temporal spaces. The first is a short-term weight space, which decays very quickly and uses responses begun by the long-term weights to promote very fast learning (think hand-over-hand prompting). The other side of this coin is a set of long-term connection strengths, at the respective connections, which learn a little from each short-term response that is re-learned, as needed, in the short-term weights. A rough code sketch of this two-timescale arrangement follows.
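To make those three steps a bit more concrete, here is a minimal sketch of one way two-timescale weights could be wired up. Everything here (the class name, the learning rates, the particular update rules) is an illustrative assumption for this post, not Netlab's actual implementation.

```python
import numpy as np

# Illustrative constants (assumed): short-term weights learn fast and decay fast,
# long-term weights change only a little per presentation.
FAST_LR = 0.5     # short-term learning rate
FAST_DECAY = 0.2  # per-step decay of short-term weights
SLOW_LR = 0.01    # long-term learning rate

class MultitemporalSynapses:
    """Two temporal weight spaces sharing one set of connections (a sketch)."""

    def __init__(self, n_inputs, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.w_slow = rng.normal(scale=0.1, size=(n_inputs, n_outputs))  # long-term
        self.w_fast = np.zeros((n_inputs, n_outputs))                    # short-term

    def respond(self, x):
        # The effective strength of each connection is the sum of its two
        # temporal spaces: long-term weights begin the response, short-term
        # weights refine it for the present moment.
        return x @ (self.w_slow + self.w_fast)

    def learn(self, x, target):
        # Steps 1 and 2: the short-term weights adapt rapidly toward the needed
        # response (hand-over-hand prompting from the long-term response)...
        error = target - self.respond(x)
        self.w_fast += FAST_LR * np.outer(x, error)

        # Step 3: the long-term weights absorb a small fraction of what the
        # short-term weights just re-learned, so any single presentation,
        # interleaved with others, nudges them only slightly.
        self.w_slow += SLOW_LR * self.w_fast

        # ...and the short-term weights then decay quickly, clearing the
        # slate for the next "present moment".
        self.w_fast *= (1.0 - FAST_DECAY)
```

Presenting responses in interleaved fashion, a few times each, would then look something like this (again, purely illustrative):

```python
net = MultitemporalSynapses(n_inputs=4, n_outputs=2)
responses = [
    (np.array([1.0, 0.0, 1.0, 0.0]), np.array([1.0, 0.0])),
    (np.array([0.0, 1.0, 0.0, 1.0]), np.array([0.0, 1.0])),
]
for _ in range(10):                  # interleave the two responses
    for x, target in responses:
        net.learn(x, target)
```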
. . . . . . .
Note
Though the worst of the church have been trying (perhaps a little too hard?) to re-write the recent history (the past 20 years or so), a cursory reading of the non-back-filled literature easily demonstrates that the field had been mostly oblivious to this solution until after multitemporal synapses were introduced. [Plagiarism Index], e.g.,
Complementary Learning Systems (CLS)
What else can one do, but document the behavior, and hope the truth eventually prevails?