About:
Exploring new approaches to machine-hosted neural-network simulation, and the science behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters a better environment for developing biologically-guided neural network designs. Author of an introductory book on the subject, titled "Netlab Loligo: New Approaches to Neural Network Simulation". BOOK REVIEWERS ARE NEEDED! Can you help?
One of Netlab's synapse mechanisms and structures is based loosely on a silent-synapse hypothesis of long- vs. short-term memory, in which short- and long-term storage both occur at the same connection point (synapse). Netlab also includes a learning method based on this, called weight-to-weight learning. The silent-synapse phenomenon has been observed in biological studies for quite some time, and there is good evidence explaining some of the underlying mechanisms responsible for the observation. Still, many pieces of the puzzle have been missing.
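To make the dual-storage idea a little more concrete, here is a minimal Python sketch of a single connection point that keeps both a fast, short-term weight and a slow, long-term weight, with part of the short-term value gradually consolidated into the long-term one. To be clear, the class, the parameter names, and the update rules here are my own illustrative assumptions for this post; they are not Netlab's actual weight-to-weight mechanism.

```python
class DualWeightSynapse:
    """Toy model of one connection point that holds both a short-term
    and a long-term weight (illustrative only, not Netlab's mechanism)."""

    def __init__(self, transfer_rate=0.01, decay_rate=0.1):
        self.short_term = 0.0        # fast, labile component
        self.long_term = 0.0         # slow, stable component
        self.transfer_rate = transfer_rate
        self.decay_rate = decay_rate

    def effective_weight(self):
        # The value actually used when the synapse relays a signal.
        return self.short_term + self.long_term

    def stimulate(self, delta):
        # New learning lands in the short-term component first.
        self.short_term += delta

    def consolidate(self):
        # Each tick, a small fraction of the short-term weight is moved
        # into the long-term weight; what remains slowly decays away.
        moved = self.transfer_rate * self.short_term
        self.long_term += moved
        self.short_term = (1.0 - self.decay_rate) * (self.short_term - moved)


syn = DualWeightSynapse()
syn.stimulate(0.5)                   # a burst of short-term learning...
for _ in range(100):
    syn.consolidate()                # ...gradually consolidated, or lost
print(syn.effective_weight())
```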
An Interesting Study
Recently there has been a development that seems to supply evidence and detail for a related hypothesis: that synapse strength may be mediated through a molecular motor called Myosin II on the post-synaptic side. So suggests one study out of The Scripps Research Institute.
It has been thought for some time now (see background information below) that molecular motors, resembling those used to produce movement in muscle tissue, may be major players in the processes that mediate the transfer of memory connections from short- to long-term on the post-synaptic side. We now seem to be getting a more detailed understanding of the mechanisms underlying these phenomena. As with so many brain constructs, there does seem to be a great deal of variety.
The vernacular that seems to be emerging is that these mechanisms "stabilize" the connection strengths. This might still be jumping the gun on the conclusions, but it is not a bad way to think about it for now.
Gavin Rumbaugh
Are you ready to Rumbaugh? I'm sure he's never heard that joke before.
Related/Background:
[pdf] Remodeling the Plasticity Debate:
The Presynaptic Locus Revisited
A really interesting paper from 2006, published in the journal Physiology. From its description: "The cellular mechanisms contributing to long-term potentiation and activity-induced formation of glutamatergic synapses have been intensely debated. Recent studies have sparked renewed interest in the role of presynaptic components in these processes. Based on the present evidence, it appears likely that long-term plasticity utilizes both pre- and postsynaptic expression mechanisms."
Actuators (scroll to bottom)
A blog post here about actuators. Mostly robotics, but a section at the bottom has a couple of nice videos describing the function and structure of animal muscles.
As a programmer, I find it very satisfying when a false choice is taken down. Chris Chatham, who maintains the Developing Intelligence blog, looks like he's hot on the trail of one.
He provides a very good explanation for the apparent disagreement in the experimental data. His conclusion? The two aren't mutually exclusive. (Thank you, Mr. Chatham.)
So, how does this work? Is the brain just big enough to accommodate two different mechanisms? Possibly, but Chatham also explores the distinct possibility that the same underlying mechanisms may be responsible for both types of development. It turns out there is good reason to think it is the latter.
The following excerpts from Chapter 4 of the book ("Our Metaphysical Tool Shed") may help to clarify the point of this post.
Anti-Razor1
It would probably be sufficient to simply express the weighted result as a ratio, but for now that's just one option. As our understanding grows, we may find that maintaining two separate sums, and many of the other values, is the metaphorical equivalent of gluing feathers to the wings of a flying machine. . .
. . .
1. This sub-heading is a reference to Ockham's razor, which is almost always mis-characterized in common usage. William of Ockham's original advice is based on sound, logical reasoning, while the common mis-characterization is essentially a fashion statement. Here, however, I argue that programmers must give experimenters the ability to define their own (real) razors, and so should not mandate them in the modeling tools we provide. That is, we should give experimenters more, and let them decide for themselves how to divide and conquer those “more” into “fewer”.
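As a toy illustration of the "two separate sums" versus "single ratio" choice the excerpt above touches on (the names and arithmetic here are my own, not Netlab's internal representation):

```python
# Hypothetical names, not Netlab's.  The same "weighted result" can be kept
# either as two running sums (numerator and denominator) or collapsed into
# the single ratio they imply; the two-sum form carries extra state, but it
# also preserves information the collapsed ratio throws away.

class TwoSums:
    def __init__(self):
        self.weighted_sum = 0.0     # running sum of weight * value
        self.weight_sum = 0.0       # running sum of weights

    def update(self, value, weight):
        self.weighted_sum += weight * value
        self.weight_sum += weight

    def result(self):
        # The "ratio" view can always be derived from the two sums...
        return self.weighted_sum / self.weight_sum if self.weight_sum else 0.0


s = TwoSums()
for value, weight in [(1.0, 0.5), (0.0, 0.25), (1.0, 0.25)]:
    s.update(value, weight)

print(s.result())                    # 0.75 -- the collapsed ratio
print(s.weighted_sum, s.weight_sum)  # ...but storing only 0.75 would lose
                                     # how much total weight produced it
```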
Many of the ANN modeling tools available today are merely paint-by-number kits, which allow experimenters to try out solutions that other people have worked out and documented in formulaic recipes. Netlab is decidedly NOT among these offerings.
The known neural network formulas, for example, each represent somebody else's abstraction and reduction of the observational data. The function produced is essentially a workable, defined recipe for creating an effect similar to observed behaviors. In short, the person who came up with the formula in the first place was the one who did all the heavy lifting.
Experimenters should be given tools that let them find and test their own theories, and their own ways to pare down and represent what they think is most essential about what's going on in the problem space.
Netlab
To state it simply, Netlab is built on the proposition that experimenters should be able to try out their own ideas, and not merely find new ways to use other people's ideas.
One very good way for a programmer to achieve such a goal in his design is to reverse-engineer the existing formulas. That is, for each existing formula, produce a simulation environment that would allow somebody to create the formula for the first time, were it not already known. This is one of the design philosophies underlying Netlab's software specification.
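Here is a generic Python sketch of what that reverse-engineering stance might look like in practice. It is not Netlab's description language; it just shows how a familiar formula (the textbook weighted-sum-and-squash neuron) can be one arrangement of lower-level pieces the experimenter controls, rather than something baked into the tool.

```python
import math

# A familiar formula such as  y = f(sum_i(w_i * x_i))  is not built in;
# it is just one composition of lower-level pieces that the experimenter
# controls.  All names here are illustrative, not Netlab's actual API.

def neuron_output(inputs, weights, combine, squash):
    contributions = [w * x for w, x in zip(weights, inputs)]
    return squash(combine(contributions))

# One experimenter re-creates the textbook weighted-sum/sigmoid neuron...
classic = neuron_output(
    inputs=[0.2, 0.7, 0.1],
    weights=[0.5, -0.3, 0.8],
    combine=sum,
    squash=lambda s: 1.0 / (1.0 + math.exp(-s)),
)

# ...while another tries something the textbook never mentions.
winner_take_all = neuron_output(
    inputs=[0.2, 0.7, 0.1],
    weights=[0.5, -0.3, 0.8],
    combine=max,
    squash=lambda s: s,
)

print(classic, winner_take_all)
```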
Some Practical Examples
To this point, this has been a rather abstract post. What follows is mostly a list of links into the glossary (I think). Now that the I.P. protection is starting to come through, I'll be able to describe this stuff more openly and more deeply. As new documentation becomes available, I'll either try to update this section with links, or copy this section into a more complete discussion of the practical mechanisms provided for achieving the simple goal described above.
Chemicals
One of the abstractions provided by Netlab is the notion of chemicals. Neurons, besides producing output values on their axons based on a variety of conditions (direct and indirect), are also capable of producing chemicals. The chemicals, much like the axon level, can be specified to be produced in various concentration levels based on a variety of environmental factors. The factors that lead to the production of a chemical are specified by the designer and can include stimulus on the neuron's synapses by other axons, other chemical influences in the vicinity of the neuron, or globally present chemicals (among many other factors). To specify a new chemical, the designer simply comes up with a name for it.
No characteristics are explicitly specified for a given designer-named chemical. The properties and characteristics of any given named chemical are purely a byproduct of how other objects in the environment (usually other neurons) have been specified to respond or react to it. Responses to any given chemical can be different for different individual instances of a neuron, or for different classes of neuron (this is simplified: "neuron" is really "object" and can include other super-types besides neurons).
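A rough Python sketch of the idea follows. The class names, the emit/response structure, and the toy update rules are assumptions made for illustration only; they are not Netlab's actual description language. The point is simply that a chemical is nothing but a name, and its character emerges from how producers and responders are specified around it.

```python
# Illustrative sketch only -- not Netlab's description language.  A named
# chemical has no built-in properties; its "meaning" emerges from which
# neurons emit it and how other neurons respond to its local concentration.

class Neuron:
    def __init__(self, name, emits=None, responds_to=None):
        self.name = name
        # chemical name -> rule(activity) -> concentration emitted
        self.emits = emits or {}
        # chemical name -> rule(concentration) -> effect on this neuron
        self.responds_to = responds_to or {}
        self.activity = 0.0

    def produce(self):
        return {chem: rule(self.activity) for chem, rule in self.emits.items()}

    def react(self, local_concentrations):
        for chem, effect in self.responds_to.items():
            self.activity += effect(local_concentrations.get(chem, 0.0))


# The designer simply invents a name ("fatigue_signal"); its character
# comes entirely from the emit/response rules other objects attach to it.
sender = Neuron("A", emits={"fatigue_signal": lambda a: 0.1 * a})
sensitive = Neuron("B", responds_to={"fatigue_signal": lambda c: -0.5 * c})
indifferent = Neuron("C")               # same chemical, no specified response

sender.activity = 1.0
cloud = sender.produce()                # {"fatigue_signal": 0.1}
sensitive.react(cloud)                  # activity drops by 0.05
indifferent.react(cloud)                # unaffected
print(sensitive.activity, indifferent.activity)
```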
Spatial and Temporal Distances
Netlab's description language provides a way to modularize the design and construction of neural networks. It allows experimenters to produce components, called "units", which contain other components, such as neurons and previously designed units.
The modular construct used to overcome complexity at design time is preserved in the Netlab run-time, giving Netlab's networks an abstraction of volumetric space that can be used as a framework when representing both spatial and temporal phenomena.
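A minimal sketch of the nesting idea (again, a hypothetical structure, not Netlab's actual unit syntax): units contain neurons and other units, and because the containment survives into the run-time, a crude notion of distance between two components can be read off from how far apart they sit in that structure.

```python
# Hypothetical illustration: design-time nesting of "units" doubles as a
# coarse spatial framework at run time.  Not Netlab's actual language.

class Unit:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def path(self):
        node, out = self, []
        while node:
            out.append(node.name)
            node = node.parent
        return list(reversed(out))


def structural_distance(a, b):
    """Hops through the containment tree between two components -- one
    crude stand-in for spatial (and, with a delay per hop, temporal)
    distance."""
    pa, pb = a.path(), b.path()
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    return (len(pa) - common) + (len(pb) - common)


cortex = Unit("cortex")
column1 = Unit("column1", parent=cortex)
column2 = Unit("column2", parent=cortex)
n1 = Unit("n1", parent=column1)
n2 = Unit("n2", parent=column2)
print(structural_distance(n1, n2))   # 4 hops apart in the containment tree
```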
Pathfinding
Once you have a viable abstraction for representing temporal and volumetric chemical gradients, you can begin to define all kinds of useful mechanisms based on the influence (e.g., repulsive or attractive) those chemical concentrations have on other components in the network. For example, like all neural network packages, Netlab includes the traditional Latent Connections. These are the static connections you determine at design time. Whether or not they actually develop is based on changes to their strengths, but the connection will always be between the same two neurons, which were specified at design time.
Netlab is able to go further, allowing designers to specify Receptor Pads, which are areas on a neuron's synapse-space that can make connections with any other neuron in the network while it is running. Other neurons put out something called a growth cone. Among other things, dynamic connections are facilitated at run time based on affinities or aversions to named chemicals, specified for both the growth cones and the receptor pads. This allows, for example, a given class of growth cone to be defined to "desire", "seek out", and "find" its perfect receptor pad, dynamically, as the network runs. It is good to mention again here that the chemicals influencing these movement decisions by growth cones are also produced dynamically, and their concentrations are based on factors that are products of the running network within its dynamic environment.
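Here is a deliberately tiny, one-dimensional Python sketch of that dynamic-connection idea. The names and the decay-with-distance rule are made up for illustration, and the real mechanism is richer; the sketch just shows a growth cone drifting toward higher net attraction, given its affinities and aversions for named chemicals, until it settles at the pad emitting the chemical it "desires".

```python
# Toy one-dimensional illustration (not Netlab code): a growth cone climbs
# or descends chemical gradients according to per-chemical affinities, and
# settles at the receptor pad emitting the chemical it "desires".

def concentration(source_pos, pos, strength=1.0):
    # Simple decay-with-distance stand-in for a diffusing chemical.
    return strength / (1.0 + abs(source_pos - pos))


def step(cone_pos, affinities, sources):
    """Move one step toward higher net attraction.
    affinities: chemical name -> +attract / -repel weight
    sources:    chemical name -> position of the pad emitting it
    """
    def net(p):
        return sum(weight * concentration(sources[chem], p)
                   for chem, weight in affinities.items() if chem in sources)
    return max([cone_pos - 1, cone_pos, cone_pos + 1], key=net)


pads = {"come_hither": 10, "keep_away": -5}
cone = 0
for _ in range(30):
    cone = step(cone, {"come_hither": 1.0, "keep_away": -1.0}, pads)
print(cone)   # the cone settles at the attractive pad's position (10)
```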
No Limits On Feedback
A new patented learning method has been developed that completely eliminates past restrictions placed on the types or amounts of feedback employed in the structure of the network. Besides just being great for general feedback that may want to span multiple local loops at times, this is also very nice for servo structures, which put the outside world directly in the feedback path. This allows the outside world to be represented as the correction factors that must be adapted to in order to compensate for outside forces. In essence, the world's complex and chaotic stimuli become a transfer function that sits directly in the network's feedback loop. The network then performs the function of correcting for inconsistencies in that feedback function, which it can then learn.
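A stripped-down sketch of the servo idea, in generic Python rather than anything Netlab-specific: the outside world sits inside the loop as an unknown transfer function, and the network's only job is to learn a correction that cancels the distortion it sees coming back.

```python
# Generic illustration of "the world in the feedback path": the network
# never sees the world's transfer function directly; it only sees the
# error that comes back, and adapts a correction to cancel it.

def world(command):
    # Unknown-to-the-network distortion of whatever the network outputs.
    return 0.6 * command + 0.2


target = 1.0
correction = 0.0            # the network's learned compensation
learning_rate = 0.3

for _ in range(200):
    command = target + correction    # network output
    observed = world(command)        # comes back through the world...
    error = target - observed        # ...as the feedback the network sees
    correction += learning_rate * error

print(round(world(target + correction), 3))   # ~1.0: distortion learned away
```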
A recent USC study applies a new technique that allows researchers to more closely map the brain's wiring. One goal of the study is to better clarify our current understanding of the connection structure of brains. Another is to try to settle the raging "It's an internetwork" / "It's a hierarchical pyramid" debate.
The Netlab abstraction is designed to facilitate a similar, but slightly different, concept of brain wiring structure, which is visually depicted in the cover art of the book:
The above diagram should be seen as a cross-section through a sphere, so the word "donut" in this entry title takes some license. The interior/exterior connection model, as depicted, does seem to find at least passing observational support in the USC study, e.g.:
"The circuits showed up as patterns of circular loops, suggesting that at least in this part of the rat brain, the wiring diagram looks like a distributed network."
There's an interesting article at the Talking Brains blog that takes stock of our current understanding of the relationship between Broca's area and speech. If you still hold the old notion that it is the area that, working with Wernicke's area, is singularly responsible for the syntactical construction of sentences (as I did), this will be a worthwhile read:
The above blog post gives a good overview, with a nice timeline of how we got from the old understanding to the new (it has been a progression). It then puts a nice bow on the whole thing by describing a new study that provides strong observational evidence for the notion that the anterior temporal lobe has at least something to say about processing language and grammatical structure.
IMO
This is just my take (which could be very wrong), but I think what has been going on here is that our understanding of the brain has been refining our understanding of linguistics, which has, in turn, been refining our understanding of the brain. . . But that situation may be changing.
Grammatical rules, while being a nice way to talk about sentences, may not be something for which there is a directly correlated, nameable brain structure. Again, it's just my hunch, but the connections between verbal communication and things like metaphor, and even dance (with the help of the brain constructs now believed to effect metaphor), seem to be part of what is becoming a much more complete understanding of the processes underlying speech. Our faculties for making metaphorical-conceptual connections between verbal communications, other forms (modes) of stimulus, and other forms of physical expression seem to be emerging as the underlying causes of grammatical structure.
As our knowledge of the underlying brain mechanisms grows, sentence structure is looking more like the old clockwork models of the solar system and universe. It is an abstract system of notation that was designed and developed to let us represent and model observable characteristics of the language itself. The language being modeled, however, was merely the end result of underlying processes that were (like the laws of gravity and motion in days past) totally hidden from us.
Understanding of language structure will be updated by our—now exploding—understanding of its causes. Of that I have no doubt. Right now, however, there seems to be a non-linear jump in what we're learning about the brain processes that lead to grammatical sentence structure.