About:
Exploring new approaches to machine-hosted neural-network simulation, and the science behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters a better environment for developing biologically-guided neural network designs. Author of an introductory book on the subject, titled "Netlab Loligo: New Approaches to Neural Network Simulation". BOOK REVIEWERS ARE NEEDED! Can you help?
New technique enables nanoscale-resolution microscopy of large biological specimens.
Most microscopes work by using lenses to focus light emitted from a sample into a magnified image. However, this approach has a fundamental limit, known as the diffraction limit, which means it can't be used to visualize objects smaller than roughly half the wavelength of the light being used. For example, if you are using blue-green light with a wavelength of 500 nanometers, you can't see anything smaller than 250 nanometers.
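The arithmetic above is the classic rule of thumb derived from the Abbe diffraction limit, d = λ / (2·NA), where NA is the numerical aperture of the objective (an ideal NA of 1 gives the "half the wavelength" figure used here). A minimal sketch, with the function name being my own invention:

```python
# Sketch of the Abbe diffraction limit: d = wavelength / (2 * NA).
# With an idealized numerical aperture of 1.0, this reduces to the
# "half the wavelength" rule of thumb quoted in the article.

def diffraction_limit_nm(wavelength_nm, numerical_aperture=1.0):
    """Smallest resolvable feature size, in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

print(diffraction_limit_nm(500))  # blue-green light -> 250.0 nm
```

Real oil-immersion objectives reach NA values around 1.4, which pushes the limit somewhat below 200 nm, but still nowhere near the nanoscale features the new technique targets.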
“Unfortunately, in biology that’s right where things get interesting,”
Using their recently developed imaging technique, researchers at Stanford are able to make movies of journeys through a three-dimensional brain. The technique produces features with functional attributes: as you travel through the three-dimensional world it produces, you can discern structural features such as synapses, dendrites, and axons, while also seeing the types and characteristics of those features. I highly recommend you use full-screen mode to view this journey.
Here are some related links I have found interesting while surfing the Internet. These have been lying around for a while, so this entry—designed to clear out some cobwebs—may contain some stale data. I've tossed some, but others may still be interesting to you as well.
[yt] TEDx Talk on Why We Feel Pain by an Interesting Talker (Lorimer Moseley)
A discussion of how the possibility of pain is transmitted, and how those possibilities are evaluated by the brain when determining whether something should be perceived as pain... or not. I agree with the basic premise, but I'm not sure about that first example. It would be nice to see some experimental confirmation of that one.
[yt] Severed Corpus Callosum
A Scientific American (Frontiers) segment with Alan Alda and Dr. Michael Gazzaniga
Slime Mold
This concept is discussed in the book using a "Seven Step Explanation" (in the chapter on Consciousness). Breaking it down into a pithy statement for you: Adaptation is required to produce neurons — Neurons are NOT required to produce adaptation.
This is a May 2010 lecture given by Professor Robert Sapolsky at Stanford University. The lecture is on schizophrenia, but it starts with a very informative segment on language. Specifically, it's about what are shaping up to be the genetic, bio-molecular correlates of grammar and language.
Warning: For most lecturers, you can kind of do little fast-forward jumps during the video, resynchronizing your cognitive-following groove after each jump. This can shave some time off the lecture.
With this guy, that's not so easy. He really loads you up with information. (I'd love to see him do a lecture on autism).
“Certainly, one of the most relevant and obvious characteristics of a present moment is that it goes away, and that characteristic must be represented internally.”
Stated plainly[1], the principle behind multitemporal synapses is that we maintain the blunt “residue” of past lessons in long-term connections, while everything else is quickly forgotten, and learned over again, in the instant. In other words, we re-learn the detailed parts of our responses as we are confronted with each new current situation.[2]
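One way to picture the principle as stated above is a synapse whose effective weight is the sum of two components operating on different timescales: a slow component that accumulates the blunt residue of past lessons, and a fast component that adapts immediately but decays just as quickly. This is only my own illustrative reading of the principle described here, not the book's implementation; the class name, attribute names, and rates are all invented for the sketch:

```python
# Illustrative sketch (not the book's mechanism): a synapse whose
# efficacy combines a slow long-term "residue" with a fast,
# quickly-forgotten in-the-moment component.

class MultitemporalSynapse:
    def __init__(self, slow_rate=0.001, fast_rate=0.5, fast_decay=0.9):
        self.slow_rate = slow_rate    # tiny steps: long-term residue builds slowly
        self.fast_rate = fast_rate    # big steps: in-the-moment detail learned at once
        self.fast_decay = fast_decay  # per-timestep forgetting of the fast component
        self.slow_w = 0.0
        self.fast_w = 0.0

    def learn(self, error, pre_activity):
        """Apply one learning event to both timescales."""
        delta = error * pre_activity
        self.slow_w += self.slow_rate * delta   # accumulates a blunt residue
        self.fast_w += self.fast_rate * delta   # captures the detailed response

    def step(self):
        """Advance one timestep: the fast component is quickly forgotten."""
        self.fast_w *= self.fast_decay

    @property
    def weight(self):
        return self.slow_w + self.fast_w
```

After a learning event, the fast component dominates the synapse's behavior in the moment; left alone for a while, it decays toward zero and only the slowly-accumulated residue remains, so new in-the-moment learning never has to overwrite it.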
One of the primary benefits of applying this principle, in the form of multitemporal synapses, is a neural network construct that is completely free of the usual problems associated with catastrophic forgetting. When you eliminate catastrophic forgetting from your neural network structure, the practical result is the ability to develop networks that continuously learn from their surroundings, just like their natural counterparts.
One major challenge with conventional neural network models has been how to maintain connections that store enough intricate, in-the-moment response details to deal with any contingency the system may encounter. Conventionally, such details would overwhelm the long-term lessons stored in permanent connection weights. This characteristic of conventional neural network models is known as the Stability-Plasticity Problem, and it is the underlying cause of "catastrophic forgetting."
When an artificial neural network that has learned a training set of responses then encounters a new response to be learned, the result is usually 'catastrophic forgetting' of all earlier learning. Training on the new detail alters connections that the network maintains in a holistic (global) fashion. Because of this, it is almost certain that such a change will radically alter the outputs that were desired for the original training set.
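The global-weights problem described above can be demonstrated with a toy of my own construction (not from any particular paper): a single linear neuron trained on two patterns, then retrained on a third. Because every weight is shared across all responses, training on the new pattern disturbs the responses learned earlier:

```python
# Toy demonstration of catastrophic forgetting in a single linear
# neuron. All names and numbers here are illustrative inventions.

def train(weights, samples, lr=0.2, epochs=200):
    """Plain delta-rule gradient descent over (input, target) pairs."""
    for _ in range(epochs):
        for x, target in samples:
            y = sum(w * xi for w, xi in zip(weights, x))
            err = target - y
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights

# Original training set: two responses the network learns first.
old_task = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
w = train([0.0, 0.0], old_task)
before = sum(wi * xi for wi, xi in zip(w, [1.0, 0.0]))  # close to 1.0

# Now train ONLY on a new response, with the old task absent.
new_task = [([1.0, 1.0], 0.0)]
w = train(w, new_task)
after = sum(wi * xi for wi, xi in zip(w, [1.0, 0.0]))

print(before, after)  # the old response has drifted far from 1.0
```

Because the new pattern shares weights with the old ones, perfecting the new response necessarily drags the old response away from its trained target. In large networks with many overlapping patterns, this drift compounds into wholesale forgetting of the original training set.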
The McGurk effect is a perceptual illusion, which shows how our perception of reality can be affected by interactions between multiple senses. The presentation of the McGurk effect demonstrated in the following video also shows, convincingly, that our visual processes can completely override our auditory perceptions of speech — at least in certain circumstances.
In the above video, you will see the speaker's lips form an 'f'-sound. You will “hear” an 'f'-sound even though the actual sound being produced is a 'b'-sound (dubbed in over the video).
In this video, the 'f' perception reported by your eyes completely overrides the 'b' perception reported by your ears. Can we conclude, from this, that visual processing in the brain is given full priority over auditory processing?
Linguists have recently discovered [1] that almost all words are metaphorical at their base, and some people (e.g., me) posit that they all are. Though speculative, it is at least conceivable that even the sub-language signaling in the brain, which eventually leads to language, is also metaphorical. Consider that the bell may become a metaphor for food in the mind of Pavlov's dog.
Language is also able to convey ambiguity about the concepts it relates. The word “life,” for example, can mean life-biology, or life-consciousness. Up until now, it has been perfectly acceptable to use these two meanings interchangeably. There simply has never been an instance of consciousness that existed outside of a biological body — at least none that we could directly experience with our physical senses.