Exploring new approaches to machine-hosted neural-network simulation, and the science behind it.
A programmer obsessed with giving experimenters a better environment for developing biologically guided neural-network designs. Author of an introductory book on the subject, "Netlab Loligo: New Approaches to Neural Network Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
"Recent observations have thoroughly established that order in groups of small particles, easily visible under a low-power microscope, can be caused spontaneously by Brownian-like movement of smaller spheres that in turn is caused by random molecular motion." — from: a paper by Frank Lambert at Entropysite.
. . . . . . .
Adams, M.; Dogic, Z.; Keller, S.L.; Fraden, S. Nature 1998, 393, 349-352 and references therein.
Laird, B. B. J. Chem. Educ. 1999, 76, 1388-1390.
Dinsmore, A. E.; Wong, D. T.; Nelson, P.; Yodh, A. G. Phys. Rev. Lett. 1998, 80, 409-412.
Semantically, as described by Chalmers in this paper, it is helpful to understand that there are many connotations, or types, of consciousness. First and foremost, there is phenomenological (a.k.a. "hard problem") consciousness. This is the one we all have a front row seat to, but have not been able to describe in a non-subjective way.
There are other types of consciousness as well. These are what Chalmers refers to as the "easy problem" forms of consciousness. These are the ones that often trip us up when talking about consciousness. Chalmers lists some examples of these:
the ability to discriminate, categorize, and react to environmental stimuli;
the integration of information by a cognitive system;
the reportability of mental states;
the ability of a system to access its own internal states;
the focus of attention;
the deliberate control of behavior;
the difference between wakefulness and sleep.
Reading philosophy doesn't make one any less of a scientist. After all, science is, itself, a philosophy.
New technique enables nanoscale-resolution microscopy of large biological specimens.
Most microscopes work by using lenses to focus light emitted from a sample into a magnified image. However, this approach has a fundamental limit known as the diffraction limit, which means that it can’t be used to visualize objects much smaller than the wavelength of the light being used. For example, if you are using blue-green light with a wavelength of 500 nanometers, you can’t see anything smaller than 250 nanometers.
“Unfortunately, in biology that’s right where things get interesting,”
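The half-wavelength figure quoted above follows from the Abbe diffraction criterion, d = λ / (2·NA), where NA is the numerical aperture of the objective; with NA near 1.0 it reduces to the "half the wavelength" rule of thumb. A quick sketch (the function name and the NA = 1.0 default are my own, for illustration):

```python
# Back-of-the-envelope Abbe diffraction limit.
# Assumes d = wavelength / (2 * NA); with NA ~ 1.0 this gives the
# half-wavelength rule mentioned in the text.

def diffraction_limit_nm(wavelength_nm: float, numerical_aperture: float = 1.0) -> float:
    """Smallest resolvable feature size, in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

print(diffraction_limit_nm(500))       # blue-green light: 250.0 nm
print(diffraction_limit_nm(500, 1.4))  # high-NA oil-immersion objective does a bit better
```

A real oil-immersion objective (NA around 1.4) pushes the limit below 200 nm, which is why the sub-diffraction techniques described in the article matter for biology.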
This is a May 2010 lecture given by Professor Robert Sapolsky at Stanford University. The lecture is on schizophrenia, but it opens with a very informative segment on language. Specifically, it covers what are shaping up to be the genetic and bio-molecular correlates of grammar and language.
Warning: For most lecturers you can kind of do little fast-forward jumps during the video, resynchronizing your cognitive-following groove after each jump. This can shave some time off the lecture.
With this guy, that's not so easy. He really loads you up with information. (I'd love to see him do a lecture on autism).
“Certainly, one of the most relevant and obvious characteristics of a present moment is that it goes away, and that characteristic must be represented internally.”
Stated plainly, the principle behind multitemporal synapses is that we maintain the blunt “residue” of past lessons in long-term connections, while everything else is quickly forgotten, and learned over again, in the instant. In other words, we re-learn the detailed parts of our responses as we are confronted with each new current situation.
One of the primary benefits of applying this principle, in the form of multitemporal synapses, is a neural network construct that is completely free of the usual problems associated with catastrophic forgetting. When you eliminate catastrophic forgetting from your neural network structure, the practical result is the ability to develop networks that continuously learn from their surroundings, just like their natural counterparts.
One major challenge with conventional neural-network models has been how to maintain connections that store enough intricate, in-the-moment response detail to deal with any contingency the system may encounter. Conventionally, such details would overwhelm the long-term lessons stored in permanent connection weights. This characteristic of conventional neural-network models is known as the stability-plasticity problem, and it is the underlying cause of "catastrophic forgetting."
When an artificial neural network that has learned a training set of responses then encounters a new response to be learned, the result is usually 'catastrophic forgetting' of all earlier learning. Training on the new detail alters connections that the network maintains in a holistic (global) fashion. Because of this, it is almost certain that such a change will radically alter the outputs that were desired for the original training set.
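One common way to realize something like multitemporal synapses is to split each connection into a slow, long-term weight plus a fast weight that decays quickly. The sketch below is my own illustration of that split (the class name, learning rates, and decay constant are assumptions, not the book's implementation): most of each new lesson lands in the fast weights and evaporates, while only a faint residue accumulates in the slow weights.

```python
import numpy as np

# Toy multitemporal synapse: each connection is the sum of a slow
# long-term weight and a fast, rapidly decaying weight, so in-the-moment
# detail never overwrites the long-term "residue" of past lessons.
# All names and constants here are illustrative assumptions.

class MultitemporalLayer:
    def __init__(self, n_in, n_out, fast_decay=0.5, slow_lr=0.001, fast_lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.slow = rng.normal(0.0, 0.1, (n_in, n_out))  # changes very slowly
        self.fast = np.zeros((n_in, n_out))              # relearned each moment
        self.fast_decay = fast_decay
        self.slow_lr = slow_lr
        self.fast_lr = fast_lr

    def forward(self, x):
        # Effective weight is the sum of both timescales.
        return x @ (self.slow + self.fast)

    def update(self, x, error):
        # Hebbian-style correction; most of it lands in the fast weights.
        delta = np.outer(x, error)
        self.fast = self.fast_decay * self.fast + self.fast_lr * delta
        self.slow += self.slow_lr * delta  # only a faint residue persists

layer = MultitemporalLayer(4, 2)
before = layer.slow.copy()
layer.update(np.ones(4), np.array([1.0, -1.0]))
fast_change = np.abs(layer.fast).mean()
slow_change = np.abs(layer.slow - before).mean()
print(fast_change > slow_change)  # the fast weights absorb most of the lesson
```

Because the fast component decays toward zero on its own, new detail can be relearned on every encounter without disturbing the stable long-term weights, which is the escape from the stability-plasticity trap described above.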
Dennis Ritchie, the creator of the C programming language, died on Saturday after a long illness. The C programming language, arguably, changed the world. It can be found at the heart of most modern computer applications, operating systems, and successor programming languages.
Dennis Ritchie, creator of the C programming language
Scientists at UC Berkeley have taken brain scans of subjects in an fMRI machine while they watched a movie clip. They then reconstructed the movie the subjects were watching using only the brain-scan data and a database of 18 million seconds of random video gleaned from the web.
First, they used fMRI imaging to measure brain activity in visual cortex as a person looked at several hours of movies. They then used those data to develop computational models that could predict the pattern of brain activity that would be elicited by any arbitrary movies (i.e., movies that were not in the initial set). Next, they used fMRI to measure brain activity elicited by a second set of movies that were also distinct from the first set. Finally, they used the computational models to process the elicited brain activity, and reconstruct the movies in the second set.
The amount of new understanding this could allow us to gather about mind-brain correlates and first person knowledge should be considerable. If this lives up to the hype, a lot of new research ideas should come out of it. Keeping fingers crossed here.
In the above clip, the movie that each subject viewed while in the fMRI is shown in the upper-left position. Reconstructions for three subjects are shown in the three rows at bottom. All these reconstructions were obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. The reconstruction at far left is the Average High Posterior (AHP). The reconstruction in the second column is the Maximum a Posteriori (MAP). The other columns represent less likely reconstructions. The AHP is obtained by simply averaging the 100 most likely movies in the reconstruction library. These reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of the brain-activity data recorded from each subject. [source: Gallant Lab (see resources below)]
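The AHP step described above can be sketched in a few lines: score every clip in the library by how well the encoding model's predicted brain activity matches the measured activity, then average the top-k clips. Everything below is a toy stand-in of my own (random vectors for clips, a random linear model for the encoder), not the Gallant Lab's actual models or data; it only illustrates the rank-then-average structure.

```python
import numpy as np

# Toy Average High Posterior (AHP) reconstruction: rank library clips by
# predicted-vs-measured activity match, then average the k best matches.
# The encoding model and "clips" are random stand-ins for illustration.

def correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def reconstruct_ahp(measured, library_clips, predict_activity, k=100):
    """Average High Posterior: mean of the k best-matching library clips."""
    scores = [correlation(measured, predict_activity(clip)) for clip in library_clips]
    top = np.argsort(scores)[-k:]  # indices of the k most likely clips
    return np.mean([library_clips[i] for i in top], axis=0)

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 300))                 # stand-in linear encoding model
predict = lambda clip: W @ clip                # voxel responses predicted from a clip
library = [rng.normal(size=300) for _ in range(1000)]
measured = predict(library[42])                # pretend the subject watched clip 42
ahp = reconstruct_ahp(measured, library, predict, k=100)
print(ahp.shape)  # (300,)
```

Averaging over the 100 most likely clips rather than taking only the single best match (the MAP) trades sharpness for robustness, which is consistent with the blurry-but-recognizable look of the reconstructions in the clip.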