<?xml version="1.0" encoding="utf-8" ?>

<rss version="2.0" 
   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   xmlns:admin="http://webns.net/mvcb/"
   xmlns:dc="http://purl.org/dc/elements/1.1/"
   xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
   xmlns:wfw="http://wellformedweb.org/CommentAPI/"
   xmlns:content="http://purl.org/rss/1.0/modules/content/"
   >
<channel>
    
    <title>Loligo Blog - Science &amp; Tech</title>
    <link>https://www.standoutpublishing.com/Blog/</link>
    <description>Neural Networks &amp; Robotics</description>
    <dc:language>en</dc:language>
    <generator>Serendipity 2.1.6 - http://www.s9y.org/</generator>
    <pubDate>Wed, 26 Dec 2018 16:04:55 GMT</pubDate>

    <image>
    <url>http://standoutpublishing.com/Res/Image/NN137x150Wht.jpg</url>
    <title>RSS: Loligo Blog - Science &amp; Tech - Neural Networks &amp; Robotics</title>
    <link>https://www.standoutpublishing.com/Blog/</link>
    <width>137</width>
    <height>150</height>
</image>

<item>
    <title>Learning is Ubiquitous -- 2</title>
    <link>https://www.standoutpublishing.com/Blog/archives/131-Learning-is-Ubiquitous-2.html</link>
            <category>Neural Networks</category>
            <category>Philosophy-Consc.</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/131-Learning-is-Ubiquitous-2.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=131</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=131</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;br /&gt;
Recently read:&lt;br /&gt;
&lt;blockquote&gt;&lt;br /&gt;
&quot;Recent observations have thoroughly established that order in groups of small particles, easily visible under a low-power microscope, can be caused spontaneously by Brownian-like movement of smaller spheres that in turn is caused by random molecular motion.&quot; &amp;mdash; from: &lt;a href=&quot;http://entropysite.oxy.edu/cracked_crutch.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;a paper by Frank Lambert at Entropysite.&lt;/a&gt;&lt;br /&gt;
&lt;/blockquote&gt;&lt;br /&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt; . . . . . . .&lt;br /&gt;
References:&lt;/span&gt;&lt;br /&gt;
&lt;ul&gt;
   &lt;li&gt; Adams, M.; Dogic, Z.; Keller, S.L.; Fraden, S. Nature 1998, 393, 349-352 and references therein.
   &lt;li&gt; Laird, B. B. J. Chem. Educ. 1999, 76, 1388-1390.
   &lt;li&gt; Dinsmore, A. E.; Wong, D. T.; Nelson, P.; Yodh, A. G. Phys. Rev. Letters 1998, 80, 409-412.
   &lt;li&gt; Frenkel, D.  Phys. World 1993, 6, 24-25. 

   &lt;li&gt; See: &lt;a href=&quot;http://standoutpublishing.com/Blog/archives/120-Learning-Is-Ubiquitous.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Learning Is Ubiquitous&lt;/a&gt;

&lt;/ul&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-style:italic&quot;&gt;*(will expand later)&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Wed, 26 Dec 2018 14:05:02 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/131-guid.html</guid>
    <category>Consciousness</category>
<category>Memory</category>
<category>Mind-Brain</category>

</item>
<item>
    <title>Consciousness: If It's Easy, You May Be Doing It Wrong.</title>
    <link>https://www.standoutpublishing.com/Blog/archives/108-Consciousness-If-Its-Easy,-You-May-Be-Doing-It-Wrong..html</link>
            <category>Distraction</category>
            <category>Philosophy-Consc.</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/108-Consciousness-If-Its-Easy,-You-May-Be-Doing-It-Wrong..html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=108</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=108</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    There have been many studies and articles like these lately:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt; &lt;a href=&quot;http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.108103&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Percolation Model of Sensory Transmission and Loss of Consciousness Under General Anesthesia&lt;/a&gt;
&lt;li&gt; &lt;a href=&quot;http://physics.aps.org/articles/v8/85&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Focus: How Anesthesia Switches Off Consciousness&lt;/a&gt;
&lt;li&gt; &lt;a href=&quot;https://www.newscientist.com/article/mg21228402-300-banishing-consciousness-the-mystery-of-anaesthesia/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Banishing consciousness: the mystery of anaesthesia&lt;/a&gt;
&lt;li&gt; &lt;a href=&quot;http://www.huffingtonpost.com/deepak-chopra/consciousness-and-anesthe_b_719715.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Consciousness and Anesthesia with Stuart Hameroff&lt;/a&gt;
&lt;/ul&gt;&lt;br /&gt;
&lt;br /&gt;
This (imo) is why every good scientist should take the time to read and understand a little bit of philosophy.&lt;br /&gt;
&lt;br /&gt;
May I suggest Chalmers, &lt;a href=&quot;http://consc.net/papers/facing.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Facing Up to the Problem of Consciousness&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
As described by Chalmers in this paper, it is helpful to understand that there are many connotations, or types, of consciousness. First and foremost, there is phenomenological (a.k.a. &quot;hard problem&quot;) consciousness. This is the one we all have a front-row seat to, but have not been able to describe in a non-subjective way.&lt;br /&gt;
&lt;br /&gt;
There are other types of consciousness as well. These are what Chalmers refers to as the &quot;easy problem&quot; forms of consciousness. These are the ones that often trip us up when talking about consciousness. Chalmers lists some examples of these:&lt;br /&gt;
&lt;ul&gt;
  &lt;li&gt; the ability to discriminate, categorize, and react to environmental stimuli;
  &lt;li&gt; the integration of information by a cognitive system;
  &lt;li&gt; the reportability of mental states;
  &lt;li&gt; the ability of a system to access its own internal states;
  &lt;li&gt; the focus of attention;
  &lt;li&gt; the deliberate control of behavior;
  &lt;li&gt; the difference between wakefulness and sleep.
&lt;/ul&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Reading philosophy doesn&#039;t make one any less of a scientist. After all, science is &amp;mdash;itself&amp;mdash; a philosophy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Thu, 01 Oct 2015 12:30:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/108-guid.html</guid>
    <category>Consciousness</category>
<category>Mind-Brain</category>
<category>Neuroscience</category>
<category>Random-thoughts</category>

</item>
<item>
    <title>MIT Expands Brain Tissue for Better Microscope Imaging</title>
    <link>https://www.standoutpublishing.com/Blog/archives/104-MIT-Expands-Brain-Tissue-for-Better-Microscope-Imaging.html</link>
            <category>Biology</category>
            <category>Neural Networks</category>
            <category>News</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/104-MIT-Expands-Brain-Tissue-for-Better-Microscope-Imaging.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=104</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=104</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;span style=&quot;font-style:italic&quot;&gt;&lt;font size=&quot;+1&quot;&gt;New technique enables nanoscale-resolution microscopy of large biological specimens.&lt;/font&gt;&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-style:italic&quot;&gt;Most microscopes work by using lenses to focus light emitted from a sample into a magnified image. However, this approach has a fundamental limit known as the diffraction limit, which means that it can’t be used to visualize objects much smaller than the wavelength of the light being used. For example, if you are using blue-green light with a wavelength of 500 nanometers, you can’t see anything smaller than 250 nanometers.&lt;br /&gt;
&lt;br /&gt;
“Unfortunately, in biology that’s right where things get interesting,”&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;a href=&quot;http://newsoffice.mit.edu/2015/enlarged-brain-samples-easier-to-image-0115&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;MIT team enlarges brain samples, making them easier to image&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;//www.youtube.com/embed/N66feuwmGNU&quot; frameborder=&quot;0&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;!--&lt;br /&gt;
&lt;a href=&quot;http://www.pressreleasepoint.com/diaper-compound-may-expand-power-microscopes&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Diaper compound may expand power of microscopes&lt;/a&gt;&lt;br /&gt;
--&gt;&lt;br /&gt;
&lt;!-- Brain tissue can be treated with the same stuff used in baby diapers. When it gets wet, it absorbs the water and expands, causing the brain tissue to expand with it. Features that are normally too small for optical microscopes to image clearly (a fundamental limit known as the diffraction limit), can be blown up to a size that is suitable for light microscopy.&lt;br /&gt;
--&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Sat, 24 Jan 2015 23:37:04 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/104-guid.html</guid>
    <category>Biology</category>
<category>Imaging</category>
<category>Neuroscience</category>

</item>
<item>
    <title>Language and Schizophrenia Lecture -- Robert Sapolsky, Stanford</title>
    <link>https://www.standoutpublishing.com/Blog/archives/96-Language-and-Schizophrenia-Lecture-Robert-Sapolsky,-Stanford.html</link>
            <category>Biology</category>
            <category>Neural Networks</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/96-Language-and-Schizophrenia-Lecture-Robert-Sapolsky,-Stanford.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=96</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=96</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;br /&gt;
&lt;br /&gt;
This is a May 2010 lecture given by Professor Robert Sapolsky at Stanford University. The lecture is on schizophrenia, but it opens with a very informative segment on language. Specifically, it&#039;s about what are shaping up to be the genetic, bio-molecular correlates of grammar and language.&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt;Warning&lt;/span&gt;: For most lecturers you can kind-of do little fast-forward jumps during the video, resynchronizing your cognitive-following groove after each jump. This can shave some time off the lecture.&lt;br /&gt;
&lt;br /&gt;
With this guy, that&#039;s not so easy. He really loads you up with information. (I&#039;d love to see him do a lecture on autism).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;a href=&quot;https://www.youtube.com/watch?v=nEnklxGAmak&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;https://www.youtube.com/watch?v=nEnklxGAmak&lt;/a&gt;&lt;br /&gt;
&lt;center&gt;&lt;br /&gt;
&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/nEnklxGAmak&quot; frameborder=&quot;0&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;br /&gt;
&lt;/center&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt;Viewing note:&lt;/span&gt; This starts with a wrap-up on a previous lecture on language and linguistics. The Schizophrenia lecture begins at around &lt;span style=&quot;font-weight:bold&quot;&gt;23:30&lt;/span&gt;.&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Fri, 12 Apr 2013 04:08:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/96-guid.html</guid>
    <category>Biology</category>
<category>cognition-perception</category>
<category>Consciousness</category>
<category>Mind-Brain</category>
<category>Neuroscience</category>

</item>
<item>
    <title>Multitemporal Synapses and Our Perception of a Present Moment</title>
    <link>https://www.standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html</link>
            <category>Neural Networks</category>
            <category>Philosophy-Consc.</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=70</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=70</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;!-- &lt;img width=&quot;25%&quot; src=&quot;http://standoutpublishing.com/Site/ooRes/Blog/TimePassingMetaphor01.jpg&quot;&gt; --&gt;&lt;br /&gt;
&lt;a name=&quot;Overview&quot;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
Overview&lt;/h2&gt;&lt;table width=&quot;99%&quot;&gt;&lt;tr&gt;&lt;td width=&quot;23%&quot; align=&quot;left&quot; valign=&quot;top&quot;&gt;&amp;#160;&lt;/td&gt;
&lt;td width=&quot;60%&quot; align=&quot;left&quot; valign=&quot;top&quot;&gt;&lt;br /&gt;
       &lt;font size=&quot;+1&quot;&gt;&lt;b&gt;&amp;ldquo;&lt;/b&gt;&lt;/font&gt;&lt;font size=&quot;-1&quot;&gt;&lt;span style=&quot;font-style:italic&quot;&gt;Certainly, one of the most relevant and obvious characteristics of a present moment is that it goes away, and that characteristic must be represented internally.&lt;/span&gt;&lt;/font&gt;&lt;font size=&quot;+1&quot;&gt;&lt;b&gt;&amp;rdquo;&lt;/b&gt;&lt;/font&gt;
&lt;/td&gt;&lt;td width=&quot;19%&quot; align=&quot;left&quot; valign=&quot;top&quot;&gt;&amp;#160;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;table width=&quot;99%&quot;&gt;
&lt;tr&gt;
&lt;td width= &quot;65%&quot; align=&quot;left&quot; valign=&quot;top&quot;&gt;
Stated plainly&lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#FootNotes&quot;&gt;[1]&lt;/a&gt;&lt;/span&gt;, the principle behind &lt;span style=&quot;font-style:italic&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/g/Multitemporal-Synapse.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;multitemporal synapses&lt;/a&gt;&lt;/span&gt; is that we maintain the blunt &amp;ldquo;residue&amp;rdquo; of past lessons in long-term connections, while everything else is quickly forgotten, and learned over again, in the instant. In other words, we &lt;span style=&quot;font-weight:bold&quot;&gt;&lt;span style=&quot;font-style:italic&quot;&gt;re-&lt;/span&gt;&lt;/span&gt;learn the detailed parts of our responses as we are confronted with each new current situation.&lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#FootNotes&quot;&gt;[2]&lt;/a&gt;&lt;/span&gt;
 &lt;br /&gt;&lt;br /&gt;

One of the primary benefits of applying this principle, in the form of multitemporal synapses, is a neural network construct that is completely free of the usual problems associated with &lt;span style=&quot;font-style:italic&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/g/CatastrophicForgetting.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;catastrophic forgetting&lt;/a&gt;&lt;/span&gt;. When you eliminate catastrophic forgetting from your neural network structure, the practical result is the ability to develop networks that continuously learn from their surroundings, just like their natural counterparts.
&lt;/td&gt;
&lt;td width=&quot;35%&quot; align=&quot;center&quot; valign=&quot;top&quot;&gt;
&lt;br /&gt; 
&lt;span&gt;
&lt;table width=&quot;99%&quot;&gt;&lt;tr&gt;&lt;td width=&quot;99%&quot;&gt;
&lt;img style=&quot;width: 99%; max-width: 99%;&quot; src=&quot;http://standoutpublishing.com/Site/ooRes/Blog/TimePassingMetaphor01.jpg&quot;&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/span&gt;&lt;br /&gt;
&lt;/td&gt;&lt;br /&gt;
&lt;/tr&gt;&lt;br /&gt;
&lt;/table&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;PageTOC&quot;&gt;&lt;br /&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt;. . . . . . .&lt;/span&gt;&lt;br /&gt;
Contents&lt;br /&gt;
&lt;suppressLF&gt;
 &lt;ul&gt;

 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#BlogEntryTop&quot;
        &gt;Overview&lt;/a&gt;
    &lt;p /&gt;&lt;/li&gt;

 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#MajorProblem&quot;
        &gt;A Major Problem With Neural Networks&lt;/a&gt;
    &lt;p /&gt;&lt;/li&gt;

 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#ConstantLearning&quot;
        &gt;Constant Learning&lt;/a&gt;
    &lt;p /&gt;&lt;/li&gt;

&lt;!--
 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#MultitemporalSynapsesSimple&quot;
        &gt;The Term &amp;ldquo;Multitemporal Synapse&amp;rdquo; Is a Simplification&lt;/a&gt;
    &lt;p /&gt;&lt;/li&gt;
--&gt;

 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#MultitemporalConnectionStrengths&quot;
        &gt;Multitemporal Connection Strengths&lt;/a&gt;
    &lt;p /&gt;&lt;/li&gt;

    &lt;ul&gt;
       &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#TwoTimeExplanation&quot;
        &gt;A Two Time-Span Explanation&lt;/a&gt;
       &lt;p /&gt;&lt;/li&gt;
  &lt;/ul&gt;&lt;/li&gt;

 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#Parsimony&quot;
        &gt;Does This Seem Wasteful to You?&lt;/a&gt;
     &lt;p /&gt;&lt;/li&gt;

 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#AcquisitionDelayVsActionDelay&quot;
        &gt;Acquisition Delay vs Action Delay&lt;/a&gt;
    &lt;p /&gt;&lt;/li&gt;

 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#ArrowOfTime&quot;
        &gt;Representing Now&#039;s Defining Characteristic&lt;/a&gt;
    &lt;p /&gt;&lt;/li&gt;

 &lt;li&gt; &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#CoolVisualization&quot;
        &gt;Summary - And An Interesting Visualization&lt;/a&gt;
    &lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/li&gt;
 
 &lt;li&gt;
    &lt;a class=&quot;tlab&quot; href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#SourcesAndResources&quot;
        &gt;Sources and Resources&lt;/a&gt;&lt;/li&gt;

 &lt;/ul&gt;
&lt;/suppressLF&gt;&lt;br /&gt;
&lt;/div&gt;  &lt;!-- PageTOC --&gt;&lt;br /&gt;
&lt;a name=&quot;MajorProblem&quot;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;div class=&quot;JumpTop&quot;&gt;  &lt;sup&gt;  &lt;a href=&quot;#BlogEntryTop&quot;&gt;[top]&lt;/A&gt;  &lt;/sup&gt;&lt;/div&gt;&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
A Major Problem With Neural Networks&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;br /&gt;
One major challenge with conventional neural network models has been how to maintain connections that store enough intricate, in-the-moment response details to deal with any contingency the system may encounter. Conventionally, such details would overwhelm the long-term lessons stored in permanent connection weights. This characteristic of conventional neural network models is known as &lt;span style=&quot;font-style:italic&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/g/The-Stability-Plasticity-Problem.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;The Stability Plasticity Problem&lt;/a&gt;&lt;/span&gt;, and is the underlying cause of &quot;&lt;span style=&quot;font-style:italic&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/g/catastrophicforgetting.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;catastrophic forgetting&lt;/a&gt;&lt;/span&gt;.&quot;&lt;br /&gt;
&lt;br /&gt;
When an artificial neural network that has learned a training set of responses then encounters a new response to be learned, the result is usually ‘&lt;span style=&quot;font-style:italic&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/g/catastrophicforgetting.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;catastrophic forgetting&lt;/a&gt;&lt;/span&gt;’ of all earlier learning. Training on the new detail alters connections that the network maintains in a holistic (global) fashion. Because of this, it is almost certain that such a change will radically alter the outputs that were desired for the original training set. &lt;!-- In other words, global representation causes learning any one new pattern to interfere with the storage of all other responses that have been previously trained. --&gt;&lt;br /&gt;
&lt;br /&gt;
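The failure mode described above can be sketched in a few lines of Python. This toy script (my own illustration, not code from the post) trains a two-weight linear unit on one task with plain gradient descent, then trains it on a single new pattern, and shows the original task&#039;s error exploding:

```python
# Toy demonstration of catastrophic forgetting in a network with
# globally shared weights (illustrative only; this is ordinary SGD,
# not the multitemporal-synapse scheme described in the post).

def train(w, samples, lr=0.5, epochs=200):
    """Fit weights to (input, target) pairs with plain SGD."""
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            err = y - t
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def mse(w, samples):
    """Mean squared error of the linear unit over a sample set."""
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - t) ** 2
               for x, t in samples) / len(samples)

# Two tasks that must share the same (global) weights.
task_a = [([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
task_b = [([0.0, 1.0], 1.0)]

w = train([0.0, 0.0], task_a)
print("Task A error after training on A:", round(mse(w, task_a), 4))

w = train(w, task_b)  # now train only on the single new pattern
print("Task A error after training on B:", round(mse(w, task_a), 4))
```

Training on the one new pattern drags a shared weight far from the value task A depended on, which is exactly the global-interference effect the post attributes to conventional connection weights.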
 &lt;a class=&quot;block_level&quot; href=&quot;https://www.standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html#extended&quot;&gt;Continue reading &quot;Multitemporal Synapses and Our Perception of a Present Moment&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Fri, 20 Apr 2012 00:44:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/70-guid.html</guid>
    <category>cognition-perception</category>
<category>Multitemporal-Synapse</category>
<category>Temporality</category>

</item>
<item>
    <title>Goodbye, World - Dennis Ritchie Creator of C Has Died</title>
    <link>https://www.standoutpublishing.com/Blog/archives/81-Goodbye,-World-Dennis-Ritchie-Creator-of-C-Has-Died.html</link>
            <category>Announcements</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/81-Goodbye,-World-Dennis-Ritchie-Creator-of-C-Has-Died.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=81</wfw:comment>

    <slash:comments>4</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=81</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    Dennis Ritchie, the creator of the C programming language, died on Saturday after battling a long illness. The C programming language, arguably, changed the world. It can be found at the heart of most modern computer applications, operating systems, and successor programming languages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#160;&amp;#160;&amp;#160;&lt;span style=&quot;font-weight:bold&quot;&gt;Dennis Ritchie&lt;/span&gt;&lt;br /&gt;
&amp;#160;&amp;#160;&amp;#160;&lt;span style=&quot;font-style:italic&quot;&gt;Creator of the C programming language&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#160;&amp;#160;&amp;#160;&lt;span style=&quot;font-weight:bold&quot;&gt;9 September 1941 &amp;mdash;  8 October 2011&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#160;&amp;#160;&amp;#160;&lt;img src=&quot;http://standoutpublishing.com/Site/gloss/oo/Image/RitchieDennis.jpg&quot;&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-style:italic&quot;&gt;There&#039;s an &lt;a href=&quot;http://www.guardian.co.uk/technology/2011/oct/13/dennis-ritchie?newsfeed=true&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;obituary, and a very well researched history, at the Guardian&lt;/a&gt;&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From his book, &quot;The C Programming Language&quot;&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
main()
{
        printf(&quot;hello, world\n&quot;);
}
&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Wed, 12 Oct 2011 21:43:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/81-guid.html</guid>
    <category>History</category>

</item>
<item>
    <title>UC Berkeley - Scientists use brain imaging to reveal the movies in our mind</title>
    <link>https://www.standoutpublishing.com/Blog/archives/79-UC-Berkeley-Scientists-use-brain-imaging-to-reveal-the-movies-in-our-mind.html</link>
            <category>Biology</category>
            <category>Neural Networks</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/79-UC-Berkeley-Scientists-use-brain-imaging-to-reveal-the-movies-in-our-mind.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=79</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=79</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    Scientists at UC Berkeley have taken brain scans of subjects in an &lt;a href=&quot;http://standoutpublishing.com/g/fMRI.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;fMRI&lt;/a&gt; machine while they watched a movie clip. They then reconstructed the movie the subjects were watching using only the brain scan data and a database of 18 million seconds of random video gleaned from the web.&lt;br /&gt;
&lt;br /&gt;
&lt;table width=&quot;95%&quot;&gt;&lt;tr&gt;
&lt;td width=&quot;45%&quot; align=&quot;left&quot; valign=&quot;top&quot;&gt;
First, they used &lt;a href=&quot;http://standoutpublishing.com/g/fMRI.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;fMRI&lt;/a&gt; imaging to measure brain activity in visual cortex as a person looked at several hours of movies. They then used those data to develop computational models that could predict the pattern of brain activity that would be elicited by any arbitrary movies (i.e., movies that were not in the initial set). Next, they used fMRI to measure brain activity elicited by a second set of movies that were also distinct from the first set. Finally, they used the computational models to process the elicited brain activity, and reconstruct the movies in the second set. 

&lt;/td&gt;
&lt;td width=&quot;55%&quot; align=&quot;center&quot; valign=&quot;top&quot;&gt;
&lt;img width=&quot;85%&quot; src=&quot;http://standoutpublishing.com/Site/ooRes/Blog/MindImageBird.jpg&quot;&gt;
&lt;/td&gt;
&lt;/tr&gt;&lt;/table&gt;&lt;br /&gt;
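The four steps above can be sketched as a toy numerical experiment. This is my own rough illustration of the general idea (a linear encoding model plus library matching), not the Gallant Lab code, and all names and sizes here are invented:

```python
# Toy sketch: fit an "encoding model" that predicts brain responses
# from stimulus features, then reconstruct a held-out stimulus by
# averaging the library clips whose predicted responses best match
# the measured response (a crude stand-in for the AHP average).
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_vox = 8, 20
true_W = rng.normal(size=(n_feat, n_vox))   # the unknown brain mapping

# Step 1: record responses to a training set of "movies" (feature vectors).
train_feats = rng.normal(size=(200, n_feat))
train_resp = train_feats @ true_W + 0.1 * rng.normal(size=(200, n_vox))

# Step 2: fit the encoding model by least squares.
W_hat, *_ = np.linalg.lstsq(train_feats, train_resp, rcond=None)

# Step 3: record the response to a new stimulus not in the training set.
library = rng.normal(size=(1000, n_feat))   # stand-in for random web clips
target = library[123]                       # the clip actually shown
resp = target @ true_W + 0.1 * rng.normal(size=n_vox)

# Step 4: rank library clips by how well their predicted responses
# match the measured response, and average the best matches.
pred = library @ W_hat
dists = np.linalg.norm(pred - resp, axis=1)
top = np.argsort(dists)[:10]
recon = library[top].mean(axis=0)
print("rank of the true clip:", int(np.argsort(dists).tolist().index(123)))
```

The real study works with movie frames and visual-cortex voxels rather than abstract feature vectors, but the match-against-a-library structure is the same.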
&lt;br /&gt;
The amount of new understanding this could allow us to gather about mind-brain correlates and &lt;a href=&quot;http://standoutpublishing.com/g/First-Person-Knowledge.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;first person knowledge&lt;/a&gt; should be considerable. If this lives up to the hype, a lot of new research ideas should come out of it. Keeping fingers crossed here.&lt;br /&gt;
&lt;br /&gt;
&lt;center&gt;&lt;br /&gt;
&lt;iframe width=&quot;420&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/KMA23JJ1M1o&quot; frameborder=&quot;0&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;br /&gt;
&lt;/center&gt;&lt;br /&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt;In the above clip&lt;/span&gt; - the movie that each subject viewed while in the &lt;a href=&quot;http://standoutpublishing.com/g/fMRI.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;fMRI&lt;/a&gt; is shown in the upper left position. Reconstructions for three subjects are shown in the three rows at bottom. All these reconstructions were obtained using only each subject&#039;s brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. The reconstruction at far left is the Average High Posterior (AHP). The reconstruction in the second column is the Maximum a Posteriori  (MAP). The other columns represent less likely reconstructions. The AHP is obtained by simply averaging over the 100 most likely movies in the reconstruction library. These reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of brain activity data recorded from each subject. &lt;span style=&quot;font-style:italic&quot;&gt;[source: Gallant Lab (see resources below)]&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;a class=&quot;block_level&quot; href=&quot;https://www.standoutpublishing.com/Blog/archives/79-UC-Berkeley-Scientists-use-brain-imaging-to-reveal-the-movies-in-our-mind.html#extended&quot;&gt;Continue reading &quot;UC Berkeley - Scientists use brain imaging to reveal the movies in our mind&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Sat, 24 Sep 2011 16:41:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/79-guid.html</guid>
    <category>Biology</category>
<category>cognition-perception</category>
<category>Imaging</category>
<category>Mind-Brain</category>
<category>Neuroscience</category>

</item>
<item>
    <title>True Random Number Generator Using Only Logic Gates?</title>
    <link>https://www.standoutpublishing.com/Blog/archives/78-True-Random-Number-Generator-Using-Only-Logic-Gates.html</link>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/78-True-Random-Number-Generator-Using-Only-Logic-Gates.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=78</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=78</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;br /&gt;
In the book, &lt;a href=&quot;http://standoutpublishing.com/Prod/Book/Netlabv03a/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;&lt;span style=&quot;font-style:italic&quot;&gt;Netlab Loligo&lt;/span&gt;&lt;/a&gt;, repeated calls are made for true random number generators (&lt;span style=&quot;font-style:italic&quot;&gt;TRNG&lt;/span&gt;s) to be included in all &lt;span style=&quot;font-style:italic&quot;&gt;CPU&lt;/span&gt;s, or at least in those that are intended for use in neural network applications. Naturally, I was very excited to see a headline about Intel having developed one with general-purpose use in mind.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
Intel&#039;s Low-Power &amp;ldquo;True&amp;rdquo; Random Number Generator&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
IEEE has &lt;a href=&quot;http://spectrum.ieee.org/semiconductors/processors/behind-intels-new-randomnumber-generator/?utm_source=techalert&amp;utm_medium=email&amp;utm_campaign=090111&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;an article about a new &amp;ldquo;true&amp;rdquo; random number generator from Intel&lt;/a&gt; that has been ten years in development. Its primary advantage is that, while it is a true RNG, it operates entirely in the digital domain, using only digital devices to harvest randomness from the hardware. The slow, energy-hogging analog technology normally needed to glean randomness from quantum phenomena has been eliminated. It has a few quirks, such as the need to force the outputs of its two cross-coupled inverters high, and the seemingly unavoidable need to compensate using averaging techniques. I expand just a little on these quirks below.&lt;br /&gt;
&lt;br /&gt;
In the spirit of not critiquing something without at least offering a sincere attempt at a solution, I&#039;ve put forward a quick (if dirty) attempt at an &amp;ldquo;all logic gates&amp;rdquo; DTRNG (Digital True Random Number Generator) below. Only the equations were scratched out at the IEEE blog; I&#039;ve since produced a circuit-diagram graphic, which is included here as well.&lt;br /&gt;
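The &amp;ldquo;averaging&amp;rdquo; compensation quirk mentioned above is easiest to see in software. Below is a classic von Neumann debiasing step, a standard whitening technique rather than the circuit from the IEEE post; the 70/30 bias of the simulated raw source is invented for illustration:

```python
import random

random.seed(42)  # deterministic demo

def biased_bits(n, p_one=0.7):
    """Stand-in for a raw physical bit source with a bias toward 1.
    The bias value is invented for illustration."""
    return random.choices([1, 0], weights=[p_one, 1.0 - p_one], k=n)

def von_neumann_debias(bits):
    """Classic whitening step: read the raw bits in pairs, emit the first
    bit of each (0,1) or (1,0) pair, and discard (0,0) and (1,1) pairs.
    The output is unbiased whenever the pairs are independent."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

raw = biased_bits(10000)
clean = von_neumann_debias(raw)
```

The price of the whitening, as with any such compensation, is throughput: at least half the raw bits are always thrown away.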
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;a class=&quot;block_level&quot; href=&quot;https://www.standoutpublishing.com/Blog/archives/78-True-Random-Number-Generator-Using-Only-Logic-Gates.html#extended&quot;&gt;Continue reading &quot;True Random Number Generator Using Only Logic Gates?&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Wed, 07 Sep 2011 19:42:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/78-guid.html</guid>
    <category>Electronics</category>
<category>Temporality</category>

</item>
<item>
    <title>Robot Arm Inventor Dies</title>
    <link>https://www.standoutpublishing.com/Blog/archives/76-Robot-Arm-Inventor-Dies.html</link>
            <category>Robotics</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/76-Robot-Arm-Inventor-Dies.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=76</wfw:comment>

    <slash:comments>1</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=76</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;span style=&quot;font-weight:bold&quot;&gt;George Charles Devol&lt;/span&gt;&lt;br /&gt;
&lt;span style=&quot;font-style:italic&quot;&gt;Inventor of Robot Arm&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#160;&amp;#160;&amp;#160;&amp;#160;&amp;#160;&amp;#160;&lt;img src=&quot;http://standoutpublishing.com/Site/ooRes/Blog/GeorgeCharlesDevol.jpg&quot;&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt;20 February 1912  &amp;mdash; 11 August 2011&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
The self-taught inventor of the robotic arm, which has become an icon of factory automation from Detroit to Asia, has died.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Story: &lt;a href=&quot;http://www.nytimes.com/2011/08/16/business/george-devol-developer-of-robot-arm-dies-at-99.html?_r=1&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Pioneer of the long arm of factory floor technology&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Thu, 11 Aug 2011 13:59:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/76-guid.html</guid>
    <category>Actuators</category>
<category>History</category>
<category>Robotics</category>

</item>
<item>
    <title>Four New Species of Zombifying Ant Fungus Found</title>
    <link>https://www.standoutpublishing.com/Blog/archives/74-Four-New-Species-of-Zombifying-Ant-Fungus-Found.html</link>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/74-Four-New-Species-of-Zombifying-Ant-Fungus-Found.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=74</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=74</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;br /&gt;
&lt;table width=&quot;100%&quot;&gt;&lt;tr&gt;
&lt;td width=&quot;40%&quot; align=&quot;center&quot; valign=&quot;top&quot;&gt;
 &lt;img width=&quot;99%&quot; style=&quot;max-width: 99%&quot; src=&quot;http://standoutpublishing.com/Site/ooRes/Blog/CarpenterAntFungus.jpg&quot;&gt;
&lt;/td&gt;
&lt;td width=&quot;1%&quot;&gt;&amp;#160;&lt;/td&gt;
&lt;td width=&quot;59%&quot; align=&quot;left&quot; valign=&quot;top&quot;&gt;
From the press release:

&amp;ldquo;Once infected by spores, the worker ants ... leave the nest, find a small shrub and start climbing. The fungi directs all ants to the same kind of leaf: about 25 centimeters [(9.8 inches)] above the ground and at a precise angle to the sun (though the favored angle varies between fungi). How the fungi do this is a mystery.&amp;rdquo;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;br /&gt;
 [. . .&lt;a href=&quot;http://www.wired.com/wiredscience/2011/03/zombifying-ant-fungus/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;article&lt;/a&gt;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt;The paper:&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;a href=&quot;http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0017024&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Hidden Diversity Behind the Zombie-Ant Fungus Ophiocordyceps unilateralis: Four New Species Described from Carpenter Ants in Minas Gerais, Brazil&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
Background &amp;amp; Further Reading&lt;br /&gt;
&lt;/div&gt;&lt;br /&gt;
&lt;br /&gt;
A few links to related material, including some strange and fascinating background about ants.&lt;br /&gt;
&lt;ul&gt;
  &lt;li&gt; &lt;a href=&quot;http://www.youtube.com/watch?v=XuKjBIBBAL8&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Youtube short video about Cordyceps fungi&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt; &lt;a href=&quot;http://iridia.ulb.ac.be/~mdorigo/ACO/RealAnts.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Ant Colony Optimization Site - Behavior of real ants&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt; &lt;a href=&quot;http://www.antweb.org/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;AntWeb&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt; &lt;a href=&quot;http://antbase.org/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;AntBase&lt;/a&gt; (database of ants)&lt;/li&gt;
  &lt;li&gt; &lt;a href=&quot;http://bugguide.net/node/view/165&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Bug Guide&lt;/a&gt; &lt;br /&gt;&lt;br /&gt;&lt;/li&gt;

  &lt;li&gt; &lt;a href=&quot;http://theantroom.blogspot.com/2006/11/ant-death-spiral.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Ant Death Spiral&lt;/a&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;http://www.youtube.com/watch?v=mA37cb10WMU&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Ant Spiral Video&lt;/a&gt;&lt;/li&gt;
  &lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;&lt;/li&gt;&lt;br /&gt;
&lt;br /&gt;
  &lt;li&gt; &lt;a href=&quot;http://www.rt.com/news/311894-wasp-spider-web-research/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Killer wasps turn spiders into zombies, make them build super web for larvae (VIDEO)&lt;/a&gt; &lt;br /&gt;&lt;br /&gt;&lt;/li&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;/ul&gt;&lt;br /&gt;
&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Tue, 17 May 2011 23:43:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/74-guid.html</guid>
    <category>Biology</category>
<category>Neuroscience</category>

</item>
<item>
    <title>Multitemporal Synapses Awarded a Patent</title>
    <link>https://www.standoutpublishing.com/Blog/archives/72-Multitemporal-Synapses-Awarded-a-Patent.html</link>
            <category>Announcements</category>
            <category>Neural Networks</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/72-Multitemporal-Synapses-Awarded-a-Patent.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=72</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=72</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;br /&gt;
&lt;table width=&quot;100%&quot;&gt;
&lt;tr&gt;
&lt;td width=&quot;50%&quot; valign=&quot;top&quot;&gt;
&lt;a href=&quot;http://standoutpublishing.com/Doc/o/Patents/07904398/07904398-01.pdf&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;&lt;img border=&quot;1px&quot; width=&quot;99%&quot; style=&quot;max-width: 99%;&quot; src=&quot;http://standoutpublishing.com/Site/ooRes/Blog/Patent_7904398.jpg&quot;&gt;&lt;/a&gt;
&lt;/td&gt;
&lt;td width=&quot;1%&quot;&gt;
    &amp;#160;
&lt;/td&gt;
&lt;td width=&quot;49%&quot; valign=&quot;top&quot;&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;
YEAH baby!
&lt;/h2&gt;
&lt;br /&gt;&lt;br /&gt;

A neural network innovation described in the book &lt;a href=&quot;http://standoutpublishing.com/Prod/Book/Netlabv03a/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Netlab Loligo&lt;/a&gt; has been awarded a patent (#7,904,398). Of the innovations described in the book, it is the second to receive letters patent (so far &lt;img src=&quot;https://www.standoutpublishing.com/Blog/templates/default/img/emoticons/smile.png&quot; alt=&quot;:-)&quot; class=&quot;emoticon&quot; /&gt;). The patent is titled: 
&lt;br /&gt;&lt;br /&gt;

&lt;center&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt;&amp;ldquo;Artificial Synapse Component Using Multiple Distinct Learning Means With Distinct Predetermined Learning Acquisition Times&amp;rdquo;&lt;/span&gt;
&lt;/center&gt;
&lt;br /&gt;


Patent titles serve mainly as an aid for future patent searchers. The patented innovation, along with the underlying concepts and principles that led to it, is described and discussed in the book, where it is simply referred to as &amp;ldquo;&lt;a href=&quot;http://standoutpublishing.com/Blog/archives/64-Introducing-Multitemporal-Synapses.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Multitemporal Synapses&lt;/a&gt;.&amp;rdquo;
&lt;br /&gt;&lt;br /&gt;

The primary advantage imparted by the innovation is that it gives adaptive systems a present moment in time. This allows them to adapt quickly and intricately to the detailed response needs of their present situation, without cluttering long-term memories with the minute details of those responses.
&lt;br /&gt;&lt;br /&gt;


&lt;/td&gt;
&lt;td width=&quot;2%&quot;&gt;
    &amp;#160;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
Sources &amp;amp; Resources&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;ul&gt;

&lt;li&gt; &lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/Blog/archives/70-Multitemporal-Synapses-and-Our-Perception-of-a-Present-Moment.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Multitemporal Synapses and Our Perception of a Present Moment&lt;/a&gt;&lt;/span&gt;
  &lt;br /&gt;
  Stated simply, the theory behind multitemporal synapses is that we maintain only the blunt essence of past lessons in long-term connections. Everything else is re-learned in the moment.&lt;/li&gt;
 &lt;br /&gt;

&lt;li&gt; &lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/Blog/archives/64-Introducing-Multitemporal-Synapses.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Multitemporal Synapses&lt;/a&gt;&lt;/span&gt;&lt;br /&gt;
This blog entry attempts to describe Multitemporal Synapses. When time permits, I will provide a new entry with a clearer explanation using book excerpts (P.S. see the entry above). It will be geared specifically to laymen. If you are interested, please subscribe to the feed.&lt;/li&gt;
   &lt;br /&gt;

&lt;li&gt; &lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/Blog/archives/58-Influence-Learning-Gets-A-Patent.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Influence Learning Gets A Patent&lt;/a&gt;&lt;/span&gt;&lt;br /&gt;
       Influence Based Learning was the first of Netlab&#039;s innovations to be granted a patent. This latest patent makes two (and counting, stay tuned). &lt;img src=&quot;https://www.standoutpublishing.com/Blog/templates/default/img/emoticons/smile.png&quot; alt=&quot;:-)&quot; class=&quot;emoticon&quot; /&gt;&lt;/li&gt;
   &lt;br /&gt;

&lt;li&gt; &lt;a href=&quot;http://standoutpublishing.com/Doc/o/Patents/07904398/07904398-01.pdf&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;[pdf] Patent Title Page&lt;/a&gt;&lt;/li&gt;
 &lt;br /&gt;

  &lt;br /&gt;



&lt;/ul&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Thu, 10 Mar 2011 15:15:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/72-guid.html</guid>
    <category>cognition-perception</category>
<category>Multitemporal-Synapse</category>
<category>Netlab</category>
<category>Patents</category>
<category>Temporality</category>

</item>
<item>
    <title>Biological Underpinnings of Influence Learning</title>
    <link>https://www.standoutpublishing.com/Blog/archives/71-Biological-Underpinnings-of-Influence-Learning.html</link>
            <category>Neural Networks</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/71-Biological-Underpinnings-of-Influence-Learning.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=71</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=71</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;br /&gt;
The Netlab development effort has led to a new method and device that produce learning factors for pre-synaptic neurons. The need to provide learning factors for pre-synaptic neurons was first addressed by backpropagation (Werbos, 1974). The new method differs from backpropagation in that its use is not restricted to feed-forward-only networks. This new learning algorithm and method, called &lt;a href=&quot;http://standoutpublishing.com/Blog/archives/53-Introducing-Influence-Learning.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Influence Learning&lt;/a&gt;, is described here and in other entries in this blog (see the &lt;a href=&quot;http://standoutpublishing.com/Blog/archives/71-Biological-Underpinnings-of-Influence-Learning.html#Resources&quot;&gt;Resources&lt;/a&gt; section below).&lt;br /&gt;
&lt;br /&gt;
Influence Learning is based on a simple conjecture. It assumes that those forward neurons that are exercising the most influence over responses to the immediate situation will be more attractive to pre-synaptic neurons. That is, for the purpose of forming or strengthening connections, active pre-synaptic neurons will be most attracted to forward neurons that are exercising the most influence.&lt;br /&gt;
&lt;br /&gt;
Perhaps the most relevant thing to understand about this process is that these determinations are based entirely on activities taking place while signals (stimuli) are propagating through the network. Unlike backpropagation, there is no need for an externally generated error signal to be pushed through the network, in backwards order, and in ever-diminishing magnitudes.&lt;br /&gt;
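To make the conjecture concrete, here is a loose illustrative sketch. It is my own reading of the description above, not the patented algorithm; the update rule, the learning rate, and the function name are all invented:

```python
# Sketch of the conjecture: an active pre-synaptic neuron strengthens its
# connections toward the forward neurons that are currently exercising
# the most influence, using only quantities available during the forward
# pass. The specific rule and rate below are invented for illustration.
def influence_update(weights, pre_activity, influence, lr=0.1):
    """weights[j]: strength of the connection to forward neuron j.
    influence[j]: how strongly forward neuron j is driving responses
    right now. No backward error signal is required; everything the
    rule needs is local to the forward pass."""
    total = sum(influence) or 1.0
    return [w + lr * pre_activity * (inf / total)
            for w, inf in zip(weights, influence)]

new_w = influence_update([0.2, 0.2, 0.2], pre_activity=1.0,
                         influence=[0.1, 0.8, 0.1])
```

After one update, the connection toward the most influential forward neuron has grown the most, with no error signal ever propagated backwards.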
&lt;!--&lt;br /&gt;
Put another way, the learning algorithms for pre-synaptic neurons assume that post synaptic neurons are doing exactly what they are supposed to be doing.  That is, pre-synaptic neurons assume that post synaptic neurons are learning. They do not care how, or do anything to check to see if the post-synaptic neurons are learning, they simply trust that they are and leave it at that.&lt;br /&gt;
--&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
Support In Biological Observations&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;br /&gt;
While influence learning in artificial neural network simulations is new, it rests on biological observations from discoveries made over twenty years ago. One of the observations that led to the above conjecture about attraction to the exercise of influence is discussed briefly in the book &lt;em&gt;&lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a target=&quot;_blank&quot; href=&quot;http://www.amazon.com/gp/product/0195145232?ie=UTF8&amp;tag=amzsop-20&amp;linkCode=as2&amp;camp=1789&amp;creative=390957&amp;creativeASIN=0195145232&quot;&gt;The Neuron: Cell and Molecular Biology&lt;/a&gt;&lt;/span&gt;&lt;/em&gt;.&lt;br /&gt;
&lt;br /&gt;
An experiment described in that book shows what happens when the axon of a target neuron is cut (or pharmacologically blocked). In that experiment, the pre-synaptic connections to the &lt;em&gt;target&lt;/em&gt; neuron began to retract after its axon was cut. That is, the axons making pre-synaptic connections to the treated neuron withdrew once it no longer made synaptic connections to its own post-synaptic neurons.&lt;br /&gt;
&lt;br /&gt;
&lt;center&gt;&lt;br /&gt;
&lt;iframe width=&quot;420&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/-qJXkrCrPMM&quot; frameborder=&quot;0&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;br /&gt;
&lt;/center&gt;&lt;br /&gt;
&lt;br /&gt;
The book also describes how, when the target neuron&#039;s axon was unblocked (or grew back), the axons from pre-synaptic neurons immediately began to re-form and re-establish connections with the target. Based on these observations, the following possibility was asserted.&lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;&lt;br /&gt;
&lt;em&gt;&lt;strong&gt;&quot;...Maintenance of presynaptic inputs may depend on a post-synaptic factor that is transported from the terminal back toward the soma.&quot;&lt;/strong&gt;&lt;/em&gt;&lt;br /&gt;
&lt;/blockquote&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following diagram depicts these observations schematically.&lt;br /&gt;
 &lt;a class=&quot;block_level&quot; href=&quot;https://www.standoutpublishing.com/Blog/archives/71-Biological-Underpinnings-of-Influence-Learning.html#extended&quot;&gt;Continue reading &quot;Biological Underpinnings of Influence Learning&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Thu, 03 Mar 2011 21:28:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/71-guid.html</guid>
    <category>Biology</category>
<category>cognition-perception</category>
<category>Influence-Learning</category>
<category>Memory</category>
<category>Netlab</category>
<category>Neural-Networks</category>
<category>Neuroscience</category>
<category>Patents</category>

</item>
<item>
    <title>Introducing: Multitemporal Synapses</title>
    <link>https://www.standoutpublishing.com/Blog/archives/64-Introducing-Multitemporal-Synapses.html</link>
            <category>Neural Networks</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/64-Introducing-Multitemporal-Synapses.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=64</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=64</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    A set of constructs and methods introduced and described in the book &lt;span style=&quot;font-style:italic&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/Prod/Book/Netlabv03a/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Netlab Loligo&lt;/a&gt;&lt;/span&gt; will improve the ability of systems constructed with them to adapt to current short-term situations, and to learn from those short-term experiences over the long term.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
A New Learning Theory That Predicts A &amp;ldquo;Present Moment&amp;rdquo;&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;br /&gt;
How do we, as biological organisms, manage to keep so much finely detailed information in our brains about how to respond to any given situation? That is, how do we manage to keep countless tiny intricacies stored away in our &amp;ldquo;subconscious&amp;rdquo; ready to be called upon at just the right time, right when we need them in the present moment?&lt;br /&gt;
&lt;br /&gt;
According to this theory of learning, the answer to that question is: We don&#039;t.&lt;br /&gt;
&lt;br /&gt;
Instead, our long term connections&amp;mdash;those that immediately drive our responses at all times&amp;mdash;are only concerned with getting us started in any given &amp;ldquo;present.&amp;rdquo; Responses stored in long-term connections start us along a trajectory that makes it easier for us to learn whatever short-term, detailed responses are needed for any given detailed situation.&lt;br /&gt;
&lt;br /&gt;
Connections that drive short-term responses, on the other hand, form spontaneously in the moment, and quickly adapt to whatever situation we currently find ourselves in. Just as significantly, connections driving short-term responses tend to dissipate as quickly as they form. This theory essentially says that each connection in the brain that drives responses (physical or internal) includes multiple distinct connection strengths, each of which increases and decreases at a different rate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;a id=&quot;Note1Back&quot;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
How It&#039;s Done&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;br /&gt;
Multi-temporality is achieved in Netlab&#039;s simulation environment by providing multiple weights per connection point (i.e., &lt;a href=&quot;http://standoutpublishing.com/g/Synapse.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;synapse&lt;/a&gt;); these are referred to as Multitemporal&lt;sup&gt;&lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/Blog/archives/64-Introducing-Multitemporal-Synapses.html#Notes&quot;&gt;[Note 1]&lt;/a&gt;&lt;/span&gt;&lt;/sup&gt; synapses. &lt;a href=&quot;http://standoutpublishing.com/g/Multitemporal-synapse.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Multitemporal synapses&lt;/a&gt; employ multiple &lt;a href=&quot;http://standoutpublishing.com/g/weight.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;weights&lt;/a&gt;. Each of the multiple weights associated with a given synapse represents a connection strength, and can be set to acquire and retain its strength at a different rate from the others. The methods also specify &lt;a href=&quot;http://standoutpublishing.com/g/Weight-to-Weight-Learning.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Weight-To-Weight Learning&lt;/a&gt;, which is a means of teaching a given weight in the set using the values of the other weights from the same connection. Together these constructs provide all the functionality required to model the theory of learning discussed above.&lt;br /&gt;
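A minimal sketch of the multi-weight idea follows. The rate constants and the simple decay and consolidation rules are invented for illustration; the book specifies its own learning algorithms:

```python
# Minimal sketch: one synapse holding three weights, each with its own
# learning rate and forget rate. All numbers here are invented.
class MultitemporalSynapse:
    def __init__(self):
        self.weights = [0.0, 0.0, 0.0]
        # (learn_rate, forget_rate) per weight: fast, medium, slow
        self.rates = [(0.5, 0.2), (0.1, 0.02), (0.01, 0.001)]

    def step(self, signal):
        """Every weight learns from the same signal at its own rate,
        then decays toward zero at its own forget rate."""
        for i, (learn, forget) in enumerate(self.rates):
            self.weights[i] += learn * signal
            self.weights[i] *= (1.0 - forget)

    def weight_to_weight(self, rate=0.05):
        """Toy weight-to-weight learning: nudge the slow weight toward
        the fast weight's current value."""
        self.weights[2] += rate * (self.weights[0] - self.weights[2])

syn = MultitemporalSynapse()
for _ in range(20):         # a brief, intense "present moment"
    syn.step(1.0)
fast, slow = syn.weights[0], syn.weights[2]
```

After the burst, the fast weight has grown large while the slow weight has barely moved; calling weight_to_weight then lets the long-term weight absorb a trace of the short-term lesson.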
&lt;br /&gt;
Following is a graphic excerpted from the book Netlab Loligo, which shows a neuron containing three different weights for each connection point. Each weight is given its own learning algorithm, with its own learning rate and forget rate.&lt;br /&gt;
&lt;br /&gt;
&lt;center&gt;&lt;br /&gt;
&lt;img width=&quot;70%&quot; src=&quot;http://standoutpublishing.com/Site/oRes/Blog/MultiTemporalSynapsesDiagram01.jpg&quot;&gt;&lt;br /&gt;
&lt;/center&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;a class=&quot;block_level&quot; href=&quot;https://www.standoutpublishing.com/Blog/archives/64-Introducing-Multitemporal-Synapses.html#extended&quot;&gt;Continue reading &quot;Introducing: Multitemporal Synapses&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Wed, 26 Jan 2011 00:21:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/64-guid.html</guid>
    <category>cognition-perception</category>
<category>Excerpts</category>
<category>Memory</category>
<category>Mind-Brain</category>
<category>Multitemporal-Synapse</category>
<category>Netlab</category>
<category>Neural-Networks</category>
<category>Patents</category>
<category>Temporality</category>

</item>
<item>
    <title>Influence Learning Gets A Patent</title>
    <link>https://www.standoutpublishing.com/Blog/archives/58-Influence-Learning-Gets-A-Patent.html</link>
            <category>Announcements</category>
            <category>Neural Networks</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/58-Influence-Learning-Gets-A-Patent.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=58</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=58</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;br /&gt;
&lt;table width=&quot;100%&quot;&gt;
&lt;tr&gt;
&lt;td width=&quot;35%&quot; valign=&quot;top&quot;&gt;
&lt;img src=&quot;http://standoutpublishing.com/Site/ooRes/Blog/USPTOSealSmaller.jpg&quot;&gt;
&lt;/td&gt;
&lt;td width=&quot;1%&quot;&gt;
    &amp;#160;
&lt;/td&gt;
&lt;td width=&quot;62%&quot; valign=&quot;top&quot;&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;
Woo HOOO!
&lt;/h2&gt;
&lt;br /&gt;&lt;br /&gt;

&lt;a href=&quot;http://standoutpublishing.com/Blog/archives/53-Introducing-Influence-Learning.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Influence Based Learning&lt;/a&gt;, one of two new learning methods described in the book &lt;a href=&quot;http://standoutpublishing.com/Prod/Book/Netlabv03a/&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Netlab Loligo&lt;/a&gt;, has just been awarded a United States Patent. The official title of the patent is:
&lt;br /&gt;&lt;br /&gt;

&lt;center&gt;
&lt;span style=&quot;font-weight:bold&quot;&gt;&amp;ldquo;Feedback-Tolerant Method And Device Producing Weight-Adjustment Factors For Pre-Synaptic Neurons In Artificial Neural Networks&amp;rdquo;&lt;/span&gt;
&lt;/center&gt;
&lt;br /&gt;

The title is a mouthful, designed primarily to help future patent searchers determine whether their great idea has already been discovered and patented. The innovation itself is fully described and discussed in the book, where it is simply referred to as &lt;a href=&quot;http://standoutpublishing.com/Blog/archives/53-Introducing-Influence-Learning.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Influence Learning&lt;/a&gt;.
&lt;br /&gt;&lt;br /&gt;

As the patent title expresses, one of the benefits it imparts over existing learning algorithms is that it is feedback-tolerant. It works fine with current-day feed-forward networks configured as &quot;slabs,&quot; but it also allows neurons to connect back to pre-synaptic neurons. That is, it allows feedback, which means you no longer have to configure your network with &quot;&lt;a href=&quot;http://standoutpublishing.com/g/Hidden-Layer.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;hidden layers&lt;/a&gt;&quot; if you don&#039;t want to. You are free to use &lt;a href=&quot;http://standoutpublishing.com/Blog/archives/52-Brain-Wiring-Structure-How-About-a-Donut.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;any connectome you&#039;d like&lt;/a&gt;.
&lt;/td&gt;
&lt;td width=&quot;2%&quot;&gt;
    &amp;#160;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;
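To make the &amp;ldquo;any connectome&amp;rdquo; point concrete, here is a toy sketch (my own construction, not code from the book): a connectome represented as a plain adjacency map, containing the kind of feedback loop that ordinary backpropagation cannot handle but a forward-pass-local rule can tolerate:

```python
# A "connectome" as a plain dict from each neuron to the neurons it
# feeds. The graph below deliberately contains a feedback loop.
connectome = {
    "A": ["B"],
    "B": ["C"],
    "C": ["A", "D"],   # C feeds back to A: a cycle, off-limits to plain backprop
    "D": [],
}

def has_cycle(graph):
    """Detect feedback loops with a depth-first search."""
    visiting, done = set(), set()
    def visit(node):
        if node in visiting:
            return True
        if node in done:
            return False
        visiting.add(node)
        cyclic = any(visit(n) for n in graph[node])
        visiting.discard(node)
        done.add(node)
        return cyclic
    return any(visit(n) for n in graph)

cyclic = has_cycle(connectome)
```

A feedback-tolerant learning rule is free to train a network wired like this; a strictly feed-forward rule would first have to cut the C-to-A edge to make the graph acyclic.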
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;

&lt;li&gt; &lt;a href=&quot;http://standoutpublishing.com/Doc/o/Patents/07814038/07814038-01.pdf&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;[pdf] Patent Title Page&lt;/a&gt;&lt;p /&gt;&lt;/li&gt;
&lt;!--
&lt;li&gt; &lt;a href=&quot;http://www.uspto.gov/web/patents/patog/week41/OG/html/1359-2/US07814038-20101012.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Patent Office - Official Gazette Entry &lt;span style=&quot;font-weight:bold&quot;&gt;[TEMPORARY LINK]&lt;/span&gt;&lt;/a&gt;&lt;p /&gt;&lt;/li&gt;
--&gt;
      &lt;li&gt;&lt;a href=&quot;http://standoutpublishing.com/Blog/archives/57-Neural-Networks-Backgrounder-Ce-nest-pas-une-lhistoire.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Neural Networks Backgrounder: Ce n&#039;est pas une l&#039;histoire&lt;/a&gt;&lt;br /&gt;
        A quick backgrounder on neural networks presented in a sketchy semi-historical format.
          &lt;/li&gt;

&lt;/ul&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 
    </content:encoded>

    <pubDate>Sun, 17 Oct 2010 13:31:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/58-guid.html</guid>
    <category>Influence-Learning</category>
<category>Netlab</category>
<category>Neural-Networks</category>
<category>Patents</category>

</item>
<item>
    <title>Introducing: Influence Learning</title>
    <link>https://www.standoutpublishing.com/Blog/archives/53-Introducing-Influence-Learning.html</link>
            <category>Neural Networks</category>
            <category>Pub notes</category>
            <category>Science &amp; Tech</category>
    
    <comments>https://www.standoutpublishing.com/Blog/archives/53-Introducing-Influence-Learning.html#comments</comments>
    <wfw:comment>https://www.standoutpublishing.com/Blog/wfwcomment.php?cid=53</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>https://www.standoutpublishing.com/Blog/rss.php?version=2.0&amp;type=comments&amp;cid=53</wfw:commentRss>
    

    <author>nospam@example.com (John R)</author>
    <content:encoded>
    &lt;br /&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-style:italic&quot;&gt;&lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/g/Influence-Learning.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Influence learning&lt;/a&gt;&lt;/span&gt;&lt;/span&gt; is one of two new learning algorithms to emerge (so far) from the Netlab development effort. This entry gives a brief overview of how it works and of the advantages it brings to the task of neural-network weight adjustment.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
How It Works&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;br /&gt;
This learning method is based on the notion that&amp;mdash;like their collective counterparts&amp;mdash;neurons may be attracted to, and occasionally repulsed by, the exercise of influence by others. In the case of neurons, the &quot;others&quot; would be other neurons.  As simple as that notion sounds, it produces a learning method with a number of interesting benefits and advantages over the current crop of learning algorithms.&lt;br /&gt;
&lt;br /&gt;
A neuron using &lt;span style=&quot;font-style:italic&quot;&gt;&lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/g/Influence-Learning.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;influence learning&lt;/a&gt;&lt;/span&gt;&lt;/span&gt; is not nosy: it does not concern itself with &lt;span style=&quot;font-weight:bold&quot;&gt;&lt;span style=&quot;font-style:italic&quot;&gt;how&lt;/span&gt;&lt;/span&gt; its post-synaptic (forward) neurons are learning. It simply trusts that their job is to learn and that they are doing it. In other words, a given neuron assumes that the other neurons in the system are learning. Each one treats the post-synaptic neurons exercising the most influence as role models for adjusting its connection strengths. The norm is for neurons to see influential forward neurons as positive role models, but they may also see them as negative role models.&lt;br /&gt;
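To make the idea concrete, here is a toy sketch in Python of what an influence-style weight update could look like. The function name, learning rate, and normalization below are illustrative assumptions for this post, not the actual patented formula:

```python
# Hypothetical sketch of an influence-style weight update. The real update
# rule is not spelled out here, so the formula below is an assumption that
# only captures the flavor: nudge connection strengths toward the forward
# (post-synaptic) neurons currently exercising the most influence.

def influence_update(weights, forward_activity, lr=0.1, role_model_sign=1.0):
    """weights          -- connection strengths to post-synaptic neurons
    forward_activity -- recent activity ("influence") of those same neurons
    role_model_sign  -- +1: influential neurons are positive role models
                        -1: they are negative role models
    """
    total = sum(abs(a) for a in forward_activity) or 1.0
    return [w + lr * role_model_sign * (a / total)
            for w, a in zip(weights, forward_activity)]
```

The role_model_sign parameter captures the last point above: flipping it to -1 turns an influential forward neuron from a positive role model into a negative one.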
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
It Is Simple&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;br /&gt;
As you might guess, the first benefit is simplicity. The method does not try to hide a lack of new ideas behind a wall of new computational complexity. It is a simple new method based on a simple, almost axiomatic observation, and it can be implemented with relatively little computational power.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h2 class=&quot;SecHeader&quot;&gt;&lt;br /&gt;
It Imposes No Restrictions On Feedback&lt;br /&gt;
&lt;/h2&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;span style=&quot;font-style:italic&quot;&gt;&lt;span style=&quot;font-weight:bold&quot;&gt;&lt;a href=&quot;http://standoutpublishing.com/g/Influence-Learning.html&quot; target=&quot;_blank&quot; class=&quot;bb-url&quot;&gt;Influence Learning&lt;/a&gt;&lt;/span&gt;&lt;/span&gt; is completely free of feedback restrictions. That is, network connection structures may be designed with any type or amount of feedback looping, and the learning mechanism can still properly adapt connection strengths regardless of how complex the feedback scheme is. The types of feedback designers are free to employ include servo feedback, which places the outside world (or some network structure closer to the outside world) directly in the signaling feedback path.&lt;br /&gt;
&lt;br /&gt;
This type of &quot;servo-feedback&quot; is shown graphically in figure 6-5 of the book, which has been reproduced here.&lt;br /&gt;
&lt;center&gt;&lt;br /&gt;
&lt;img src=&quot;http://standoutpublishing.com/Res/Blog/BookFig6-5_450x500.gif&quot; alt=&quot;Book figure 6-5: servo feedback through the outside world&quot; border=&quot;0&quot;&gt;&lt;br /&gt;
&lt;/center&gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;a class=&quot;block_level&quot; href=&quot;https://www.standoutpublishing.com/Blog/archives/53-Introducing-Influence-Learning.html#extended&quot;&gt;Continue reading &quot;Introducing: Influence Learning&quot;&lt;/a&gt;
    </content:encoded>

    <pubDate>Tue, 07 Sep 2010 16:14:00 +0000</pubDate>
    <guid isPermaLink="false">https://www.standoutpublishing.com/Blog/archives/53-guid.html</guid>
    <category>cognition-perception</category>
<category>Influence-Learning</category>
<category>Memory</category>
<category>Netlab</category>
<category>Neural-Networks</category>
<category>Patents</category>

</item>

</channel>
</rss>
