About:
Exploring new approaches to machine-hosted
neural-network simulation, and the science
behind them.
Your moderator:
John Repici
A programmer who is obsessed with giving experimenters
a better environment for developing biologically guided
neural-network designs. Author of
an introductory book on the subject,
"Netlab Loligo: New Approaches to Neural Network
Simulation". BOOK REVIEWERS ARE NEEDED!
Can you help?
"There is, I conceive, no contradiction in believing that mind is at once the cause of matter and of the development of individualised human minds through the agency of matter. "
— Alfred Russel Wallace
"Recent observations have thoroughly established that order in groups of small particles, easily visible under a low-power microscope, can be caused spontaneously by Brownian-like movement of smaller spheres that in turn is caused by random molecular motion." — from: a paper by Frank Lambert at Entropysite.
. . . . . . .
References:
Adams, M.; Dogic, Z.; Keller, S. L.; Fraden, S. Nature 1998, 393, 349–352, and references therein.
Laird, B. B. J. Chem. Educ. 1999, 76, 1388–1390.
Dinsmore, A. D.; Wong, D. T.; Nelson, P.; Yodh, A. G. Phys. Rev. Lett. 1998, 80, 409–412.
Anybody who has ever come across stacked rocks while walking in the wilderness knows how easy it is to recognize consciousness when we encounter its handiwork.
Why is something that is so easy to recognize so hard to describe objectively?
Can we write an algorithm capable of recognizing consciousness as reliably as people do when we see those stacked rocks? Would writing such an algorithm move us any closer to understanding, or at least defining, consciousness?
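Just to make the question concrete, here is a toy sketch, in Python, of what the rock-stack half of such a recognizer might look like. Everything in it is hypothetical and invented for illustration: the input format (each rock as an (x_center, y_bottom, radius) tuple), the function name, and the tolerances. It recognizes one narrow signature of deliberate arrangement, a near-vertical tapering column, rather than anything like consciousness itself.

    # Toy sketch: flag a rock arrangement as "probably deliberate" when the
    # rocks form a near-vertical, tapering column -- a configuration unaided
    # natural processes almost never produce. All names, the input format,
    # and the tolerances are hypothetical, invented for illustration.

    def is_deliberate_stack(rocks, x_tolerance=0.25):
        """rocks: list of (x_center, y_bottom, radius) tuples, any order."""
        if len(rocks) < 3:
            return False  # two rocks can end up resting together by chance
        stack = sorted(rocks, key=lambda r: r[1])  # order bottom-to-top
        for (lx, ly, lr), (ux, uy, ur) in zip(stack, stack[1:]):
            centered = abs(ux - lx) <= x_tolerance * lr    # balanced on top
            tapering = ur <= lr                            # smaller going up
            resting = abs(uy - (ly + 2 * lr)) <= 0.1 * lr  # actually touching
            if not (centered and tapering and resting):
                return False
        return True

    # Three rocks stacked largest-to-smallest are recognized as deliberate;
    # the same three rocks scattered on the ground are not.
    cairn = [(0.00, 0.0, 0.50), (0.02, 1.0, 0.30), (-0.01, 1.6, 0.15)]
    scattered = [(0.0, 0.0, 0.50), (3.0, 0.0, 0.30), (7.5, 0.0, 0.15)]
    print(is_deliberate_stack(cairn))      # True
    print(is_deliberate_stack(scattered))  # False

The telling part is how the sketch bears on the second question: it does not recognize consciousness at all. It hard-codes one human intuition, that improbable order implies an arranger, which suggests such an algorithm would mostly relocate the definition problem rather than solve it.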
While we're on Chalmers interviews, here are ten minutes of him talking about his "hard problem" of consciousness. His framing of the hard problem has finally moved us forward, off the obfuscatory kludge that was Turing's "test." Turing's test merely measured an algorithm's ability to fool a person on the other side of the screen into believing it was conscious, and it has led to ever more elaborate implementations of programs like ELIZA, which, at their core, are the antithesis of consciousness.
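For contrast, here is a minimal ELIZA-style exchange in Python. This is not Weizenbaum's original script, just a few invented rules in his style, but it exposes the core mechanism: pattern matching and pronoun reflection, with no understanding anywhere in the loop.

    import re

    # A few ELIZA-style rules (invented for illustration, not Weizenbaum's
    # original script). Each rule pairs a pattern with a response template;
    # "\1" in a template is filled with the user's own captured words.
    RULES = [
        (r"i need (.*)", "Why do you need \\1?"),
        (r"i am (.*)",   "How long have you been \\1?"),
        (r"my (.*)",     "Tell me more about your \\1."),
    ]
    REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

    def reflect(fragment):
        """Swap first- and second-person words so the echo reads naturally."""
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = re.match(pattern, utterance.lower().rstrip(".!?"))
            if match:
                return template.replace("\\1", reflect(match.group(1)))
        return "Please go on."  # default when nothing matches

    print(respond("I am troubled by my dreams"))
    # -> How long have you been troubled by your dreams?

Every reply is a string transformation of the input; nothing in the loop models meaning, let alone experience, yet exchanges like this are exactly what a Turing-style screen test rewards.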
Here, presented for your enjoyment, is David Chalmers:
Semantically, as Chalmers describes in this paper, it is helpful to understand that there are many connotations, or types, of consciousness. First and foremost, there is phenomenal (a.k.a. "hard problem") consciousness. This is the one we all have a front-row seat to, but have not been able to describe in a non-subjective way.
There are other types of consciousness as well, the ones Chalmers assigns to the "easy problems." These functional varieties are the ones that most often trip us up when we talk about consciousness. Chalmers lists some examples of these (a toy sketch follows the list):
the ability to discriminate, categorize, and react to environmental stimuli;
the integration of information by a cognitive system;
the reportability of mental states;
the ability of a system to access its own internal states;
the focus of attention;
the deliberate control of behavior;
the difference between wakefulness and sleep.
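To see why these are the "easy" problems, notice that crude versions of most of them are already programmable in a few lines. Here is a deliberately trivial Python sketch (the class and method names are hypothetical, invented for this post) exhibiting a toy version of each capacity on the list; whatever it has, there is plainly nothing it is like to be it.

    # A deliberately trivial agent exhibiting toy versions of the "easy
    # problem" capacities. All names are hypothetical, for illustration only.

    class ToyAgent:
        def __init__(self):
            self.awake = True      # wakefulness vs. sleep, reduced to a flag
            self.attention = None  # current focus of attention
            self.memory = []       # integrated record of past stimuli

        def perceive(self, stimulus):
            """Discriminate, categorize, and react to an environmental stimulus."""
            if not self.awake:
                return "zzz"  # the wakefulness/sleep difference
            category = "threat" if stimulus in {"bear", "fire"} else "benign"
            self.attention = stimulus                 # focus of attention
            self.memory.append((stimulus, category))  # integration of information
            return "flee" if category == "threat" else "ignore"  # control of behavior

        def report(self):
            """Report on, by accessing, its own internal states."""
            return (f"awake={self.awake}, attending_to={self.attention}, "
                    f"stimuli_remembered={len(self.memory)}")

    agent = ToyAgent()
    print(agent.perceive("bear"))  # flee
    print(agent.report())          # awake=True, attending_to=bear, stimuli_remembered=1

Each capacity on Chalmers' list reduces here to a flag, a variable, or a branch. That is the sense in which these problems are "easy": they are engineering problems. None of it so much as touches the phenomenal question.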
Reading philosophy doesn't make one any less of a scientist. After all, science is, itself, a philosophy.