Sometimes, our language's very helpful ability to convey ambiguity turns on us. Suddenly, we find ourselves able to begin seriously discussing machines that do consciousness, and the ambiguity, which was once such a helpful placeholder, may now be holding the discussion back. Consider, for example, the term “Artificial Life.” It clearly invokes the Life-Consciousness aspect of the word life. Right now, though, we're all just groping around in the dark to discern the shape of the surrounding cloud of related concepts. At this stage, our use of terms like “artificial life” may be producing more confusion than clarity.
“The first step towards wisdom is to call things by their proper names”
The mind/brain discussion sits at the edge of what we know and what we don't know. At this hairy edge, one of the primary challenges is to find decent, well-defined labels for emerging concepts. Ideally, the labels we use should represent consistent relationships to other, known concepts.
The process of picking better labels is all opinion at this point, and that could make for some lively discussions. This suggests a strong need for civility and humility, and in that spirit, I'm just throwing out the following suggestion as something for you to think about.
Please give some thought to one term that has served the discussion well but is now (imo) starting to produce more confusion than clarity. That term is Artificial Intelligence.
- Artificial: The word artificial has been a problem for some time. It seems to have led to ever more spectacular implementations of things like ELIZA, which are designed merely to fool the human on the other end of the terminal into thinking it's a conscious entity (those who properly ask “is there a difference?” should search John Searle).
- Intelligence: Okay, I get it: nobody wants to use the word “consciousness” because nobody can give a self-evidently clear definition of the word when asked. But doesn't the word intelligence, as we're using it in this phrase, suffer from the same problem?
I personally like the term “Machine Consciousness” as a replacement. It seems to be the best use of two words, each representing and conveying bundles of concepts and ambiguities. Most importantly, those concepts and ambiguities seem to be representative of what we do know, without imposing hasty assumptions about what we don't know.
But there's another problem lurking in the machine consciousness label. The word consciousness approaches the mystical. That's a hot iron some may not want to touch... though the reasons, today, are probably more dogmatic than rational.
- Scrap the term Artificial Intelligence
- Replace it with Machine Consciousness
As stated, Machine Consciousness seems to be the best fit for our current understanding. It is the preferred term, and the one that has been used, within the Netlab effort; it is, in fact, the title of chapter three in the Netlab book. Notwithstanding concerns about present-day dogmatic sensibilities, it also seems to be getting some decent traction out in the marketplace of ideas.
Examples of variations of this term include “MC,” “Machine Based Consciousness,” and “Machine Hosted Consciousness.”
As an alternative replacement term for AI, a good candidate might be Adaptive Automation. This term is descriptively accurate, which makes it (imo) a good one. The problem, however, is that it has come to be used as the label for adaptive user interfaces. That is a different discipline, but it is similar enough that the label could cause considerable confusion.
“...an adaptive system—in order to be sincerely considered adaptive—must be capable of continuously learning while it interacts with its environment.”
Parenthetically, even if the term “adaptive automation” is not used, there is still a need for a more deliberate standard to express what it means for a system to be adaptive. May I suggest the above definition (from the book)?
There's a word buried within the above definition that has been frequently misused. That word is “continuously” (not merely “continually”). Put simply, for something to be continuous, it must necessarily be happening right now; if something happens often but not at all times, it is merely continual.
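To make that distinction concrete, here is a minimal, hypothetical sketch (mine, not from the book) of what continuous adaptation looks like in code: the learning update happens inside every interaction itself, rather than on a periodic retraining schedule.

```python
class ContinuousAdapter:
    """Online estimator that revises its state on every single observation.

    A batch-retrained system would be merely 'continual': it learns often,
    but not while it is interacting. Here, learning is part of interacting.
    """

    def __init__(self):
        self.estimate = 0.0
        self.n = 0

    def interact(self, observation: float) -> float:
        # The update happens *now*, inside the interaction itself.
        self.n += 1
        self.estimate += (observation - self.estimate) / self.n  # running mean
        return self.estimate


adapter = ContinuousAdapter()
for x in [2.0, 4.0, 6.0]:
    current = adapter.interact(x)
print(current)  # 4.0 — the running mean after three interactions
```

The running mean is only a stand-in for whatever learning rule a real adaptive system would use; the point is the placement of the update, not its sophistication.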
Okay. That has pretty much stepped on every toe in the room.
Please accept my apology if any protocols were violated. That's the thing about adaptive behavior. Ya gotta start somewhere.
— Well, relatively recently, and with a lot of help from brain studies.
— The phrase “artificial life” is thought to have been first coined in 1986 by Christopher Langton, when he used it in the title of his paper “Studying Artificial Life With Cellular Automata.” His description of using cellular automata in this paper began with a discussion of biological life, but quickly advanced to the use of “the molecular logic of the living state” to produce behaviors from cell-like entities. Though his paper is grounded in the behaviors and characteristics of biological life, his direction is clearly toward the behavioral aspects (not necessarily those behaviors that have been observed in biological cells). Perhaps because of this, the term would later come to be used as a direct synonym for artificial intelligence approaches of all types. It was, for example, used as the title of a journal on all subjects relating to artificial intelligence, including symbolic, abstract, and inference approaches, as well as biologically based approaches (see the full text of the paper).
— The term artificial intelligence (AI) was coined by John McCarthy in 1955, in the proposal for the 1956 summer workshop at Dartmouth College; McCarthy later founded the AI laboratory at Stanford University.
— ELIZA is a simulation written by Joseph Weizenbaum at MIT (Massachusetts Institute of Technology) in 1966. He named it after Eliza Doolittle, the woman who learned proper English in the play “Pygmalion.” ELIZA and similar simulations are commonly discussed in the context of the Turing test, which asks whether a machine's conversation can be distinguished from a human's.