Two kinds of conscious machines have been popular in science fiction: conscious A.I. and conscious robots. Conscious A.I., essentially consciousness-in-a-box, is almost always portrayed as dangerous, as indeed it would be, if such a thing were possible. Conscious robots are generally portrayed as being just like us, and like us, they can be either good or bad.
In the vast majority of science fiction stories about conscious A.I., the machine consciousness happens by some mysterious accident, unplanned and unexpected. In stories about robots, machine consciousness is either just accepted as a given, or again just somehow spontaneously happens. Both of these portrayals of machine consciousness are pure fantasy, literary devices needed when human consciousness was still a complete mystery.
In The science behind The Mechanism Me, I will try to explain why machine consciousness is indeed plausible, and why it will never happen by accident. While we certainly don’t yet have all the answers, we now know enough about human consciousness to know that there are specific requirements that must be met before it can emerge.
Even this rudimentary knowledge allows us to debunk many common beliefs, such as the belief that if you feed enough data about the world into an A.I., consciousness will at some point spontaneously appear. The evidence to date concurs with the neuroscience: it doesn’t matter how many yottabytes of data you feed into an A.I., it will still be no more conscious than your laptop.
Science allows us to confidently dismiss another common theme in fiction: the idea that machine consciousness will be created by some mad scientist alone in a lab. On the contrary, it will require an extraordinary team effort, akin to the Human Brain Project (https://www.humanbrainproject.eu/), a ten-year effort, currently underway, involving hundreds of researchers from over twenty countries. Android development will benefit directly from the fruits of this huge effort.
Before we can have an intelligent conversation about machine consciousness, we have to come to some agreement about the definition of terms. The field of A.I. has for decades been rife with semantic confusion, and the same will happen with machine consciousness research if no consensus can be reached. I am just laying out the problem, not claiming to have the answers.
The problem is that there is no agreed-upon definition of intelligence, and no consensus on what consciousness even is. Both have been defined in many different ways and subdivided into many different aspects. Many passionate arguments have arisen between factions using the same word while unwittingly talking about different things.
To simplify things at the start, I will clarify that when I speak of consciousness, I am speaking of human-like consciousness: consciousness that we can all identify with, the consciousness of our shared experience. For our present purposes, cosmic or universal or animal or any other kind of “consciousness” is not what we are talking about. As for machine consciousness, anything other than human-like consciousness would be impossible for us to relate to, communicate with, or share any kind of kinship with. In a machine, such alien consciousness, were it possible, would be truly dangerous.
No, what we need in conscious machines are ones we can relate to; machines that can in turn relate to us. As I will later show, human-like consciousness requires a human-like body. When we speak of machine consciousness, then, we are really talking about sentient androids. In future entries, I will explain why consciousness, in any recognizable form, cannot exist in a box, and I will discuss what “sentience” means in the context of robotics.