The Science Behind In Synthient Skin

I know what you’re thinking. (Of course I really don’t, but let’s not burst that bubble just now.)

You think I’m talking about the science of gears and wires and stuff, you know, what robots are made from. Sorry, I know diddly about all that. No, I’m talking about the newest star in the science world, brain science.

It’s not rocket science. No, it’s profoundly harder. It’s deep science. Rocket science is actually a branch of aerospace engineering, and all rocket science problems can be solved with math. It’s no biggie. (Okay, so if you don’t know any math, it’s going to be hard.)

My point here is that rocket science is grade-school stuff compared to brain science. And before you go getting all starry-eyed, no, I know diddly about brain science, too. So why am I talking about it?

Because maybe you’re one of the unique few who finds this stuff interesting. I know I sure do. Personally, I’ve always felt a sense of awe when learning about how my brain works. In an intimate sense, brain science is about our very selves and how we work; it’s about you, about me.

How does your brain make you you? How does it generate the experience that you live every moment? The questions are so unfathomable that most dare not dwell on them. For many, the questions would never occur to them. Our experience is just what it is. Few wonder why. Fewer still wonder how.

For brain scientists, these questions are their bread and butter. (Sorry, that’s an elder expression, meaning their basic diet, their main focus, their – I don’t know, how would you say it?)

I’m talking brain science because of robots! Robots need AI if they are to function autonomously, and current AI is woefully inadequate. So how do we build an AI that is functionally equivalent to the human brain?

We’re clever monkeys. We imitate. The most successful intelligent machine we know of is the human brain. (Biomechanism, I should have said, not machine. You caught that, right?) All we have to do is imitate that.

And therein lies the problem. If only it were just rocket science.

But we’re making progress. Neuroscience and AI go hand in hand. Each supports and benefits from the other. We learn about our own brains through our attempts to imitate them. The more we learn, the better our imitations become. If we continue down this path, it is inevitable that we will eventually succeed. What happens then?

Let the speculation begin!

Oh, it’s already begun? In Book 1 of the Guardian Android series, you mean? In Synthient Skin? Okay, I see your point.

But I’d love to hear what you wonder about your own brain, wondrous as it is. What do you notice about your inner experience that fascinates you? What mysteries do you think are unsolvable?

Time for Robot Fiction to go hard

Hello and Welcome. This blog is about robot technology, present and near future. If you love robots, please join the conversation.

Robots have been a mainstay of science fiction since its early days, and now they are here. Today’s robots, however, are nothing like the robots of fiction. It turns out that building functional humanoid robots — androids — is unimaginably hard. Even in a world churning with mind-boggling new technologies, humanoid robots are still in a rudimentary stage of development. This blog is an exploration of the technologies required to bring advanced androids to life. At the very least, this will include the mechanical technologies and advanced materials that will go into chassis design, and the A.I. and neuromorphic/neuromimetic computing technologies that will control those chassis. I will also explore some of the ethical questions we need to be considering before we actually succeed, and the social implications of success.

Science fiction has always been a powerful way to explore the landscapes of possible futures. Many people equate “hard sci-fi” with dry and heavy technical descriptions, and unless they loved science class in school, expect not to like it. But “hard sci-fi” need not be dry. In fact, the whole notion of “hard science” has lost its meaning in recent years. At its core, all science is fuzzy. The traditionally hard sciences, like physics, chemistry and math, have all come to accept the fundamental uncertainty that permeates existence. The traditionally “soft” sciences, on the other hand, like psychology and the social sciences, now employ rigorous scientific methods to investigate areas once thought to be unquantifiable, such as consciousness, happiness and love. Yes, there really is a serious science of love. As a result of these changes, “hard” science fiction now encompasses the full range of human experience.

Hard sci-fi can best be appreciated as a contrast to “fantasy” simply because it presents worlds and ideas that are scientifically plausible, unlike, say, dragons, sorcerers, vampires and zombies. Hard and soft science fiction, on the other hand, are not discrete things, but exist on a continuum. The more scientifically plausible the technology, the “harder” the science. Teleportation, faster-than-light travel and time travel are all, for now, implausible, and introduce a fantasy element into a story, softening it. The same is true, historically, of androids that are indistinguishable from humans. My upcoming novel, In Synthient Skin, pushes android consciousness into the hard end of the spectrum, while remaining a deeply human story.

While much of science fiction takes us to distant worlds and times, it is the near future that most interests some of us. We cannot predict what the future will be like, but we can certainly have fun speculating. Fifty years from now we will not have dragons, but barring a catastrophe, we will most certainly have robots. They won’t magically or mysteriously be just like us. They will have to be painstakingly designed and constructed to be that way, and a tremendous amount of work is being done to give us a realistic blueprint for success. It is this work that will be our focus here.

But do we really want that success? This is one of the traditional questions that science fiction has helped us explore. I am currently working on a robot trilogy that brings this question into modern focus, something that becomes increasingly urgent as technology surges ahead. There is still time to shape the future. Please contribute by adding your voice.

If you are interested in these topics, I suggest reading these blog entries in the order in which they were written, as they build on each other.

A tool-building species

A clinical psychologist, I have been studying people academically, professionally and personally for over 40 years. What I have learned over all that time is how little I know.

You, a human being, are simply the most complex thing in the known universe. Your brain alone can hold that distinction, but treating your brain as separate from the rest of you is misleading. Your brain is an integral and inseparable part of the larger open system that is you, and when all connected components and subsystems are included in the analysis, you are far more complex still than your brain alone. Get a whole bunch of people together, all open, interacting systems, and the complexity rises exponentially.

All of that complexity makes us a truly exceptional species. No other species studies itself the way we do. No other species could even aspire to building a working replica of itself. Indeed, given our complexity, the goal of building a human-like android is audacious in the extreme.

We set out on this quest to build replicas of ourselves long before we had any idea how difficult it would actually be. The relentless march of science, providing us with an ever expanding knowledge base, has enlightened us in this regard. The more we know, the more we see of how much we don’t know.

Yet that doesn’t stop us from pushing to expand the boundaries of our knowledge. On the contrary. Science has moved into numerous areas once thought to be beyond its purview, and whole new areas of study are continuously opening up. It is a daunting task just to list all the disciplines that will need to be drawn from in order to create a truly human-like machine.

But why? Why would we want to do that? Surely there are enough humans already, without trying to build more. Surely we have all heard the dire warnings, the disastrous consequences foreseen by generations of science fiction writers. Why even try to build intelligent machines?

Why build machines at all? It turns out we don’t have much choice. We are a tool-making species, and machines are our most powerful tools. We use tools to extend our capabilities. It was our tools that enabled us to carve out a dominant place in the natural world. We build machines to enable us to do things and go places that would otherwise be beyond us.

Early machines gave us physical power, enhancing our strength and speed. More recent machines give us intellectual power, augmenting our memories and calculating abilities. Essentially, we build machines to help us, and being as vulnerable and limited as we are in a vast, indifferent universe, we need all the help we can get.

When we need help, as social animals we instinctively want to turn to each other. Unfortunately, as individuals, we tend to be rather unreliable. Sometimes we are able to get what we need from each other, sometimes not. Machines reduce our reliance on each other.

We build machines to help us, but there are so many things we need help with that we need to surround ourselves with machines. It would require a machine of extraordinary versatility to improve on this, and there are no such machines. In fact, the most versatile and adaptive thing on earth is the human being, so if we want to build something truly helpful, it makes sense to use us as a model.

Are there risks in building such machines? Definitely. Which is why now is a good time to start figuring out how to get it right.

Machine consciousness

Two kinds of conscious machines have been popular in science fiction: conscious A.I. and conscious robots. Conscious A.I., essentially consciousness-in-a-box, is almost always portrayed as dangerous, as indeed it would be, if such a thing were possible. Conscious robots are generally portrayed as being just like us, and like us, they can be either good or bad.

In the vast majority of science fiction stories about conscious A.I., the machine consciousness happens by some mysterious accident, unplanned and unexpected. In stories about robots, machine consciousness is either just accepted as a given, or again just somehow spontaneously happens. Both of these portrayals of machine consciousness are pure fantasy, literary devices needed when human consciousness was still a complete mystery.

In The Science Behind In Synthient Skin, I will try to explain how machine consciousness is indeed plausible, and why it will never happen by accident. While we certainly don’t yet have all the answers, we now know enough about human consciousness to know that there are specific requirements that must be met before it can emerge.

Even this rudimentary knowledge allows us to debunk many common beliefs, like the one that holds that if you feed enough data about the world into an A.I., at a certain point consciousness will spontaneously appear. The evidence to date concurs with the neuroscience: it doesn’t matter how many yottabytes of data you feed into an A.I., it will still be no more conscious than your laptop.

Science allows us to confidently dismiss another common theme in fiction: the idea that machine consciousness will be created by some mad scientist in his lab. On the contrary, it will require an extraordinary team effort, akin to the Human Brain Project (https://www.humanbrainproject.eu/), a ten-year effort, currently underway, involving hundreds of researchers from over twenty countries. Android development will benefit directly from the fruits of this huge effort.

Before we can have an intelligent conversation about machine consciousness, we have to come to some agreement about the definition of terms. The field of A.I. has for decades been rife with semantic confusion, and the same will happen with machine consciousness research if no consensus can be reached. I am just laying out the problem, not claiming to have the answers.

The problem is that there is no agreed upon definition of intelligence, and no consensus on what consciousness even is. Both have been defined in many different ways and subdivided into many different aspects. Many, many passionate arguments have arisen between factions using the same word but unwittingly talking about different things.

To simplify things at the start, I will clarify that when I speak of consciousness, I am speaking of human-like consciousness, consciousness that we can all identify with, the consciousness of our shared experience. For our present purposes, cosmic or universal or animal or any other kind of “consciousness” is not what we are talking about. With regard to machine consciousness, anything other than human-like consciousness would be impossible for us to relate to or communicate with, or to share any kind of kinship with. In a machine, such alien consciousness, were it possible, would be truly dangerous.

No, what we need in conscious machines are ones we can relate to; machines that can in turn relate to us. As I will later show, human-like consciousness requires a human-like body. When we are speaking of machine consciousness, then, we are really talking about sentient androids. In future entries, I will explain why consciousness, in any recognizable form, cannot exist in a box, and I will discuss what “sentience” means in the context of robotics.

Intelligence vs. Sentience

The idea of intelligent machines has been around at least since the time of Turing’s work in the 1940s. Artificial intelligence (A.I.) has become a mature field, and is now poised on the brink of widespread application across all existing technologies. Despite this, early hopes were never realized, and today’s A.I. is not what it was once expected to be.

It became apparent early on that A.I., while being far superior to humans in doing certain specific things, could not come close to the general breadth of human intelligence. The goal of creating artificial general intelligence, or “strong A.I.,” has proven to be surprisingly difficult.

In my last blog I identified the difficulty in even defining what “intelligence” is. In fact, so many different definitions have been proposed that it is now accepted that there are many distinct facets of what could be called intelligence. (For a small sampling, see http://en.wikipedia.org/wiki/Human_intelligence)

We can hope to replicate the functions of many of these individual facets of intelligence, as we already do with decision making, numerical analysis and the other current A.I. applications. Even then, the result will be a number of individual A.I.s working in parallel; useful, certainly, but not the strong A.I. that was the original goal. Something will still be missing, and that something is sentience. The “general” nature of human intelligence arises from a foundation of sentience.
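To make that picture concrete, here is a deliberately crude Python sketch. Every module name and behavior here is my own invention, purely for illustration: a dispatcher routes each task to a narrow specialist. Each specialist can be excellent at its one job, but nothing in the system ties the modules together into anything like general understanding.

```python
def arithmetic_module(payload):
    # Narrow specialist: does sums, knows nothing about anything else.
    a, op, b = payload
    return a + b if op == "+" else a * b

def translation_module(payload):
    # Narrow specialist: a toy word-for-word French lexicon.
    lexicon = {"bonjour": "hello", "monde": "world"}
    return " ".join(lexicon.get(word, "?") for word in payload.split())

SPECIALISTS = {"math": arithmetic_module, "translate": translation_module}

def dispatch(kind, payload):
    # Route each task to its specialist. No shared model of the
    # world binds the modules together.
    return SPECIALISTS[kind](payload)

print(dispatch("math", (2, "+", 3)))           # -> 5
print(dispatch("translate", "bonjour monde"))  # -> hello world
```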

Sentience

Sentience, from the Latin “sentiēns” (feeling, perceiving), is the ability to feel, sense or experience perceptions subjectively. Sentience represents a distinct knowledge system, one that is distributed throughout the body, separate from but integrated with brain-based intelligence. Our bodies know things that we may not even be conscious of. They use this knowledge for self-maintenance and repair, for self-protection, and for many other autonomous functions. No conscious involvement is required from us. Arguably, all living creatures have some degree of sentience, as it is essential to our ability to survive and thrive.

Human sentience is complex and nuanced: through consciousness, we experience our own embodiment – we “feel” ourselves – while at the same time perceiving the world around us and our interaction with it. This body-based knowledge operates outside the realm of logic, reason and language, often even outside of our awareness, and so falls outside the traditional scope of A.I.

If we want to have a truly human-like android, we have to go beyond the limitations of machine intelligence and add machine sentience. The distinction between intelligence and sentience is crucial to understand, but in practice, they are interdependent qualities. Human-like sentience will require intelligence, and true general intelligence will require sentience.

Sentience is fundamental to human consciousness.

The foundation of human consciousness, as eloquently described by Antonio Damasio in his classic, The Feeling of What Happens, is the moment-to-moment orchestra of sensation arising from the biophysical activity of the body. This constant stream of data provides continuous feedback to the body’s maintenance systems, enabling the autonomic responses necessary to maintain homeostasis, nutrient balances, and waste management. It also provides feedback as to body position and orientation, and critical information about the local environment.

The brain integrates this flow of data into a coherent model of the current bio-state of the body as a whole, and this model is what we experience as our “self.” We have no awareness of the model per se, but we do perceive changes in it as they occur, and this ongoing stream of change is the content of our present experience.

In order for this process to work, we need four things: sensory and feedback data from critical functional systems; a way to compile the data from all sources into a single, coherent representational model; a memory system to make temporal comparisons and note changes as they occur; and a meta-level compiler to integrate those changes into the patterns of neural stimulation that are the stuff of experience.
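For the technically inclined, here is a minimal toy sketch of that loop in Python. Every name and number is invented for illustration and makes no claim to neurological accuracy; the point is only the shape of the process: sense, integrate into a model, compare against the remembered model, and treat the stream of detected changes as the “content” of the present moment.

```python
import random

def sense():
    """Toy stand-in for the body's sensory streams: a dict of
    channel -> reading (interoception plus environment)."""
    return {
        "temperature": round(37.0 + random.uniform(-0.3, 0.3), 1),
        "glucose": round(5.0 + random.uniform(-0.5, 0.5), 1),
        "posture": random.choice(["lying", "sitting"]),
    }

def integrate(readings):
    """Compile all sensory sources into one coherent body model.
    In this toy, the 'model' is just a snapshot of the readings."""
    return dict(readings)

def changes(previous, current):
    """Temporal comparison: note what changed since the last model."""
    return {key: (previous.get(key), value)
            for key, value in current.items()
            if previous.get(key) != value}

# The loop: sense -> integrate -> compare against the remembered model.
# The stream of detected changes plays the role of the 'content of
# present experience'; the model itself is never inspected directly.
model = integrate(sense())
for moment in range(3):
    new_model = integrate(sense())
    print(f"moment {moment}:", changes(model, new_model))
    model = new_model  # memory: hold the prior state for comparison
```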

This is not the entirety of human consciousness, but only the prerequisite, a base level that Damasio calls “core consciousness.” I will discuss the other levels in future blogs.

Sentience is fundamental to human consciousness and human intelligence. Unless an android is sentient, it cannot be conscious in the human sense, no matter how much “intelligence” is packed into it.

In my next blog I will further explain how body-based knowledge lays a critical foundation for brain-based knowledge.

Whole Body Intelligence

The Mechanism Me, as should be obvious from the title, pays homage to Asimov’s classic I, Robot series. It is my respectful attempt to update the venerable “positronic brain” with a robotic system that reflects current trends in science and technology.

Asimov’s prodigious foresight in envisioning a “positronic brain” anticipated the field of computational neuroscience by half a century. Only now can we start to fill in some of the details on how it might actually work.

There is a tremendous amount of work going on today with the goal of building an artificial brain, e.g., the Human Brain Project. A lot of the previous work in this area seems to have overlooked a key understanding coming out of the neurosciences: the human brain is not an isolated organ. Intelligence, sentience and consciousness are all the product of an integrated human knowledge system that includes the whole body. Building an artificial brain will not lead to something that thinks like us. An artificial brain needs to be housed in an artificial body before anything like human intelligence, sentience or consciousness can emerge.

Human-like artificial general intelligence requires the sentience that can only be provided by a body. Developmental neuropsychology tells us that the neurological substrates of logic and reason, primarily located in the cerebral cortex, are not yet functional in the first few years of life. Despite this, infants and toddlers learn at a prodigious rate. If their intellects are not even working yet, how are they learning? They are learning with their entire nervous systems, not just their brains. Bodies have their own separate memory systems. What are they learning? They are learning the foundational knowledge upon which all other knowledge will later be based.

How do we know what we know?

From day one, we learn from experience. Specifically, we learn from feeling our bodies as they experience their environments. Through repeated and ongoing exposure to stimuli, our bodies learn associations, that is, what goes with what. Some forms of stimulation are associated with good feelings, some with bad. The parts of our knowledge system that control our bodies, our body-brains, are designed to learn to make movements that produce desired effects (comfort, pleasure) and inhibit those that produce undesired effects (discomfort, pain).
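As a toy illustration of that learning rule, here is a hedged Python sketch. Everything in it is invented for illustration; a real body obviously carries no lookup table of feelings. A learner tries movements at random, receives a felt valence, and gradually strengthens what feels good while inhibiting what hurts.

```python
import random

# Hypothetical felt valences for a few movement outcomes; these
# numbers are invented. In a real body they would arise from the
# body's own comfort and pain signals, not a lookup table.
FELT_VALENCE = {"reach_soft": 1.0, "flail": -0.2, "press_hard": -1.0}

preferences = {move: 0.0 for move in FELT_VALENCE}  # learned associations
LEARNING_RATE = 0.1

for trial in range(500):
    move = random.choice(list(FELT_VALENCE))  # random exploration
    felt = FELT_VALENCE[move]                 # comfort or discomfort
    # Nudge the association toward the felt outcome: movements that
    # feel good are strengthened, movements that hurt are inhibited.
    preferences[move] += LEARNING_RATE * (felt - preferences[move])

print(preferences)  # reach_soft drifts toward 1.0, press_hard toward -1.0
```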

As infants, our limbs randomly move about and encounter things. We discover that some things move when touched and some don’t, some things are graspable and some aren’t. The things that move can be within our reach or move out of it. Through efforts to reach, we learn to distinguish between near and far, the core concept of the near-far dichotomy. As we explore whatever comes within reach, we learn that things have differing qualities, requiring different sorts of interactions. We learn distinctions, the easiest always involving opposite poles of a dichotomy. Some things feel warm and some cold. Soft things are safe against our skin but hard things can hurt. We learn the concept of the soft-hard dichotomy. Some things we can lift or move, others we can’t: The light-heavy dichotomy.

We gradually figure out that through our own muscle movements we can change our orientation to the world around us. We can roll over, push our eyes away from the ground and gain a higher perspective. Higher still and we’re sitting up, but at constant risk of falling down again. Now we know up-down. All of this learning is done by the subcortical regions of our nervous systems, as our cerebral cortex and hippocampal memory systems are not yet functional. In other words, our foundational knowledge is not the conscious knowledge of the reasoning brain, but rather the sensory, emotional and procedural knowledge acquired by the deeper body-brain regions through felt experience.

Through our body’s experience of its environment we learn all the core dichotomies that form the foundation for all the knowledge we will ever possess. Hunger-satiation, pleasure-pain, hot-cold, each dichotomy creates the raw material for analogy. Something can be not physically hot but have a quality that evokes the experience of hot, and we understand what that means. Something with no mass can be described as “heavy” and the description makes sense because we know what heavy feels like. Over time our dichotomies become more sophisticated and complex and form the basis for increasingly abstract analogies. Something new must be like something we already know, grounded in primal sensory experience, before we can make any sense of it. This is the contribution of the body-brain to human knowledge, and without this body-based knowledge, no machine can ever know the world as a human does.

It used to be thought that if a database of useful information about the world was extensive enough, it would enable a computer to understand things. There is no magical threshold of complexity at which consciousness spontaneously arises. We now know that there are specific conditions that must be met. Consciousness is both embodied and relational. Self can only be known from the experience of being-in-the-world provided by a body, and self can only be recognized in relation to other. Similar bodies produce similar experiences, and so create the ground for a common experience of consciousness, a commonality that enables us to know ourselves by reflection and to relate to each other as beings. No matter how extensive the database, all a computer can do is regurgitate the data. It can’t understand the data in the human sense. Artificial Intelligence, as originally conceived, can never acquire consciousness.

Strong A.I. and Sentience

Artificial General Intelligence, or strong A.I., will require sentience, which requires a sensor-rich body and a brain. The body is needed to accumulate and feed real-time sensory data to the brain. Attaching sensors to robots is as old as robotics, but human-like sentience requires an enormous amount of sensory data. A lot of good work is going on in the field of machine sensing, at least with the primary senses.

To complete the system, the brain must be able to process sensory data into meaningful perceptions that can be remembered, compared and contrasted; things that A.I. happens to do very well. In order for an android to be intelligent in a way that we can relate to, it must have a fully integrated brain-body system. Like us.

Next: biomimetics.

Biomimetics and Neuromimetics

The more we learn about how we humans work, the more able we are to use biomimetics to leap forward with our technology. Biomimetics is the mimicking of biology. Many of our most promising new technologies borrow from the remarkable solutions arrived at by nature over millions of years of evolution. Examples of biomimetic products include Velcro, aerogels, superhydrophobic surfaces, efficient wing shapes and solar cells, ultra-strong materials, and many, many more, with a multitude of new projects going on in labs all over the world, all seeking to imitate some extraordinary feature found in nature.

Robot design will benefit from biomimetics in numerous ways. Androids, by definition, are imitations of the human body plan; that is, humanoid. But the imitation can and will go much deeper. The human skeletal system, with its attached muscles and tendons, differs in significant ways from the typical chassis frame of current androids, and this difference is reflected on the functional level. It turns out that human locomotion is remarkably efficient, and so one approach to making android locomotion more efficient will be to mimic the human gait, which will require a similar musculoskeletal construction.

Nature, through natural selection, arrives at solutions that are sufficient to enable the organism to survive and thrive in the face of whatever challenges are present in the local environment. It is important to note that these are sufficient solutions: often far better than anything humans could think up on their own, but not necessarily optimal. This means that biomimetics, in the hands of competent researchers and engineers, offers an initial leap forward, but not necessarily an end point. Once we figure out how nature accomplishes a task, we may be able to improve on it even further.

Take the human spine, for example. From an engineering perspective it is a mind-boggling structure, breathtaking in its complexity, flexibility, strength and durability. It enables humans to stand erect for extended periods, carry heavy weights, and move, flex, tumble and roll, all while supporting the movements of all limbs. And yet it is far from perfect, as evidenced by the number of back problems people experience. As we develop future android chassis, we will have the benefit of using what works well, and trying to improve upon it.

The androids of the near future will not look mechanical, like the robots of today, with steel rods, gears and wires, and metal covering plates held on with bolts and screws. All of these industrial age materials will be obsolete within the next 10–20 years, replaced by new, much lighter and stronger materials, molded or printed into the desired shapes. Many of these materials are already in development, inspired by biomimetics.

Android chassis will not be the only things to benefit from biomimetics. So will their brains. One specialized area of biomimetics is neuromimetics.

Neuromimetics is the mimicking of the nervous system, including the brain. While this does, of course, require a solid understanding of how the system works, it also helps us to further refine our models. If our emulations don’t work the way we expect, we can go back to our models to figure out where we’ve gone wrong and try again.

There are actually two branches of neuromimetics: medical and technological. It is technological neuromimetics to which I refer when I use the term, that is, building technology that mimics the workings of the nervous system and brain. The use of neuromimetics is still in its infancy. A current example is seen in neuromorphic chips: microprocessors configured to resemble the wiring of the brain, rather than that of traditional circuits. (See The MIT Technology Review – http://www.technologyreview.com/featuredstory/526506/neuromorphic-chips/)
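To give a feel for what “resembling the wiring of the brain” means computationally, here is a minimal sketch of a leaky integrate-and-fire neuron, the basic spiking unit that neuromorphic hardware implements in silicon. The parameter values are arbitrary, chosen only to make the behavior visible; this is a textbook simplification, not a description of any particular chip.

```python
# Minimal leaky integrate-and-fire neuron. The membrane voltage
# leaks toward its resting level, integrates incoming current, and
# emits a spike when it crosses threshold. All values are arbitrary.
REST, THRESHOLD, LEAK, DT = 0.0, 1.0, 0.1, 1.0

def simulate(input_current, steps=50):
    voltage, spike_times = REST, []
    for t in range(steps):
        # Decay toward the resting potential, then add the input.
        voltage += DT * (-LEAK * (voltage - REST) + input_current)
        if voltage >= THRESHOLD:   # threshold crossed: fire
            spike_times.append(t)
            voltage = REST         # reset after the spike
    return spike_times

print(simulate(0.05))  # weak input: the leak wins, no spikes
print(simulate(0.30))  # strong input: a regular spike train
```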

It will be exciting to see where this work leads in the next decades, as its fruits are combined with those of the Human Brain Project (https://www.humanbrainproject.eu/) and other efforts. Computers can be expected to become much more capable and efficient. But will they be able to think?

Let’s explore that question next time.

Can machines think?

Here is a riddle. I am invisible and obvious at the same time. I seem so substantial that I occupy most of your waking attention yet I am completely insubstantial. I carry the apparent weight of critical importance while being largely irrelevant. I constantly blind-side you from out of nowhere but am under your control. I am natural yet completely disregard the laws of nature. What am I?

I’m your thoughts, of course. Thinking is so subjectively invisible that few people ever notice they are doing it. When attention is focused on it, it is so subjectively obvious that few stop to wonder about it. So, can a machine think? That depends on how you define the word “think”, which turns out to be a lot harder than you might think.

What is thinking? What does it mean to think?

These are questions only a psychologist could love. Don’t ask me why, but psychologists tend to love hard questions. But I won’t bore you with the long list of cognitive activities that are commonly referred to as thinking. As A.I. research pushes against its limits, researchers are increasingly turning to the cognitive sciences for answers. And the short answer is, no, machines cannot think. At least not yet. (Eventually they will.) When they are computing, they perform a limited array of cognitive tasks, and can do so very well. Some can even learn. But they are not thinking in the way that humans think.

Computers execute instructions encoded in binary code, and what those instructions produce is more binary code. Human thought consists largely of mental images (cognitive maps and models), which include symbols, words and sounds as well as pictures, all tied to previous experience.

Computers remember by storing bits and recording their location for later access. They are not reminded of things, nor do they try to recall things. Humans have at least three distinct memory systems, each working out of different brain subsystems. We can “think” about the past, but we also remember lots of things without thinking about them, such as how to ride a bike.
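The contrast is easy to show in code. In this hedged sketch (the data and the similarity measure are invented for illustration, and no claim is made about actual brain mechanisms), a computer retrieves only by exact address, while an associative system is “reminded” by whatever stored pattern best matches a partial cue:

```python
# Location-addressed memory: retrieval requires the exact key.
ram = {0x2F: "grandmother's face"}
print(ram[0x2F])  # works only if you already know the address

# Associative recall: a partial, noisy cue retrieves whichever
# stored pattern is most similar -- closer to being 'reminded'.
memories = {
    (1.0, 0.0, 0.9): "grandmother's face",
    (0.0, 1.0, 0.1): "how to ride a bike",
}

def remind(cue):
    # Return the memory whose stored features lie nearest the cue
    # (smallest squared distance).
    def distance(features):
        return sum((f - c) ** 2 for f, c in zip(features, cue))
    return memories[min(memories, key=distance)]

print(remind((0.9, 0.1, 0.8)))  # a degraded cue still recalls grandmother
```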

Machines are getting pretty good at imitating some of our mental activities. In fact, I consider the Turing Test to be obsolete. There are apparently many computers that can now pass it. They do so with the help of some very good programming that enables them to imitate people quite well. It turns out we are not that hard to imitate in casual conversation. We tend to be quite mechanical with each other, speaking in clichés and well-worn scripts. But just because a machine can simulate a conversation does not mean that it can think. The process that generates the words has nothing in common with how we generate speech.
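For a taste of how little machinery casual conversation actually requires, here is a bare-bones sketch in the spirit of the 1960s ELIZA program. The patterns are my own, far cruder than anything that has passed a Turing Test, but the principle is the same: surface pattern matching keeps the conversation going with no understanding anywhere behind it.

```python
import re

# A few canned reflection patterns in the spirit of 1960s ELIZA.
# The machine echoes the user's words back with no model of meaning.
SCRIPT = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i think (.*)", "What makes you think {}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]
FALLBACK = "Please, go on."

def respond(utterance):
    for pattern, template in SCRIPT:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel anxious about robots"))
# -> Why do you feel anxious about robots?
print(respond("The weather is nice"))
# -> Please, go on.
```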

Machines can now perform an impressive array of cognitive tasks. If you want to call what machines do “thinking”, then feel free. But it is a qualitatively different thing from what goes on in the human mind. It is not “human-like” thinking. Strong A.I. will require the ability to think in a human-like way. As you know from my earlier posts, I believe that sentience is the long-overlooked key to success. If I am right, the first thinking machine will not be a computer. It will be an android. Why? Because sentience is an embodied process. It will be far easier to build sentience into an android than to simulate sentience in a box.