Why Artificial General Intelligence (AGI) has been so elusive

There is a simple reason why Artificial General Intelligence (AGI) has been so elusive. To date, most work on AGI has been based on a misconception: that human intelligence is produced by the brain. Yes, the brain plays a large role, but it does not act alone. It is but one part of a larger functional system: the central nervous system (CNS).

The division between brain and body is a vestige of an obsolete dualism that continues to exert a strong influence on science education. Yet the division is a false one. The confusion persists because the brain is structurally separable (a definable organ) but functionally inseparable from the CNS. The brain depends on input from the body in order to work.

As long as AGI research seeks a functional human-like intelligence, it is a fallacy to treat the brain as a separable component. The data that contributes to general intelligence is collected and processed by the entire CNS. There are clusters of neurons outside the brain, particularly in the spinal cord, that process information and pass it up the line – little mini-brain sub-processing stations, if you like. As I explained earlier, the sub-cortical body-brain networks produce the foundational knowledge upon which all other knowledge is based.
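To make the architectural claim concrete, here is a minimal toy sketch in Python. It is not a model of real neurophysiology, and every name and threshold in it (SpinalNode, Brain, the 0.1 cutoff) is a hypothetical illustration: the point is simply that the central integrator never sees raw input, only what the peripheral sub-processors choose to pass up.

```python
# Toy sketch of distributed, CNS-style processing. All names and
# thresholds are hypothetical illustrations, not physiology.

from dataclasses import dataclass, field


@dataclass
class SpinalNode:
    """A local sub-processing station: filters and summarizes raw signals."""
    name: str

    def process(self, raw_signal: list[float]) -> dict:
        # Local work happens here: sub-threshold noise never reaches the brain.
        filtered = [s for s in raw_signal if abs(s) > 0.1]
        return {
            "source": self.name,
            "peak": max(filtered, default=0.0),
            "count": len(filtered),
        }


@dataclass
class Brain:
    """The central integrator: it only ever receives pre-processed summaries."""
    summaries: list[dict] = field(default_factory=list)

    def integrate(self, summary: dict) -> None:
        self.summaries.append(summary)

    def decide(self) -> str:
        strongest = max(self.summaries, key=lambda s: s["peak"], default=None)
        return f"attend to {strongest['source']}" if strongest else "rest"


if __name__ == "__main__":
    brain = Brain()
    for node, signal in [
        (SpinalNode("cervical"), [0.05, 0.3, 0.8]),
        (SpinalNode("lumbar"), [0.02, 0.04, 0.09]),
    ]:
        brain.integrate(node.process(signal))
    print(brain.decide())  # -> attend to cervical
```

Remove the SpinalNode layer from the sketch and the Brain class has nothing to decide with, which is the point of the paragraph above: the central module is only as capable as the stream of pre-processed body signals it receives.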

AI has now reached the point where it can be widely adopted, and in its current form it will continue to grow and flourish as a useful tool. The next frontier is AGI, or “strong” AI, and this is where further innovation is needed. Human intelligence is conscious. Strong AI, if it is to be truly human-like, will also need to be conscious. So when we speak of strong AI, like it or not, we are really speaking of machine consciousness.

However, long before machine consciousness is achieved, we will be able to combine numerous coordinated AIs to produce a good imitation of AGI. It will exhibit “general intelligence,” in that it will be able to perform functions across a range of spheres of intelligence. It will do so by utilizing broad sets of AIs, each dedicated to a specific function. It will be a very useful machine, but it will not seem to have human-like intelligence. It will not have the versatility of the human mind, and we will not be able to identify with it as something like us.
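A hedged sketch of what such a patchwork system might look like, again in Python, with entirely hypothetical module names (vision_ai, language_ai, planning_ai) and a simple keyword router standing in for whatever coordination layer a real system would actually use:

```python
# Illustrative sketch of "imitation AGI": a dispatcher that routes each
# task to a narrow, specialized AI. All module names are hypothetical.

from typing import Callable


def vision_ai(task: str) -> str:
    return f"[vision] analyzed the image in: {task}"


def language_ai(task: str) -> str:
    return f"[language] parsed the text in: {task}"


def planning_ai(task: str) -> str:
    return f"[planning] produced a plan for: {task}"


# The "breadth" of the system lives entirely in this routing table.
ROUTES: dict[str, Callable[[str], str]] = {
    "image": vision_ai,
    "text": language_ai,
    "plan": planning_ai,
}


def dispatch(task: str) -> str:
    for keyword, specialist in ROUTES.items():
        if keyword in task.lower():
            return specialist(task)
    return "[fallback] no specialist available for this task"


if __name__ == "__main__":
    for request in ("describe this image", "summarize the text", "plan a trip"):
        print(dispatch(request))
```

The apparent generality comes from the routing table, not from any single component: remove one specialist and that entire sphere of intelligence disappears, which is the sense in which such a system imitates general intelligence without possessing it.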

The ‘seat of the intellect’ may be in the brain, but the intellect is only one component of human intelligence. AGI will require the functional equivalent of the rest of the CNS as well. It is this sub-cerebral CNS (body-based) processing that generates sentience. For a deeper discussion of the role of sentience in consciousness, please see my earlier posts.

The next step, then, is to expand our models of the brain to include the entire nervous system. Does this make the task ahead even more daunting? You bet it does. But modeling ourselves has been an audacious endeavor from the outset.