Hollywood has made movies that portray robots as uncontrollable, murderous brutes, feeding on the universal fear of new things. People have an innate fear of the unknown, and machines that think like us are even more frightening. What if the robots want to be in charge? Questions like these are raised, but the reality is far removed from fiction.
Although there have been advances, robots are still primitive machines, incapable of initiative or any thinking, let alone the creative thinking required to mount a revolution. They are little more than programmed dolls, running programs that analyze the input and control the machine. Every set of input parameters causes specific program steps to be executed, resulting in a defined and controlled action.
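As a minimal, hypothetical sketch of the point above: a programmed robot is essentially a fixed table mapping sensor inputs to predefined actions. All names and inputs here are illustrative, not taken from any real robot controller.

```python
# Minimal, hypothetical sketch of a programmed robot: a fixed table
# mapping sensor inputs to predefined actions. There is no learning
# or initiative here, only lookup of programmer-defined steps.

ACTIONS = {
    ("obstacle", "near"): "stop",
    ("obstacle", "far"): "slow down",
    ("clear", "near"): "move forward",
    ("clear", "far"): "move forward",
}

def control_step(sensor: str, distance: str) -> str:
    """Return the action the programmer defined for this input pair."""
    return ACTIONS[(sensor, distance)]

print(control_step("obstacle", "near"))  # -> stop
```

The same inputs always produce the same action; nothing in the table can surprise its programmer.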
However, a new Artificial Intelligence technology has been developed that is free from the limitations of a computer. It is not programmed like traditional computers but learns from sensory input. This technology is based on a digital emulation of the brain’s processes, enabling the machine to acquire knowledge in the same way that the brain acquires knowledge and evolves intelligence. Alan Turing is famous for the ‘Turing Test’ and the Turing machine, which carry his name, but the Turing Test is a poor reflection of the genius of this man. He was years ahead of his time when he wrote: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain”. Creating an infant brain is exactly the direction that this new technology takes. The infant brain is then provided with innate knowledge from a library of learned ‘Training Models’. The Synthetic Neuro-Anatomy is a low-level hardware emulation of the brain; it is described in my new book ‘Higher Intelligence’ (http://higherintelligencebook.com).
It learns by recognizing pattern sets through feedback within an existing structure. Learned information is inserted in information ‘pyramids’ within a digital ‘information carrier’ consisting of a matrix of digital nodes that emulate the function of synapses and brain cells. Each learned ‘training model’ is then placed in a library of functions. At first, these functions are simple: controlling the walk of a set of robot legs, recognizing sound patterns, speech, shapes, and so on. A simple combination of these training models provides the innate knowledge for a walking robot that has vision and understands speech. It then continues to learn as a child learns, building associations between training sets.
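The idea of combining entries from a library of learned functions can be sketched as follows. This is purely illustrative: the model names, their behavior, and the composition are my own hypothetical stand-ins, not the actual Synthetic Neuro-Anatomy library.

```python
# Hypothetical sketch: each learned 'training model' is treated as an
# opaque function kept in a library; a robot's innate knowledge is a
# combination of such models. Names and behavior are illustrative only.

library = {
    "gait_control": lambda legs: f"walking on {legs} legs",
    "speech_recognition": lambda audio: f"understood '{audio}'",
    "shape_recognition": lambda image: f"recognized a {image}",
}

def walking_robot(command_audio: str) -> str:
    """Combine models: understand a spoken command, then walk."""
    heard = library["speech_recognition"](command_audio)
    action = library["gait_control"](4)
    return f"{heard}; {action}"

print(walking_robot("go forward"))
# -> understood 'go forward'; walking on 4 legs
```

The point is that each model is a learned capability rather than a hand-written routine; the library only composes them.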
Learning at such a high level is unattainable with computer technology.
This presents a new challenge in evaluating the risks of A.I. We need to consider a few questions: What is the status quo of Artificial Intelligence? What is meant by risks? What shape will Artificial Intelligence take? And what is intelligence?
The status quo in Artificial Intelligence is best illustrated by examining a machine such as Watson. Watson is a supercomputer that plays Jeopardy!. Its programs run on six refrigerator-sized IBM Power7 computers with 32 cores each. They analyze the question to determine which key words to use, search a large database, select the correct answer from a large collection of possible answers, and finally convert the answer to synthesized speech. This is a very sophisticated search engine. When we examine the risks of such systems, it is obvious that beyond the humiliation of losing to a machine, there is little to fear. Watson and its like are not going to take over the world. The same is true for industrial and experimental robots. Programmed machines will do only what the programmer has defined them to do in their program steps. Computers are machines without any intelligence that simply repeat the same program steps for the same set of inputs. You could not build a movie around that, however; it would bore everyone to death. Fear of intelligent robots is not a realistic fear, because intelligent robots do not exist at this time.
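The "sophisticated search engine" pipeline described above can be sketched in a few lines. The toy database, stopword list, and function names are my own illustration of the keyword-search-rank idea, not IBM's actual implementation.

```python
# Hedged sketch of the question-answering pipeline described above:
# extract keywords, search a database, return the best match. The toy
# database and function names are illustrative, not IBM's actual code.

STOPWORDS = {"what", "is", "the", "of", "who", "wrote"}

DATABASE = {
    "capital france": "Paris",
    "hamlet": "Shakespeare",
}

def extract_keywords(question: str) -> str:
    """Drop stopwords and punctuation, keeping only the key words."""
    words = [w.strip("?.,") for w in question.lower().split()]
    return " ".join(w for w in words if w not in STOPWORDS)

def answer(question: str) -> str:
    """Look the keywords up in the database; no reasoning involved."""
    return DATABASE.get(extract_keywords(question), "no answer found")

print(answer("What is the capital of France?"))  # -> Paris
```

However elaborate the real system, the shape is the same: a deterministic mapping from question text to a stored answer.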
What are the rational risks of A.I. then?
There is the obvious mistake of putting too much trust in a programmed machine, the risk of losing control over the machines, the risk of unintentional damage, and the risk of incorrect data.
Programmers are human, and humans make mistakes. Program errors are quite common, and the risk of program errors increases with complexity. The programmer has to define a routine to deal with each possible combination of inputs; with many thousands of combinations, it is easy to miss one. Artificial Intelligence programs can be extremely complex. Program errors or omissions can cause unintended behavior or leave the machine unresponsive. Either can result in disaster if the machine happens to control the aerodynamics of a large aircraft.
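A toy example of the missed-combination failure mode described above. The controller, its inputs, and its commands are entirely hypothetical; the point is only that a case-by-case enumeration can silently omit one combination.

```python
# Illustrative bug in a hypothetical controller: the programmer
# enumerates input combinations case by case and misses one, so an
# unanticipated input fails at runtime instead of acting safely.

def flap_command(speed: str, altitude: str) -> str:
    if speed == "low" and altitude == "low":
        return "extend flaps"
    if speed == "low" and altitude == "high":
        return "extend flaps"
    if speed == "high" and altitude == "high":
        return "retract flaps"
    # The ("high", "low") combination was never defined: a missed case.
    raise ValueError(f"unhandled input: {(speed, altitude)!r}")

print(flap_command("low", "low"))   # -> extend flaps
# flap_command("high", "low") raises ValueError at runtime
```

With two inputs of two values each, the gap is easy to spot; with thousands of combinations, it is not.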
The error does not have to be in the program; it can be in the data that is available to the program. In 1979 a DC-10 with 257 people on board collided with Mt. Erebus in Antarctica, killing everyone on board, because of a ‘correction’ made to the fatal flight’s routing: a data error that went undetected.
Self-driving cars, traffic light systems and aircraft that ‘fly by wire’ are all susceptible to similar problems. Remote-controlled machines, such as UAV drones and bomb disposal robots, are not relevant here because they are tools controlled by a human operator.
A recent direction in Artificial Intelligence, also called Artificial General Intelligence, continues to make the same mistakes that were made in the past. Intelligence is not the ability to measure and control. Rather, it is the ability to learn, to acquire and apply knowledge and skills, to reason, to deal with unknown or trying situations, and to have mental acuteness. It is tightly associated with a level of awareness and creativity. A learning machine, such as the Synthetic Neuro-Anatomy chip, consists of thousands of digital nodes that simulate the function of a cortical column. The architecture of the chip relates directly to the biological model. The machine learns and evolves intelligence over time. This is a new direction in digital intelligence that must not be underestimated.
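To make the contrast with programmed machines concrete, here is a minimal feedback-learning node, loosely in the delta-rule style. This is emphatically not the Synthetic Neuro-Anatomy design (whose details are in the book); it only illustrates behavior that is learned from input and feedback rather than defined by program steps.

```python
# Minimal feedback-learning sketch (loosely a delta rule). NOT the
# Synthetic Neuro-Anatomy design, only an illustration of a node that
# learns from input and feedback rather than executing fixed steps.

class Node:
    def __init__(self, n_inputs: int, rate: float = 0.1):
        self.weights = [0.0] * n_inputs  # starts with no knowledge
        self.rate = rate

    def output(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs))

    def learn(self, inputs, target):
        """Feedback step: nudge weights to reduce the output error."""
        error = target - self.output(inputs)
        self.weights = [w + self.rate * error * x
                        for w, x in zip(self.weights, inputs)]

node = Node(2)
for _ in range(100):              # repeated exposure to one pattern
    node.learn([1.0, 0.0], 1.0)
print(round(node.output([1.0, 0.0]), 3))  # converges toward 1.0
```

No programmer wrote the final response into the node; it emerged from repeated exposure and feedback, which is the property that makes this direction different in kind.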
The risks of this new technology will need to be evaluated as the technology develops. This is not a ‘singularity’, a sudden emergence of intelligence, but a gradual process in which machine intelligence increases over time as more and more sophisticated training models are uploaded to libraries and larger matrices of artificial neural cells become available. Therefore we will have time to develop rules and safeguards while building up both the technology and the experience.