What are the risks of Sophisticated Artificial Intelligence, really?

Hollywood has made movies that portray robots as uncontrollable, murderous brutes. These films feed on the universal fear of new things. People have an innate fear of the unknown, and machines that think like us are even more frightening. "What if the robots want to be in charge?" and similar questions are raised. The reality is far removed from the fiction.

Although there have been advances, robots are still primitive machines, incapable of initiative or any real thinking, let alone the creative thinking required to mount a revolution. They are little more than programmed dolls, running software that analyzes the input and controls the machine. Every set of input parameters causes specific program steps to be executed, resulting in a defined and controlled action.
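The fixed input-to-action mapping described above can be sketched as a simple table lookup. The sensor names and actions here are purely hypothetical, invented for illustration:

```python
# Minimal sketch of a "programmed doll": every recognized input
# pair maps to one pre-defined action, and nothing else can happen.
ACTIONS = {
    ("obstacle", "near"): "stop",
    ("obstacle", "far"): "slow_down",
    ("clear", "near"): "advance",
    ("clear", "far"): "advance",
}

def control_step(sensor, distance):
    """Return the pre-programmed action for one set of inputs.

    The machine cannot improvise: inputs outside the table fall
    through to a default the programmer chose in advance.
    """
    return ACTIONS.get((sensor, distance), "halt")
```

The same inputs always produce the same action; there is no room for initiative anywhere in such a design.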
However, a new Artificial Intelligence technology has been developed that is free from the limitations of a computer. It is not programmed like a traditional computer; it learns from sensory input. The technology is based on a digital emulation of the brain's processes, enabling the machine to acquire knowledge in the same way the brain does and to evolve intelligence.

Alan Turing is famous for the Turing Test and the Turing machine that carry his name, but the Turing Test is a poor reflection of the genius of this man. He was years ahead of his time when he wrote: "Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain." Creating an infant brain is exactly the direction this new technology takes. The infant brain is then provided with innate knowledge from a library of learned 'Training Models'. The Synthetic Neuro-Anatomy is a low-level hardware emulation of the brain. It is described in my new book 'Higher Intelligence' (http://higherintelligencebook.com).
It learns by recognizing pattern sets through feedback within an existing structure. Learned information is inserted into information 'pyramids' within a digital 'information carrier', consisting of a matrix of digital nodes that emulate the function of synapses and brain cells. Each learned 'training model' is then placed in a library of functions. At first these functions are simple: controlling the walk of a set of robot legs, recognizing sound patterns, speech, shapes, and so on. As a simple example, a combination of these training models provides the innate knowledge for a walking robot that has vision and understands speech. The machine then continues to learn as a child does, building associations between training sets.
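The idea of assembling innate knowledge from a library of learned training models might be sketched like this. All names and behaviors are hypothetical; this is not the actual Synthetic Neuro-Anatomy interface:

```python
# Hypothetical sketch: each learned skill lives in a library, and a
# machine's innate knowledge is an assembled combination of skills.
LIBRARY = {
    "walk":   lambda inputs: "gait adjusted",
    "hear":   lambda inputs: "sound pattern recognized",
    "see":    lambda inputs: "shape recognized",
    "speech": lambda inputs: "utterance understood",
}

def load_innate_knowledge(skill_names):
    """Combine pre-learned training models into one machine."""
    return {name: LIBRARY[name] for name in skill_names}

# A walking robot that has vision and understands speech:
robot = load_innate_knowledge(["walk", "see", "speech"])
```

The point of the sketch is only that the skills are learned once and then reused; further learning builds associations on top of this innate base.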
Learning at such a high level is unattainable with computer technology.
This presents a new challenge in evaluating the risks of A.I. We need to consider a few questions: What is the status quo of Artificial Intelligence? What is meant by risk? What shape will Artificial Intelligence take? And what is intelligence?
The status quo in Artificial Intelligence is best illustrated by examining a machine such as Watson. Watson is a supercomputer that plays Jeopardy!. Its programs run on six refrigerator-sized IBM Power7 computers with 32 cores each. They analyze the question to determine which key words to use, search a large database, select the correct answer from a large collection of candidates, and finally convert that answer to synthesized speech. In essence, this is a very sophisticated search engine. When we examine the risks of such systems, it is obvious that beyond the humiliation of losing to a machine, there is little to fear. Watson and its like are not going to take over the world. The same is true for industrial and experimental robots. Programmed machines are just going to do what the programmer has defined them to do in the program steps. Computers are machines without any intelligence that simply repeat the same program steps for the same set of inputs. You could not build a movie around that, however, because it would bore everyone to death. Fear of intelligent robots is not a realistic fear, because intelligent robots do not exist at this time.
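The pipeline described above (keyword extraction, database search, answer selection) can be caricatured in a few lines. The data and scoring below are invented for illustration and bear no relation to Watson's actual implementation:

```python
# Toy question-answering pipeline: extract key words, search a tiny
# "database", and rank candidates by keyword overlap.
STOPWORDS = {"what", "is", "the", "of", "a", "an", "on"}

DATABASE = {
    "Mount Everest": "highest mountain on earth",
    "Pacific Ocean": "largest ocean on earth",
    "Nile": "longest river in africa",
}

def answer(question):
    # 1. Analyze the question to determine which key words to use.
    words = {w.strip("?.,!").lower() for w in question.split()}
    keywords = words - STOPWORDS

    # 2. Score each database entry by keyword overlap.
    def score(item):
        _, description = item
        return len(keywords & set(description.split()))

    # 3. Select the best-matching answer.
    best_name, _ = max(DATABASE.items(), key=score)
    return best_name
```

However elaborate the real system, the character is the same: search and ranking, not understanding.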

What are the rational risks of A.I. then?

There is the obvious mistake of putting too much trust in a programmed machine, the risk of losing control over the machines, the risk of unintentional damage, and the risk of incorrect data.
Programmers are human, and humans make mistakes. Program errors are quite common, and the risk of program errors increases with complexity. The programmer has to define a routine to deal with each possible combination of inputs; with many thousands of combinations, it is easy to miss one. Artificial Intelligence programs can be extremely complex. Program errors or omissions can cause unintended behavior or make the machine unresponsive. Either can end in disaster if the machine happens to control the aerodynamics of a large aircraft.
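A missed input combination is easy to illustrate. The sensor names and modes below are hypothetical and not taken from any real flight-control system:

```python
# Hypothetical controller written as explicit cases. The final
# branch is the one a programmer could easily forget: without it,
# two simultaneous sensor failures would leave the caller with no
# command at all (the function would silently return None).
def pitch_mode(airspeed_ok, altitude_ok):
    if airspeed_ok and altitude_ok:
        return "auto"            # normal operation
    if airspeed_ok and not altitude_ok:
        return "hold_pitch"      # degraded mode
    if not airspeed_ok and altitude_ok:
        return "hold_altitude"   # degraded mode
    return "manual_reversion"    # the easily-missed fourth case
```

With two boolean inputs there are only four cases; with dozens of sensors the combinations run into the thousands, and exhaustive coverage becomes genuinely hard to guarantee.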
The error does not have to be in the program; it can be in the data available to the program. In 1979, a DC-10 with 257 people on board flew into Mt. Erebus in Antarctica, killing everyone on board, because of a 'correction' made to the fatal flight's routing: a data error that went undetected.
Self-driving cars, traffic-light systems, and fly-by-wire aircraft are all susceptible to similar problems. Remote-controlled machines such as UAV drones and bomb-disposal robots are not relevant here, because they are tools controlled by a human operator.

A recent direction in Artificial Intelligence, known as Artificial General Intelligence, continues to make the same mistakes that were made in the past. Intelligence is not the ability to measure and control. Rather, it is the ability to learn, to acquire and apply knowledge and skills, to reason, to deal with unknown or trying situations, and to exercise mental acuteness. It is tightly associated with a level of awareness and creativity. A learning machine such as the Synthetic Neuro-Anatomy chip consists of thousands of digital nodes that simulate the function of a cortical column. The architecture of the chip relates directly to the biological model. The machine learns and evolves intelligence over time. This is a new direction in digital intelligence that must not be underestimated.
The risks of this new technology will need to be evaluated as it develops. This is not a 'singularity', a sudden emergence of intelligence, but a gradual process in which machine intelligence increases over time as ever more sophisticated training models are uploaded to libraries and larger matrices of artificial neural cells become available. We will therefore have time to develop rules and safeguards while we build up both the technology and our experience with it.

Higher intelligence at MAICS

Comments

  1. Yoshi

    Thank you very much for your response and for purchasing my book. The technology that I am currently working on is an autonomously learning spiking neural network with dynamic synapses, modeled on the functions of the brain. I am foremost a computer scientist, and my foray into neurology was purely to design better, more realistic networks that are capable of learning at the same rate as humans do. Our neural devices are created in hardware and do not get programmed: they learn. We have already proven the technology in a prototype chip that learns.

    We need to look at learning in a different manner. The high level learning that occurs in adults and children over 3 years of age has deep foundations. In the first months of our life we accumulate a huge amount of information. The brain starts building the mind, within a framework that was determined by DNA. DNA wires the eyes to visual processing centers of the brain, but the intricacies of vision and recognizing objects are learned. Learning is a bit like an iceberg – 95% is submerged in the unconscious or subconscious brain. All the neurons in the limbic system and brain stem have synapses, and what is learned there is completely unavailable to us, but it forms the basis and framework for conscious learning.

    If we can get our next round of funding, we will eventually build robots that have a 'brain' consisting of these artificial neurons. It is then not the design or structure of the robot, but its teaching, that defines it. We build the lowest level of knowledge into the devices in the lab. We do this by exposing the artificial neurons to frequencies in the audio spectrum (to train the cochlea and low-level auditory cortex), to line fragments at different angles (to train the visual cortex), and so on.

    My expertise is not in genetics or genetic engineering. Playing with DNA and combining DNA of different species to create another type of intelligence is, in my opinion, far removed from reality. We may be able to put a gene here and there into plants, or make mice that glow in the dark, but we do not know which genes are responsible for building the brain structures that define any particular talent. The entire DNA-to-brain-structure process is poorly understood and depends on combinations of genes. Intelligence evolves within a predefined structure; it does not exist at birth. Intelligence = brain structure + knowledge.

    Best regards,
    Peter AJ van der Made
    BrainChip Inc.

  2. Gary Bosdijk

    Hello Peter, fantastic book, and thank you.
    I'm wondering whether there are any updates on your progress with this development.
    Thank you.