By Peter AJ van der Made.
Author of “Higher Intelligence: How to Create a Functional Artificial Brain”, available through bookstores and online at Amazon.com.
1/ Brains are analogue, computers are digital.
Von Neumann wrote in the 1950s that neurons are digital with some analogue characteristics. Analogue information can be expressed digitally: music on a CD and movies on DVD and Blu-ray are examples of analogue information that is stored digitally. This is the design philosophy of a new technology that is described in my book “Higher Intelligence” (http://higherintelligencebook.com), which is being launched this week. The subtitle of the book is “How To Create a Functional Artificial Brain”. This technology emulates the analogue processes of synapses and the characteristics of the neural cell membrane as digital values. The design is therefore entirely digital, but it is not the case that a single bit represents a synapse, as it did in old neural networks. All the processes of a biological synapse are realistically represented in 223 gates, including feedback, the synaptic cleft and the neurotransmitter receptors on the neuron membrane.
The receptor register Ra,b.. is increased each time an input pulse is received, until the neurotransmitter level is depleted. In between pulses the receptor value is decremented at a rate that is characteristic of the neurotransmitter type, set in the Decline Rate register. Many receptor values from individual synapses are summed in the dendrite circuit. The sum of the dendrites forms the membrane potential level, which is integrated in the neural cell body. This integrated value is compared to a variable threshold V, which depends on the state of the neuron body over time. The output of the comparator is input to a pulse shaper, which determines what the output pulse or pulse train looks like. The Pulse timing signal measures the time delay between multiple input pulses and the output pulse, and also determines which pulse occurred first. The synapse level is increased or decreased depending on this pulse delay factor, which causes both STDP (Spike Timing Dependent Plasticity, driven by repetition) learning and BCM (driven by intensity) learning to occur. The values stored collectively in the synaptic registers (e.g. all the synapses connected to a single neuron) are representative of a particular temporal-spatial pulse pattern. The synapse levels are not affected when no output pulse occurs. Each synapse consists of 223 gates; each neuron consists of over 6,000 gates. Synapses are connected to a bus that faithfully preserves the input pulse timing.
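The receptor-dendrite-integrator pipeline described above can be sketched in a few lines of Python. This is a minimal illustrative model, not the gate-level design from the book; all class and parameter names are my own.

```python
# Minimal sketch of the receptor/integrator pipeline described above.
# Names (Receptor, decline_rate, etc.) are illustrative, not the book's.

class Receptor:
    def __init__(self, decline_rate):
        self.level = 0.0
        self.decline_rate = decline_rate  # neurotransmitter-specific decay per tick

    def pulse(self, weight=1.0):
        self.level += weight              # an input pulse raises the receptor value

    def tick(self):
        # between pulses the value declines at the neurotransmitter's rate
        self.level = max(0.0, self.level - self.decline_rate)


class Neuron:
    def __init__(self, receptors, threshold):
        self.receptors = receptors
        self.threshold = threshold        # variable threshold V
        self.potential = 0.0              # integrated membrane potential

    def step(self):
        # dendrite circuit: sum all receptor values, then integrate
        self.potential += sum(r.level for r in self.receptors)
        for r in self.receptors:
            r.tick()
        if self.potential >= self.threshold:
            self.potential = 0.0          # reset after firing
            return True                   # comparator output -> pulse shaper
        return False


neuron = Neuron([Receptor(0.1), Receptor(0.3)], threshold=5.0)
neuron.receptors[0].pulse()
neuron.receptors[1].pulse()
fired = [neuron.step() for _ in range(5)]
print(fired)  # the neuron fires only once the integrated potential crosses V
```

Note how a single pair of input pulses can still trigger a delayed output, because the integrator accumulates the decaying receptor values over several ticks.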
2/ The brain uses content-addressable memory.
Content-addressable memory is memory that is recalled by part of what is stored in it. This may seem ridiculous; if you already have the data, why would you want to recall it from memory? The answer is simple – to see if it has been recognized and to complete the information if some part is missing. When we get an output from a neuron it indicates that a previously learned temporal-spatial pattern has been recognized. Many neurons, organized in a cortical column, collectively respond to complex patterns that represent objects in the real world. The brain goes one step further, and that is to recall what comes next. The brain predicts the next event, whether it is a movement, a word, a note in music or a flavor, and the sensory input then confirms it. Memory is stored in synapses. Inputs connect to synapses, and synapses are everywhere in the brain; therefore content-addressable memory is everywhere. The content of trillions of synapses forms our mind, which is the software that makes the brain work. Much of this information is acquired through learning, but a framework is put in place by DNA when the brain forms. This framework is like a lattice of innate knowledge, a framework for learning. It works in the same way in the Synthetic Neuro-Anatomy.
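The recall-by-partial-content idea can be shown with a toy example: given a fragment of a stored pattern, return the complete pattern that best matches it. This is an illustrative analogy only, not the synaptic mechanism described above.

```python
# Toy content-addressable memory: recall a stored pattern from a partial cue.
# Purely illustrative; the names and patterns are my own.

def overlap(stored, cue):
    # count positions where the cue's known bits match the stored pattern
    return sum(1 for s, c in zip(stored, cue) if c is not None and s == c)

def recall(memory, cue):
    # address by content: return the stored pattern that best matches the cue
    return max(memory, key=lambda stored: overlap(stored, cue))

memory = [
    (1, 0, 1, 1, 0),
    (0, 1, 1, 0, 1),
    (1, 1, 0, 0, 1),
]
cue = (1, 0, None, None, 0)   # partial content; None marks missing bits
completed = recall(memory, cue)
print(completed)
```

The point of the exercise is the second half of the answer above: the lookup not only recognizes the fragment, it also fills in the missing bits, which is the "predict what comes next" behavior in miniature.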
3/ The brain is a massively parallel machine; computers are modular and serial.
A computer has a data bus that is 32 or 64 bits wide. The processor receives a single program instruction from memory, and the address bus then selects the data word that the processor acts on. Every sequential program step acts in succession on data retrieved from memory, one bus-width word at a time. Even multi-core processors are sequential machines: each thread is a sequential process, and there are dependencies between threads. The brain, in contrast, is massively parallel. Its data bus is the equivalent of millions of bits wide. It has no address bus, since all information is addressed by content. The lines in its data bus express much more than a simple one or zero: the pulse timing, delay and intensity all carry significance. There are hundreds of neurotransmitters, contained in synapses, which have different persistence values; persistence is the rate at which the receptor value deteriorates. There are also neuromodulators, released in the cerebrospinal fluid, that affect the state of large groups of neurons.
4/ Processing speed is not fixed in the brain; there is no system clock.
Information flows through the brain unaffected by a clock signal, although there are ‘timing neurons’ that constantly generate pulse streams. In the Synthetic Neuro-Anatomy there is a system clock. Its main functions are to time the rate of decline of receptors and to measure the time between input and output pulses. The integrator is not a leaky integrator; rather, it receives values from many ‘leaky receptors’, which is more biologically accurate. Each receptor has a different rate of decline, derived from the clock signal, and there can be thousands of synapses. The integrator integrates the dendrite values continuously, providing a value to the axon pulse shaper.
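The idea of one system clock driving many receptors with different decline rates can be sketched as follows. The intervals and names here are illustrative assumptions, not values from the design.

```python
# Sketch: one system clock driving several 'leaky receptors', each with its
# own decline interval. All names and numbers are illustrative.

def run_clock(decline_intervals, start_level, ticks):
    # each receptor loses one unit every `interval` clock ticks
    levels = {name: start_level for name in decline_intervals}
    for t in range(1, ticks + 1):
        for name, interval in decline_intervals.items():
            if t % interval == 0 and levels[name] > 0:
                levels[name] -= 1
    return levels

# a fast-declining and a slow-declining receptor driven by the same clock
levels = run_clock({"fast": 2, "slow": 5}, start_level=4, ticks=10)
print(levels)
```

After the same ten clock ticks, the two receptors hold very different residual values, which is what lets one global clock produce many different neurotransmitter persistence behaviors.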
5/ Short-term memory is not like RAM
Short-term memory consists of values stored across many synapses. As in the brain, all memory in the Synthetic Neuro-Anatomy is distributed. Each node consists of a large number of synapses, a neuron integrator, a pulse shaper and glial cells. The glial cells synchronize the node and clean it up when necessary. The information is stored in synapses through a learning process, a combination of STDP (Spike Timing Dependent Plasticity, driven by repetition) and BCM (driven by intensity) learning. In this way the node learns to respond most strongly to a particular temporal-spatial pattern of pulses. Short-term memory is converted into long-term memory during a rest period, which we would call ‘sleep’. In a biological brain this process indexes memory through the hippocampus, in a way that is not clearly understood at this time.
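The combination of timing-based and intensity-based learning mentioned above can be sketched as a single weight update: the sign and size of the change depend on the pulse delay (STDP-like), scaled by the stimulus intensity (BCM-like). The constants and function names are my own illustrative assumptions, not the book's rules.

```python
# Sketch of a combined timing- and intensity-dependent synapse update.
# Constants (learn_rate, tau) and the exact rule are illustrative assumptions.

def update_synapse(weight, pre_t, post_t, intensity,
                   learn_rate=0.1, tau=5.0, w_max=1.0):
    delay = post_t - pre_t
    if delay > 0:
        # input preceded output: strengthen, more for shorter delays
        dw = learn_rate * intensity * (1.0 - delay / tau)
    else:
        # input followed output: weaken
        dw = -learn_rate * intensity * (1.0 + delay / tau)
    # clamp the synapse level to its register range
    return min(w_max, max(0.0, weight + dw))

w = 0.5
w = update_synapse(w, pre_t=0, post_t=2, intensity=1.0)  # causal pair: potentiate
print(round(w, 3))
```

Repetition strengthens the weight cumulatively over many such updates, while a stronger stimulus (higher `intensity`) strengthens it faster, giving the two learning modes described above. As in the text, no update occurs at all if the neuron never fires, since there is then no `post_t`.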
6/ No hardware/software distinction can be made with respect to the brain or mind.
I disagree. The brain consists of neurons, and each neuron has storage in thousands of synapses. That means that memory, and the information it contains, is everywhere. This information is in a sense the brain’s ‘program’, since it changes the behavior of neurons. Synaptic memory is filled with information when we learn; a baby would otherwise be born with complete knowledge. DNA puts a lattice of innate knowledge in place that forms a framework for learning, and we learn from sensory input at all levels. I agree that the distinction between brain hardware and brain software has been misunderstood. When we learn, new synapses form, and a synapse is a storage location. Not all the synapses are replaced, but new synapses contain new facets of existing knowledge. Synapses are updated every time they are referenced, and new information is inserted into existing synaptic memory structures. That is why we see physical changes in the brain when we learn.
7/ Synapses are far more complex than electrical logic gates
Absolutely; from the details above it is obvious that they are far more complex. That is why we use 223 gates to emulate a single synapse, and why feedback from the post-synaptic neuron triggers the learning response. We need to be rational in our approach to the complexity of the brain: some factors are significant, while others are likely insignificant to the computing model and are simply an artifact of the means by which our brains form. The learning process is significant, and has been overlooked in the Artificial Intelligence systems of the past. The fact that synapses are dynamic, and change every time they are referenced, is significant. Each neuron is like a complete computer, not like a transistor or a binary gate.
8/ Unlike computers, processing and memory are performed by the same components in the brain.
I would not call it ‘processing’, because that implies that some program is executed, and none is. Each node stores a large set of parameters in its synapses that define the temporal-spatial collection of pulse trains that the node responds to. That pulse train collection is learned through both repetition and the intensity of the stimulus; the memory that forms is directly related to the input pulse trains. A CPU is a bad fit for neural information: it has no means of receiving thousands of simultaneous input pulses and associating them with parameters stored in memory, and it forms a bottleneck, since all information has to pass through it to and from memory. The whole architecture of a ‘von Neumann’ computer, e.g. the PC as we know it, is badly suited to cognitive tasks. The brain’s architecture is completely different: it has no program memory, no CPU, no data memory and no address bus, only a massive data bus.
9/ The brain is a self-organizing system
Self-organization is inherent in the design of the brain. A dead neuron does not affect the functioning of the whole brain; in fact, thousands of neurons are lost and replaced every day. In our emulator, faulty nodes, caused by imperfections in the silicon wafer during manufacturing, are simply ignored and bypassed.
10/ Brains have bodies.
Yes, and bodies contain many neural cells and feedback paths to the CNS, as well as many sensory organs. If we need to build artificial humans then we will have to give the synthetic brain a body with the same neural feedback mechanisms. That does not mean that we cannot use smaller synthetic ‘brains’ to do useful work, such as in the recognition of visual images, audio streams, context recognition, limb control, etc. All these functions are learned and then stored in a function library for reuse in new devices. Eventually we will have enough functions in our training model library to build robots that have bodies.
11/ Bonus: The brain is much, much bigger than any current computer.
The brain contains 8.7 × 10^10 interconnected neural cores. We know that the brain functions in a completely different manner from a computer, which is why we designed the Synthetic Neuro-Anatomy in the first place, to complement the computer’s logic and arithmetic functions. With the 10,000 nodes per chip that are currently feasible in standard silicon manufacturing, it would take 8.7 million chips to build a human brain. If we were to use wafer-scale integration, each wafer could contain 50 million nodes, and it would take 1,740 wafers to build the human brain, which is more manageable. Power consumption per node is extremely low, because the nodes are not clocked and only consume power when they switch. The timing clock signal runs at a low 17 kHz. At 100 mW per chip, the entire artificial human brain would consume around 870 kW. This is still a lot more than the 20 watts that the brain consumes.
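The back-of-envelope figures above follow directly from the stated inputs, and can be checked in a few lines:

```python
# Sanity check of the scaling arithmetic, using only the figures in the text.

neurons          = 8.7e10   # interconnected neural cores in a human brain
nodes_per_chip   = 10_000   # feasible today in standard silicon
nodes_per_wafer  = 50e6     # with wafer-scale integration
power_per_chip_w = 0.1      # 100 mW per chip

chips  = neurons / nodes_per_chip
wafers = neurons / nodes_per_wafer
total_power_kw = chips * power_per_chip_w / 1000

print(f"{chips:,.0f} chips")        # 8,700,000 chips
print(f"{wafers:,.0f} wafers")      # 1,740 wafers
print(f"{total_power_kw:,.0f} kW")  # 870 kW, versus ~20 W for the brain
```

The power figure scales with the chip count, so moving to wafer-scale integration (fewer packages, shared overhead) is also the most direct route to closing some of the gap with the brain's 20 watts.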
It is likely that there is complexity in the brain which is not a requirement of its function. Bird flight is complex, with tendons and muscles that have to act in exactly the right way to give the bird lift, yet we do not copy that complexity into a 747 to enable it to fly. In the same way, the location of a synapse on the dendritic spine may not be all that significant. The question is, will we need to build a human brain in exactly the configuration of a biological brain? When we start training small modules for specific tasks, and putting these modules together to build interactive, learning robots, we may find that smaller synthetic brains suffice. Intelligence is not a function of brain size, or an elephant, with ~2 × 10^11 neurons, would be far smarter than a human being; a whale has a brain that is five times the size of a human brain. Alex the parrot accomplished more complex tasks with only 40 grams of brain than a monkey with 400 grams. Intelligence is a function of brain structure and of neuron densities in specific modules, not of size.
More information is available in my book “Higher Intelligence” that is available from this site.