Thursday, February 15, 2001



Matter will become software
Nano Soft Reports

An Artificial Brain: Using Evolvable Hardware Techniques to Build a 75 Million Neuron Artificial Brain to Control the Many Behaviors of a Kitten Robot

By Hugo de Garis, Michael Korkin

It is fitting that in this first year of the new millennium a radical new technology is making its debut, one that will allow humanity to build artificial brains, an enterprise which may define and color the 21st century. This technology is called "Evolvable Hardware" (or just "E-Hard" for short). Evolvable hardware applies genetic algorithms (simulated Darwinian evolution) to the configuration of programmable logic devices (PLDs, programmable hardware), allowing electronic circuits to be evolved at electronic speeds and at complexity levels beyond the intellectual design limits of human electronic engineers. Tens of thousands (and eventually far more) of such evolved circuits can be combined to form humanly specified artificial brain architectures.

In the late 1980s, I (de Garis) began playing with genetic algorithms and their application to the evolution of neural networks. A genetic algorithm simulates the evolution of a system using a Darwinian "survival of the fittest" strategy. There are many variations of genetic (evolutionary) algorithms. One of the simplest uses a population of bitstrings (strings of 0s and 1s) called "chromosomes" (by analogy with molecular biology) to code for solutions to a problem. Each bitstring chromosome is decoded and applied to the problem at hand. The quality of the solution specified by the chromosome is measured and given a numerical score, called its "fitness". Each member of the population of competing chromosomes is ranked according to its fitness. Low scoring chromosomes are eliminated. High scoring chromosomes have copies made of them (their "children" in the next "generation"). Hence only the fittest survive. Random changes, called "mutations", are made to the children. In most cases a mutation causes the fitness of the mutated chromosome to decrease, but occasionally the fitness increases, making the child chromosome fitter than its parent (or parents, if two parents combine bits "sexually" to produce the child's chromosome). This fitter child chromosome will eventually force its less fit parents out of the population in future generations, until it in turn is forced out by its own fitter offspring or the fitter offspring of other parents. After hundreds of generations of this "test, select, copy, mutate" cycle, systems can be evolved quite successfully to perform according to the desired fitness specification.
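As a concrete illustration, the "test, select, copy, mutate" cycle just described fits in a few lines of Python. This is a toy sketch, not the algorithm used in our work: the "one-max" fitness function (count the 1 bits), the population size and the mutation rate are all arbitrary illustrative choices.

```python
import random

def evolve(fitness, length=20, pop_size=20, generations=100, mutation_rate=0.05):
    """Minimal generational genetic algorithm over bitstring chromosomes."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Test and select: rank by fitness, eliminate the low-scoring half.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Copy and mutate: each survivor gets a child with a few bits flipped.
        children = [[bit ^ (random.random() < mutation_rate) for bit in p]
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness function: count of 1 bits ("one-max"); the all-ones string is optimal.
best = evolve(fitness=sum)
```

Because the fittest survivors are carried over unchanged, the best fitness in the population never decreases, and the occasional lucky mutation ratchets it upward.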

Neural networks are interconnected nets of simulated brain cells. An individual simulated brain cell (neuron) receives signals from neighboring neurons, which it "weights" by multiplying the incoming signal strength Si by a numerical weighting factor Wi, to form the product Si*Wi. The sum of all the incoming weighted signals is formed and compared to the neuron's numerical threshold value T. If the sum has a greater value than T, then the neuron will "fire" an output signal whose strength depends on how much greater the sum is than the threshold T. The output signal travels down the neuron's outward branching pathway called an "axon". The branching axon connects and transmits its signal to other branching pathways called "dendrites", which transmit the signal to other neurons. By adjusting the weighting factors and by connecting up the network in appropriate ways, neural networks can be built which map input signals to output signals in desired ways.
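The weight-sum-threshold rule just described can be written down directly. The function below is a minimal sketch (the names and the simple "excess over T" output rule are illustrative choices, not a fixed standard):

```python
def neuron_output(signals, weights, threshold):
    """Single model neuron: weight each incoming signal, sum the products,
    and fire only if the sum exceeds the threshold T, with output strength
    growing with how far the sum exceeds T."""
    total = sum(s * w for s, w in zip(signals, weights))
    return total - threshold if total > threshold else 0.0

# Two inputs of strength 1.0 with weights 0.5 and 0.4 give a weighted sum
# of 0.9, which exceeds T = 0.7, so the neuron fires (strength about 0.2).
strength = neuron_output([1.0, 1.0], [0.5, 0.4], 0.7)
```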

The first attempts to wed genetic algorithms (GAs) to neural nets (NNs) restricted themselves to static (constant valued) inputs and outputs (no dynamics). This restriction struck me as unwarranted, so I began experimenting with dynamic inputs and outputs. The first successful attempt in this regard managed to get a pair of stick legs to walk - the first evolved, neural net controlled, dynamic behavior. If one can evolve one behavior, one can evolve many, so it became conceivable to imagine a whole library of evolved behaviors, for example, getting a software simulated quadruped to walk straight, to turn left, to turn right, to peck at food, to mate, etc., with one separately evolved neural net circuit or module per behavior. Behaviors could be switched smoothly by feeding the outputs of the module generating the earlier behavior into the inputs of the module generating the later behavior. By evolving modules that could detect signals coming from the environment, e.g. signal strength detectors, frequency detectors, motion detectors, etc., behaviors could be changed at appropriate moments. The simulated quadruped ("Lizzy") could begin to show signs of intelligence, due to possessing an artificial nervous system of growing sophistication. The idea began to emerge in my mind that it might be possible to build artificial brains, if only one could somehow put large numbers of evolved modules together to function as an integrated whole. I began to dream of building artificial brains.

However there was a problem with the above approach. Every time a new (evolved neural net) module was added to the simulation (on a Mac 2 computer) in the early 1990s, the overall simulation speed slowed, until it was no longer practical to have more than a dozen modules. Somehow the whole process needed to be speeded up, which led to the dream of doing it all in hardware, at hardware speeds.


Evolvable Hardware

A visit to an electronic engineering colleague at George Mason University (GMU) in Virginia, USA, in the summer of 1992, led me to hear about FPGAs (Field Programmable Gate Arrays) for the first time. An FPGA is an array (a matrix) of electronic logic blocks, whose Boolean (and, or, not) functions, interblock connections and I/O connections can be programmed, or "configured", to use the technical term, by individual users, so if a logic designer makes a mistake, it can be quickly and easily corrected by reprogramming. FPGAs are very popular with electronic engineers today. Some FPGAs are SRAM (static RAM) based, and can therefore be reprogrammed an unlimited number of times. If the FPGA can also accept random configuring bitstrings, then it becomes a suitable device for evolution. This epiphany made me very excited in 1992, because I realized that it might be possible to evolve electronic neural circuits at electronic speeds and hence overcome my problem of slow evolution and execution speeds in software on a personal computer. I began preaching the gospel of "evolvable hardware", as I called it, to my colleagues in the field of "evolutionary computation" (EC), which alternatively might be relabeled "evolvable software", or "E-Soft". Slowly the idea caught on, so that by the year 2000 there had been several world conferences, and academic journals devoted to the topic were starting to appear.

The E-Hard field has been stimulated by the presence of a particular evolvable chip family, the XC6200 series, manufactured by Xilinx, a Silicon Valley, California company. This family of chips (with a different number of logic blocks per chip type) had several advantages over other reconfigurable chip families. Firstly, the architecture of the chip was public knowledge (not a company secret), allowing researchers to experiment with it. Secondly, it could accept random configuring bitstrings without blowing up (important for evolution, which uses random bitstrings). Thirdly, and very importantly, it was partially reconfigurable at a very fine grained level, meaning that if one mutated only a few bits in a long configuring bitstring, only the corresponding components of the circuit were changed (reconfigured), without having to reconfigure the whole circuit again. This third feature allowed for rapid reconfiguration, which made the chip the favorite amongst E-Harders. Unfortunately, Xilinx has stopped manufacturing the XC6200 series and is concentrating on its new megagate chip "Virtex", but the Virtex chip is less finely reconfigurable than the XC6200 family, so E-Harders are feeling a bit out in the cold. Hopefully, Xilinx and similar manufacturers will see the light and make future generations of their chips more "evolvable", by giving them a higher degree of fine grained reconfigurability. As will be seen below, we chose a Xilinx chip, the XC6264, as the basis for our work on building an artificial brain (before supplies ran out). The underlying methodology of this work is based on "evolvable hardware".


Neural Network Models

Before discussing the evolution of a neural model in hardware at hardware speeds, one first needs to know what the neural model is. For years, I had a vague notion of being able to put millions of artificial neurons into gigabytes of RAM and running that huge space as an artificial brain. RAM memory is fairly cheap, so it seemed reasonable to somehow embed large numbers of neural networks into RAM, but how? The solution I chose was to use cellular automata (CAs). Two dimensional (2D) CAs can be envisioned as a multicolored chess board, all of whose squares can change their color at the tick of a clock according to certain rules. These cellular automata color (or state) change rules take the following form. Concentrate on a particular square, which has the color orange, let's say. Look at the colors of its 4 neighboring squares. If the Upper square is red, the Right hand square is yellow, the Bottom square is blue, and the Left hand square is green, then at the next clock tick the Central orange square will become brown. This rule can be expressed succinctly in the form -

IF (C=orange) & (U=red) & (R=yellow) & (B=blue) & (L=green) THEN (C=brown),

or even more succinctly, in the form -

orange.red.yellow.blue.green -> brown
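Such a rule is naturally encoded as an entry in a lookup table keyed on the five colors. The following toy 2D sketch, with a hypothetical one-rule table and grid, shows the synchronous update at each clock tick (the actual model used thousands of such rules):

```python
# State-transition table keyed on (Center, Up, Right, Bottom, Left).
rules = {
    ("orange", "red", "yellow", "blue", "green"): "brown",
}

def step(grid, rules):
    """One synchronous update: every square changes color at the same clock
    tick. Squares with no matching rule keep their color."""
    h, w = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for i in range(h):
        for j in range(w):
            key = (grid[i][j],
                   grid[(i - 1) % h][j],    # Up (edges wrap around)
                   grid[i][(j + 1) % w],    # Right
                   grid[(i + 1) % h][j],    # Bottom
                   grid[i][(j - 1) % w])    # Left
            nxt[i][j] = rules.get(key, grid[i][j])
    return nxt
```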

Using thousands of such rules, it was possible to make CAs behave as neural networks which grew, signaled and evolved [see figs. 1,2,3]. Early experiments showed that these circuits could be evolved to perform such tasks as generating an output signal that oscillated at an arbitrarily chosen frequency, growing a maximum number of synapses in a given volume, etc. However, the large number of rules needed to make this CA based neural network function was a problem. The 2D version took 11,000 rules. The 3D version took over 60,000 rules. There was no way that such large numbers of rules could be implemented directly in electronics, evolving at electronic speeds. An alternative model was needed with very few rules, so few that they could be implemented directly in FPGAs, enabling neural net circuits to be evolved in seconds rather than the days often required by software evolution methods.




Fig. 1 Older Complex Model of Cellular Automata Based Neural Network, Early Growth





Fig. 2 Older Complex Model of Cellular Automata Based Neural Network, Saturated Growth





Fig. 3 Older Complex Model of Cellular Automata Based Neural Network, Neural Signaling



The simplified model will be described in more detail, since it is the model actually implemented in our evolvable hardware. It is a 3D model, again based on cellular automata, but much simpler. A neuron is modeled by a single 3D CA cell. The CA trails (the axons and dendrites) are only 1 cell wide, instead of the 3 cells of the earlier model. The growth instructions are distributed throughout the 3D CA space initially [see fig. 4] instead of being passed through the CA trails [as in figs. 1,2,3]. The neural signaling in the newer model is 1 bit only, compared to the 8 bit signals of the earlier model. Such restrictions lower the evolvability of the circuits, but in practice one finds that the evolvability is still adequate for most purposes. In the growth phase, the first thing that is done is to position the neurons. For each possible position in the space where a neuron can be placed, a corresponding bit in the chromosome is used. If that bit is a 1, then a neuron is placed at that position. If the bit is a 0, then no neuron is placed at that position.




Fig. 4 Newer Simple Model of Cellular Automata Based Neural Network, Saturated Growth



Every 3D CA cell is given 6 growth bits from the chromosome, one bit per cubic face. At the first tick of the growth clock, each neuron checks the bit at each of its 6 faces. If a bit is a 1, the neighboring blank cell touching the corresponding face of the neuron is made an axon cell. If the bit is a 0, the neighboring cell is made a dendrite cell. Thus a neuron can grow a maximum of 6 axons or 6 dendrites, and any combination in between. At the next tick of the clock, each blank cell looks at the bit of the face of the neighbor that touches it. If that bit is a 1, then the blank cell becomes the cell type (axon or dendrite) of the touching neighbor. The blank cell also sets a pointer towards its parent cell - for example, if the parent cell lies to the west of the blank cell, the blank cell sets an internal pointer which says "west". These "pointers to parents" (PPs) are used during the neural signaling phase to tell the 1 bit signals which way to move as they travel along axons and dendrites.
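The growth rules above can be sketched in software. The following is a simplified illustration, not the CBM implementation: it uses a small toy cube, grows a frontier of cells rather than updating every cell synchronously as the hardware does, and invents a `face_bits` structure to stand in for the chromosome's growth bits.

```python
from collections import defaultdict

SIZE = 8   # toy cube; the real CBM space is 24*24*24 cells
FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def _neighbor(p, d):
    # Growth wraps around at the cube faces (toroidal space).
    return tuple((a + b) % SIZE for a, b in zip(p, d))

def grow(neurons, face_bits, ticks):
    """Growth-phase sketch. `neurons` is a set of cell coordinates;
    `face_bits[cell][f]` is that cell's growth bit for face f."""
    kind = {p: "neuron" for p in neurons}    # blank cells are simply absent
    parent = {}                              # "pointer to parent" (PP) direction
    frontier = set()
    # First tick: each face of a neuron seeds an axon (bit 1) or dendrite (bit 0).
    for p in neurons:
        for f, d in enumerate(FACES):
            q = _neighbor(p, d)
            if q not in kind:
                kind[q] = "axon" if face_bits[p][f] else "dendrite"
                parent[q] = tuple(-c for c in d)   # points back toward the parent
                frontier.add(q)
    # Later ticks: a blank cell adopts the type of a grown neighbor whose
    # growth bit for the shared face is 1, and records its parent pointer.
    for _ in range(ticks):
        new_frontier = set()
        for p in frontier:
            for f, d in enumerate(FACES):
                q = _neighbor(p, d)
                if q not in kind and face_bits[p][f]:
                    kind[q] = kind[p]
                    parent[q] = tuple(-c for c in d)
                    new_frontier.add(q)
        frontier = new_frontier
    return kind, parent

# One neuron at the origin whose growth bits are all 1: every trail it grows
# is an axon, extending one cell per tick.
all_ones = defaultdict(lambda: [1, 1, 1, 1, 1, 1])
kind, parent = grow({(0, 0, 0)}, all_ones, ticks=3)
```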

This cellular growth process continues at each clock tick for several hundred ticks until the arborization of the axons and dendrites is saturated in the space. In the hardware implementation of this simplified model, the CA space consists of a 24*24*24 cube (the "macrocube") of 3D CA cells, i.e. roughly 14000 of them. At the 6 faces of the macrocube, axon and dendrite growth wraps around to the opposite macroface, thus forming a "toroidal" (doughnut) shape. There are prespecified input and output points (188 maximum input points, and 4 maximum output points, although in practice usually only one output point is used, to foster evolvability). The user specifies which input and output points are to be used for a given module. At an input point, an axon cell is set which grows into the space. Similarly for an output point, where a dendrite cell is set.

In the signaling phase, the 1 bit neural signals move in the same direction in which axon growth occurred, and in the opposite direction to which dendrite growth occurred. Put another way, a signal follows the direction of the pointers to parents (PPs) if it is moving in a dendrite, and the opposite direction of the pointers to parents if it is moving in an axon.

An input signal coming from another neuron or the outside world travels down the axon until the axon collides with a dendrite. The collision point is called a "synapse". The signal transfers to the dendrite and moves toward the dendrite's neuron. Each face of the neuron cube is genetically assigned a sign bit. If this bit is a 1, an arriving signal will add 1 to the neuron's 4-bit counter. If the bit is a 0, the signal will subtract 1 from the neuron's counter. If the counter value exceeds a threshold value (usually 2), the counter resets to zero and the neuron "fires", sending a 1-bit signal out along its axons at the next clock tick.
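The counter rule can be sketched as follows (an illustration only; the hardware uses a 4-bit counter, while this sketch uses a plain integer):

```python
def receive_signal(counter, sign_bit, threshold=2):
    """One 1-bit signal arriving at a neuron face. The face's genetically
    assigned sign bit decides whether to add 1 or subtract 1; crossing the
    threshold resets the counter and fires the neuron."""
    counter += 1 if sign_bit else -1
    if counter > threshold:
        return 0, True    # counter resets to zero; neuron fires next tick
    return counter, False

# Three excitatory (sign bit 1) signals in a row: the third pushes the
# counter past the threshold of 2, so the neuron fires and resets.
counter, fired = 0, False
for _ in range(3):
    counter, fired = receive_signal(counter, 1)
```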


The CAM-Brain Machine (CBM)

The evolvable hardware device that implements the above neural net model is called a CAM-Brain Machine (CBM). CAM stands for Cellular Automata Machine, and the term CAM-Brain implies that an artificial brain is to be embedded inside cellular automata. The CBM is a piece of special hardware that evolves neural circuits very fast. It consists largely of 72 of Xilinx's XC6264 (programmable hardware) chips, which together can evolve a neural network circuit module in about 1 second. The CBM executes a genetic algorithm on the evolving neural circuits, using a population of 100 or so of them and running through several hundred generations, i.e. tens of thousands of circuit growths and fitness measurements. Once a circuit has been evolved successfully, it is downloaded into a gigabyte of RAM memory. This process occurs up to 64000 times, resulting in 64000 downloaded circuit modules in the RAM. A team of "BAs" (brain architects) has already decided which modules are to be evolved, what their individual functions are, and how they are to interconnect. Once all the modules are evolved and their interconnections specified, the CBM then functions in a second mode. It updates the RAM memory containing the artificial brain at a rate of 130 billion cellular automata cell updates a second. This is fast enough for real time control of the kitten robot "Robokitty", described below.

The CBM consists of 6 main components or units described briefly here.

i) Cellular Automata Unit

The Cellular Automata Unit contains the cellular automata cells in which the neurons grow their axons and dendrites, and transmit their signals.

ii) Genotype/Phenotype Memory Unit

The Genotype Memory Unit contains the 100 Kbit chromosomes which determine the growth of the neural circuits. The Phenotype Memory Unit stores the states of the CA cells (blank, neuron, axon, dendrite).

iii) Fitness Evaluation Unit

The Fitness Evaluation Unit saves the output bits, converts them to an analog form and then evaluates how closely the target and the actual outputs match.

iv) Genetic Algorithm Unit

The Genetic Algorithm Unit performs the GA on the population of competing neural circuits, eliminating the weaker circuits and reproducing and mutating the stronger circuits.

v) Module Interconnection Memory Unit

The Module Interconnection Memory Unit stores the BA's (brain architect's) inter-module connection specifications, for example, "the 2nd output of module 3102 connects to the 134th input of module 63195".

vi) External Interface Unit

The External Interface Unit controls the input/output of signals from/to the external world, e.g. sensors, camera eyes, microphone ears, motors, antenna I/O, etc.
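The example connection quoted under (v) suggests how the interconnection memory can be pictured as a simple lookup table. The data structure below is a hypothetical sketch, not the CBM's actual memory layout; the module and port numbers are the example from the text.

```python
# (module, output_port) -> (module, input_port)
connections = {
    (3102, 2): (63195, 134),   # output 2 of module 3102 -> input 134 of module 63195
}

def route(connections, module, output_port):
    """Return the (module, input_port) a given output is wired to, or None."""
    return connections.get((module, output_port))
```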

The CBM's shape and color are symbolic (see figs. 5,6). The curved outer layer represents a slice of human cortex. The grey portion, which contains the electronic boards, represents the "grey matter" (neural bodies) of the brain, and the white portion, which contains the power supply, represents the "white matter" (axons) of the brain.

The CBM and its supporting software packages were implemented by Genobyte Inc. in 1999, and actual research use of the machine began in December 1999. The results of this testing and the experience gained in using the CBM to design artificial brain architectures, may form the contents of a second article in Scientific American in 2+ years. Perhaps we will call it "Artificial Brain Architectures".




Fig. 5 CAM-Brain Machine (CBM) with Cover





Fig. 6 CAM-Brain Machine (CBM) Showing Slots for 72 FPGA Circuit Boards



Evolved Modules

Since the neural signals in the model implemented by the CBM are single bits, the inputs and outputs of a neural module also need to be in 1 bit form. Fig. 7 shows a target (desired) output binary string and the best evolved (software simulated) result, showing that the evolution of such binary strings is possible using our model. To increase the usefulness of the CBM, we created algorithms which convert an arbitrary analog curve into its corresponding bit string (series of 1s and 0s) and vice versa, thus allowing users to think entirely in analog terms. Analog inputs are converted automatically into binary form and input to the module. Similarly, the binary output is converted to analog form and compared to the analog target curve. Fig. 8 shows a random analog target curve and the best evolved curve. Note that the evolved curve followed the target curve fairly well only for a limited amount of time, illustrating the limits of the "module evolvable capacity" (MEC). To generate analog target curves of unlimited time lengths (needed to generate the behaviors of the kitten robot over extended periods of time), multi module systems will need to be designed which use a form of time slicing, with one module generating one time slice's target output.
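One plausible such conversion, shown here purely as an illustrative sketch (not necessarily the scheme implemented in the CBM software), decodes a bit string via a sliding-window spike count and encodes an analog curve greedily against the same window:

```python
def bits_to_analog(bits, window=8):
    """Decode a 1-bit spike train to an analog curve: each point is the
    average of the last `window` bits (a sliding-window spike count)."""
    return [sum(bits[max(0, i - window + 1): i + 1]) / window
            for i in range(len(bits))]

def analog_to_bits(curve, window=8):
    """Greedy inverse: emit a 1 at each step unless doing so would push the
    window average above the target curve at that point."""
    bits = []
    for i, target in enumerate(curve):
        bits.append(1)   # tentatively emit a 1...
        if sum(bits[max(0, i - window + 1):]) / window > target:
            bits[-1] = 0  # ...but back off if the window average overshoots
    return bits
```

For a constant target of 0.5, the encoder settles into a bit pattern whose window average matches the target exactly once the window has filled.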






Target       000000000000000000000000000000 11111111111111111111
Evolved      000000000000000000000000000000 00011111111111111111

Target ctd.  000000000000000000000000 1111111111111111 00000000000000000000
Evolved ctd. 100000000000000000000000 0111111111111111 10000000000000000000






Fig. 7 Binary Target Output and Best Binary Evolved Output





Fig. 8 Analog Target Output and Best Analog Evolved Output



We have software simulated the evolution of many modules (for example, 2D static and dynamic pattern detectors, motion controllers, decision modules, etc). Experience shows us that their "evolvability" is usually high enough to generate enthusiasm. For EEs (evolutionary engineers) the concept of evolvability is critical.


The Kitten Robot "Robokitty"


In 1993, the year the CAM-Brain Project started, the idea that an artificial brain containing a billion neurons could be built by 2001, in an era in which most neural nets contained tens to hundreds of neurons, seemed ludicrous. Early scepticism was strong. To silence our critics, we needed a means to show that an artificial brain is a valid concept. We chose to have the artificial brain control hundreds of behaviors of a cute life-sized robot kitten, whose mechanical design is shown in fig. 9. This robot kitten, "Robokitty", will have some 23 motors, and will send and receive radio signals to and from the CBM via antenna. The behaviors of the kitten are evolved in the commercial "Working Model 3D" software (from MSC Working Knowledge, Inc.) and the results then used as target waveforms for the evolution of the control modules in the CBM. Evolving motions in software at software speeds goes against the grain of the evolvable hardware philosophy, but was felt to be unavoidable for practical reasons. Fortunately, the vast majority of modules will be evolved at electronic speeds. The kitten robot should be running around the lab, controlled by its artificial brain, in about 2 years. Judging by its many behaviors and the "intelligence" of its sensory and decision systems, it should be obvious to a casual observer that "it has a brain behind it", making the robot behave in as kitten-like a manner as possible.




Fig. 9 Mechanical Design of Robot Kitten "Robokitty"



Short and Long Term Future

The immediate short term challenge, for the next 2 years or so, will be to use the CBM to create the artificial brain's modular architecture to control the robokitten. The very concreteness of the task - getting the kitten to execute its many hundreds of behaviors, and to decide when to switch between them based on decisions coming from its sensory systems and internal states - will require a major effort, since 64000 modules need to be evolved. Of course, now that work with the CBM has begun (from Christmas 1999 onwards), initial efforts will be with single modules, to see what the CBM can evolve. Once experience with single module evolution has been gained, interconnected multimodule systems will be built, with 10s, 100s, 1000s and 10,000s of modules, up to the limit of 64000 modules. If this job is to be completed in 2 years, assuming that it takes on average 30 minutes for an evolutionary engineer (EE) to dream up the function and fitness measure of a module, then a design team of about 16 people will be needed. A million module, 2nd generation artificial brain will require roughly 250 EEs. Thus the problem of building such a large artificial brain is not only conceptual but managerial as well. We envisage that within 5-10 years, if the first generation brain is a success, large national organizations devoted to brain building will be created, comparable to the way rocketry grew from Goddard's 2 meter rockets, built by one man, into NASA, with its tens of thousands of engineers and budget of billions of dollars. We give such national scale brain building projects the labels A-Brain Project (America's national brain building project), E-Brain Project (Europe's), C-Brain Project (China's), J-Brain Project (Japan's), etc. Initially, these artificial brains will probably be used to create increasingly intelligent robotic pets. Later they may be used to control household cleaning robots, soldier robots, etc.
Brain based computing may generate a trillion dollar world market within 10 years or so.

In the long term, 50-100 years from now, the situation becomes far more alarming. 21st century technologies will allow 1 bit per atom memory storage, and femtosecond (a thousandth of a trillionth of a second) switching times (bit flipping). Reversible logic will allow heatless computing, and the creation of 3D circuitry that does not melt. Potentially, asteroid sized, self assembling quantum computers could be built with a bit flip rate of 10^55 per second. The estimated human computing capacity is a mere 10^16 bit flips a second, i.e. roughly a thousand trillion trillion trillion (10^39) times less. For brain builders with a social conscience, the writing is on the wall. I (de Garis) feel that the global politics of our new century will be dominated by the issue of species dominance: should humanity build godlike "artilects" (artificial intellects) or not? I foresee a major war between two human groups: the "Cosmists", who will favor building artilects, for whom such an activity is a science-compatible religion and the big-picture destiny of the human species, and the "Terrans", who will fear that one day artilects, for whatever reason, may decide to exterminate the human race. For the Terrans, the only way to ensure that such a risk is never taken is to insist that artilects are never built. In the limit, to preserve the human species, the Terrans would exterminate the Cosmists if the latter threaten to build artilects. With 21st century weaponry, and extrapolating up the graph of the number of deaths in major wars over time, we arrive at gigadeath. One of the major tasks of today's brain builders is to persuade humanity that such a scenario is not a piece of dismissable science fiction, but a frightening possibility. Some brain builders will stop their work due to such worries. Others will continue, driven by the magnificence of their goal - to build artilect gods.
When the nuclear physicists in the 1930s were predicting that a single nuclear bomb could wipe out a whole city, most people thought they were crazy, but a mere 12 years after Leo Szilard had the idea of a nuclear chain reaction, Hiroshima was vaporized. The decision whether to build artilects or not, will be the toughest that humanity will have to face in our new century. Humanity will have to choose between "building gods, or building our potential exterminators".

-------------

The Authors

Prof. Dr. Hugo de Garis

Prof. Dr. Hugo de Garis was head of the Brain Builder Group at ATR Labs in Kyoto, Japan, from 1993 until February 2000. Since then he has continued the same work at STARLAB in Brussels, Belgium (http://www.starlab.org). He is the father of the rapidly growing research field of "evolvable hardware", a concept he got off the ground in 1992. He uses evolvable hardware techniques to evolve neural network circuit modules at electronic speeds using FPGA based hardware. He is assembling 64000 of these modules in RAM to build a 75 million neuron artificial brain. He obtained his PhD in artificial nervous systems in 1991 from the University of Brussels (ULB), Belgium, and is an adjunct professor in computer science at Utah State University (USU) in the US, and at Wuhan University, China.

Dr. Michael Korkin

Dr. Michael Korkin received his M.S. degree in Computer Systems Engineering from MIIT, Moscow, Russia, in 1982, and his Ph.D. degree in Digital Image Processing from MPI, Moscow, in 1988. In 1991-1997 he worked as a Senior Hardware Engineer at a medical imaging firm in Denver, Colorado, USA. He founded his company Genobyte Inc. (http://www.genobyte.com) in 1997 in Boulder, Colorado. His primary research interests are evolvable hardware, artificial brain building, and neuroscience.


Further Reading

"Building an Artificial Brain Using an FPGA Based CAM-Brain Machine", Hugo de Garis, Michael Korkin, Felix Gers, Eiji Nawa, Michael Hough, Applied Mathematics and Computation Journal, Special Issue on "Artificial Life and Robotics, Artificial Brain, Brain Computing and Brainware", North Holland, 1st quarter, 2000.

"The CAM-Brain Machine (CBM) : Real Time Evolution and Update of a 75 Million Neuron FPGA-Based Artificial Brain", Hugo de Garis, Michael Korkin. Journal of VLSI Signal Processing Systems (JVSPS), Special Issue on Custom Computing Technology, 1st quarter, 2000.

"Review of Proceedings of the First NASA/DoD Workshop on Evolvable Hardware", Hugo de Garis, IEEE Transactions on Evolutionary Computation, Nov 1999, Vol. 3, No. 4.

A comprehensive list of the authors' journal articles, conference papers, book chapters and world media reports, can be found at their respective web sites.
http://www.starlab.org,
http://foobar.starlab.net/~degaris
http://www.genobyte.com
