
Feb. 1, 2006: SBIR funding from AFRL.

2. Hardware Considerations:

2.1. Engineering and Size Considerations.

We believe there is a size, connectivity, and computational power “sweet spot” at about the level of the parameters of the network-of-networks model. If we equate an elementary attractor network with 10^4 actual neurons, that network might display perhaps 50 well-defined attractor states. Each elementary network might connect to 50 others through state connection matrices. A brain-sized artificial system might therefore consist of 10^6 elementary units, with about 10^11 total numbers (0.1 terabyte) involved in specifying the connections. Probably the most difficult technical issue from the point of view of computer science will be the integration of memory with the individual CPUs, a design problem in all forms of supercomputing. Assume each elementary unit has roughly the processing power of a simple CPU. If we further assume that 100 to 1,000 CPUs can be placed on a chip, a brain-sized system would require a total of 1,000 to 10,000 chips. These numbers are large but within the upper bounds of current technology. Smaller systems are, of course, easier to build.
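As a check on this arithmetic, the estimates above can be worked through directly (the variable names are ours; the figures are those given in the text):

```python
# Back-of-envelope sizing for the network-of-networks "sweet spot".
STATES_PER_UNIT = 50     # well-defined attractor states per elementary network
FANOUT = 50              # elementary networks each unit connects to
UNITS = 10**6            # elementary units in a brain-sized system

# Each inter-unit link is a state connection matrix of ~50 x 50 numbers.
numbers_per_link = STATES_PER_UNIT * STATES_PER_UNIT   # 2,500
total_numbers = UNITS * FANOUT * numbers_per_link      # ~10^11

cpus_per_chip_low, cpus_per_chip_high = 100, 1000
chips_high = UNITS // cpus_per_chip_low                # 10,000 chips
chips_low = UNITS // cpus_per_chip_high                # 1,000 chips

print(f"total connection numbers: {total_numbers:.2e}")
print(f"chips needed: {chips_low:,} to {chips_high:,}")
```

At one byte per connection number this is roughly the 0.1 terabyte quoted above.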

2.2. Proposed Basic System Architecture.

Our basic computer architecture consists of a potentially huge number (millions) of simple CPUs connected to each other and arranged in a two-dimensional array. The 2-D arrangement is important because it is simple, cheap to implement, and corresponds to the actual 2-D anatomy of cerebral cortex. This intrinsic 2-D topography lets the system make direct use of the many 2-D spatial data representations cortex uses for vision, audition, the skin senses, and motor control.

2.2.1. CPU Communications.

The brain has extensive local and long-range communications. The brain is unlike a traditional computer in that its program and its computation are determined primarily by its connections. Details of these relatively sparse interconnections are critical to every aspect of brain function. And for our system, the details of CPU connectivity are equally critical.

2.2.2. Short-range connections.

There is extensive local connectivity in cortex. An artificial system has many options. The simplest is NEWS connectivity, in which each CPU connects only to its four nearest neighbors: North, East, West, and South. Our experience has been that expanding NEWS to include at least the diagonal CPUs works significantly better. The easiest useful CPU connection pattern to implement is shown in the left figure. A richer set of local and longer-range connections (see right figure) would be even more desirable. Interestingly, this last connection pattern duplicates one of the common connection patterns seen in cortex.
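The two local connection patterns can be sketched as neighbor offsets on the 2-D array (a minimal illustration; the function and names are ours, not part of the design):

```python
# NEWS = North, East, West, South. Adding the four diagonals gives the
# 8-neighbour pattern that, in our experience, works significantly better.
NEWS = [(-1, 0), (0, 1), (0, -1), (1, 0)]
NEWS_PLUS_DIAGONALS = NEWS + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbours(row, col, rows, cols, offsets):
    """CPUs directly wired to the CPU at (row, col) on a rows x cols array."""
    return [(row + dr, col + dc) for dr, dc in offsets
            if 0 <= row + dr < rows and 0 <= col + dc < cols]

# An interior CPU sees 4 or 8 neighbours; edge and corner CPUs see fewer.
print(len(neighbours(5, 5, 10, 10, NEWS)))                  # 4
print(len(neighbours(5, 5, 10, 10, NEWS_PLUS_DIAGONALS)))   # 8
print(len(neighbours(0, 0, 10, 10, NEWS_PLUS_DIAGONALS)))   # 3
```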

2.2.3. Long-range connections.

Many of the most important operations in brain computation involve pattern association, where an input pattern is transformed into an associated output pattern that (as Aristotle observed) can be very different from the input. A sensory stimulus connected to a motor response would fall into this category. Anatomically, one group of neurons connects to another group. The functional connections consist of information transmitted through physical links, that is, axons: thin cell processes a few microns across that can be over a meter long in humans. Some neuroscientists have suggested that the cortex has limited “projection depth”, that is, perhaps seven to nine pattern associations occur moving from sensory input to motor output (Creutzfeldt, 1983), though complex feed-forward and feed-back anatomical projections in cortex complicate the picture.
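Pattern association of this kind can be sketched with a simple outer-product (linear associator) rule, in which a connection matrix maps an input pattern to a stored output that need not resemble it. This is an illustration of the idea only, not the project's actual learning rule:

```python
# Store the pair (f -> g) in a connection matrix W via the outer product,
# then recall g by applying W to f. Patterns and names here are ours.

def outer(g, f):
    """Connection matrix W with W[i][j] = g[i] * f[j]."""
    return [[gi * fj for fj in f] for gi in g]

def associate(W, f):
    """Matrix-vector product: the output pattern evoked by input f."""
    return [sum(wij * fj for wij, fj in zip(row, f)) for row in W]

f = [1.0, 0.0, 0.0]       # "sensory" input pattern (unit length)
g = [0.0, 2.0, -1.0]      # arbitrary, quite different "motor" output pattern
W = outer(g, f)

print(associate(W, f))    # [0.0, 2.0, -1.0] : the input recalls the output
```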

Critical hardware and software issues arise in the way long-range connections are handled in our artificial system. It would be difficult and expensive to physically interconnect large numbers of CPUs. Long-range busses are slow and complex and their use would violate the desired simplicity of our design. Exotic hardware solutions like optical busses or wireless interconnects are not presently viable.

Therefore our design decision is to use only local inter-CPU connections and to build more general pattern association into the system with software. Such a solution trades hardware simplicity against software complexity. The price paid is significant communication delay and a substantial increase in the volume of data communication. Because connections in the brain are sparse, the amount of data communication may be manageable for most brain-like computations. However, systems where this restriction does not hold, for example, general matrix operations, may not compute rapidly because of communication delays, and our architecture would be less valuable for this class of problems.
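The communication delay of this local-only scheme can be estimated directly: a "long-range" association must be relayed hop-by-hop across the array, so the minimum delay grows with grid distance between source and destination. A small sketch, assuming one hop per time step (the functions are ours):

```python
# Minimum hop counts to relay data between two CPUs on the 2-D array.
# With NEWS links only, the best case is the Manhattan distance; adding
# diagonal links reduces it to the (smaller) Chebyshev distance.

def hops_news(src, dst):
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def hops_with_diagonals(src, dst):
    return max(abs(src[0] - dst[0]), abs(src[1] - dst[1]))

src, dst = (0, 0), (300, 400)
print(hops_news(src, dst))             # 700 hops
print(hops_with_diagonals(src, dst))   # 400 hops
```

Delays of this order, paid on every long-range association, are the price of avoiding long busses; sparse brain-like traffic keeps them tolerable, dense matrix traffic does not.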

2.2.4. CPU Properties.

The CPUs have to handle two quite different sets of operations. First, and perhaps most critical, are communications with other CPUs. Much of the computational time and effort in brain-based computation is spent on data communication: getting the data to where it needs to be. Second, when the data arrives, it is used for numerical computation. A very simple RISC-like instruction set should be more than adequate. However, we need real numbers for these calculations, not the single-bit CPUs that characterized the first Connection Machine and that led to severe software problems.
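The per-CPU duty cycle this implies is small: exchange values with neighbors, then perform simple real-number arithmetic on what arrived. A minimal sketch of the numerical half (ours, for illustration):

```python
# One CPU update step: accumulate weighted neighbour values into the local
# state. Only multiply-accumulate on real numbers is needed, which a very
# simple RISC-like instruction set handles easily; single-bit arithmetic
# would not suffice.

def cpu_step(incoming, weights, state):
    """Weighted sum of arriving neighbour values plus the current state."""
    total = state
    for value, w in zip(incoming, weights):
        total += w * value
    return total

print(cpu_step([2.0, 4.0, 0.5], [0.5, 0.25, 2.0], 0.0))  # 3.0
```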

2.2.5. Sensory Input Processing.

The brain is, above all, a sensory processor. Sensation and perception are more highly evolved, and presumably more bug-free, than the recently acquired cognitive abilities of our species. One virtue of the approximations we have made is that they provide a potential conceptual connection between the complex sensory preprocessing that we know occurs before cortex and the discrete, attractor-based cognitive computations occurring in cortex. Under this assumption, sensory preprocessing takes place largely independently of the cortex, can be optimized independently, and presents its results to the CPU array as a set of initial weights over the CPU attractor states.
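The interface this describes can be sketched very simply: a preprocessor, tuned on its own, hands each CPU an initial weighting over its attractor states, and the attractor computation starts from there. A hypothetical illustration (the function and normalization choice are ours):

```python
# Turn preprocessed sensory feature strengths into initial weights over a
# CPU's attractor states. Normalizing to sum to 1 is one simple choice.

def initial_state_weights(feature_strengths):
    """Map preprocessor outputs to initial attractor-state weights."""
    total = sum(feature_strengths)
    if total == 0:
        # No sensory evidence: start every attractor state equally weighted.
        return [1.0 / len(feature_strengths)] * len(feature_strengths)
    return [s / total for s in feature_strengths]

print(initial_state_weights([3.0, 1.0, 0.0]))  # [0.75, 0.25, 0.0]
```

The attractor dynamics within each elementary network would then sharpen these graded initial weights into one of its discrete stable states.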