Ersatz Brain Project

Feb. 1, 2006
SBIR funding from AFRL

1. Technical Issues:

1.1. First Extreme.

1.1.1. Biological Realism.

The human brain is composed of on the order of 10^10 neurons, connected together by at least 10^14 neural connections. These numbers are likely to be underestimates. Biological neurons and their connections are extremely complex electrochemical structures that require substantial computer power to model even in poor approximation. Worse, there is good evidence that, at least for the mammalian cerebral cortex, a bigger brain is a better brain. The more realistic the neuron approximation, the smaller the network that can be modeled.

1.1.2. Neural Networks.

Generic Neural Net Unit

The most successful brain-based computational models have been neural networks. These systems are built from simple approximations of biological neurons, basically nonlinear integrators of many inputs. (See Figure.) Even with these drastic approximations, such units can be used to build systems that can be made reasonably large, can be analyzed mathematically, can be simulated easily, and can display behavior complex enough to have successfully modeled a number of important aspects of human cognition, as well as supporting a number of practical applications.
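The behavior of such a generic unit can be sketched in a few lines. This is an illustrative approximation only: the text does not specify a particular nonlinearity, so a tanh squashing function is assumed here.

```python
import math

def unit_output(inputs, weights, bias=0.0):
    """Generic neural net unit: a nonlinear integrator of many inputs.
    Sums the weighted inputs, then passes the total through a squashing
    function (tanh is assumed here; other choices are common)."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(activation)

# A unit with three weighted inputs produces a value strictly between -1 and 1.
print(unit_output([1.0, 0.5, -0.2], [0.8, -0.3, 0.5]))
```

However simple, this is the building block from which the network models below are assembled: large systems are just many such units feeding one another.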

1.1.3. Network of Networks.

An intermediate-scale neural network model we have worked on here at Brown is the Network of Networks. It assumes that the basic computational element in brain-like computation is not the neuron but a small network of neurons. These small networks (conjectured to contain 10^3 to 10^4 neurons) are nonlinear dynamical systems whose behavior is dominated by their attractor states. Basing computation on network attractor states reduces the dimensionality of the system, provides a degree of intrinsic noise immunity, and allows interactions between networks to be approximated as interactions between attractor states. Basic modular interactions are similar to those of the generic neural net unit, except that scalar connection strengths are replaced by state interaction matrices. (See Figure.) There might be 10 to 100 attractors in a basic network. Because the attractors are derived from neuron responses, it is potentially possible to merge neuron-based preprocessing easily with attractor dynamics.
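The replacement of scalar weights by state interaction matrices can be sketched as follows. This is a toy illustration, not the actual model: the attractor patterns, the tiny 4-dimensional state, and the particular interaction matrix are all invented for the example.

```python
import numpy as np

# Three orthogonal attractor states for a module (toy 4-dimensional example;
# a real module would have many more attractors in a much higher dimension).
attractors = np.array([
    [ 1,  1,  1,  1],
    [ 1, -1,  1, -1],
    [ 1,  1, -1, -1],
], dtype=float)

def nearest_attractor(state):
    """Collapse a module's state vector onto its best-matching attractor.
    This models the dynamics being dominated by attractor states and
    gives the intrinsic noise immunity mentioned in the text."""
    return attractors[np.argmax(attractors @ state)]

# A state interaction matrix between two modules, replacing a scalar weight.
# This particular matrix associates attractor 1 in module A with attractor 2
# in module B (a hypothetical pairing chosen for the example).
M = np.outer(attractors[2], attractors[1]) / 4.0

noisy_input = attractors[1] + np.array([0.3, -0.2, 0.1, 0.2])  # corrupted state
state_a = nearest_attractor(noisy_input)   # module A cleans up the noise
state_b = nearest_attractor(M @ state_a)   # interaction drives B to attractor 2
```

The point of the sketch is the dimensionality reduction: module B never sees module A's raw high-dimensional state, only the effect of A's attractor passed through the interaction matrix.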

1.1.4. Problems.

Computer requirements for large neural networks are substantial. Worse, highly connected neural nets tend to scale badly, order n^2, where n is the number of units. Little is known about the behavior of more biologically realistic, sparsely connected networks. It is currently impossible to build a neural network with anywhere near the size and connectivity of cerebral cortex. There are no practical applications of biologically realistic neurons.
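The quadratic scaling can be made concrete with a back-of-the-envelope count; the unit counts and per-unit fan-out below are arbitrary illustrative values, not figures from the project.

```python
def dense_connections(n):
    """Fully connected net: every unit connects to every other unit,
    so the connection count grows as order n^2."""
    return n * (n - 1)

def sparse_connections(n, k):
    """Sparsely connected net: each unit connects to a fixed k others,
    so the connection count grows only as order n."""
    return n * k

# Doubling the number of units roughly quadruples the dense connection count,
# while the sparse count merely doubles.
print(dense_connections(1000))       # 999000
print(dense_connections(2000))       # 3998000
print(sparse_connections(2000, 50))  # 100000
```

This gap is why little-studied sparse connectivity, which is also the biologically realistic case, matters so much for scaling these models up.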

1.2. Second Extreme.

1.2.1. Associatively Linked Networks.

The second class of brain-like computing models, associatively linked structures, is so much a part of traditional computer science that it is often not appreciated that it also serves as the basis for many models in cognitive science and linguistics. Probably the best-known example of such a structure is a semantic network. In the guise of production systems, such structures underlie most of the practically successful applications of artificial intelligence, as well as any computer application using tree search, where nodes are joined together by links. Models involving nodes and links have been widely applied in linguistics and computational linguistics. WordNet is a particularly pertinent example, where words are partially defined by their connections in a complex semantic network.

Computation in such network models usually means traversing the network from node to node over the links. The Figure shows an example of computation through what is called spreading activation. The simple network in the Figure concludes that canaries and ostriches are both birds.
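A minimal sketch of spreading activation over such a network follows. The node names come from the example in the text, but the exact link structure is an assumption about what the Figure shows.

```python
from collections import deque

# A tiny semantic network: nodes joined by "is-a" links
# (assumed structure, following the canary/ostrich example).
links = {
    "canary": ["bird"],
    "ostrich": ["bird"],
    "bird": ["animal"],
}

def spread(start):
    """Activate a node, then spread activation outward along links,
    collecting every node reached (a breadth-first traversal)."""
    active, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        for neighbor in links.get(node, []):
            if neighbor not in active:
                active.add(neighbor)
                frontier.append(neighbor)
    return active

# Activation spreading from "canary" and from "ostrich" meets at "bird":
# both are concluded to be birds (and, further up, animals).
common = spread("canary") & spread("ostrich")
```

Computation here really is just traversal: the conclusion "canaries and ostriches are both birds" falls out of where the two waves of activation intersect.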

Computer implementations are often straightforward and efficient. WordNet captures a significant fraction of the semantic relationships of English in a network with fewer than 200,000 nodes. In these systems, the number of links from a node is manageable, with values ranging from one for linked lists, to two for binary trees, up to a few dozen for ambiguous words in language-based networks.

1.2.2. Problems.

Associatively linked nodes form a valuable and efficient class of models. However, linked networks, for example the large trees arising from classic problems in Artificial Intelligence, are prone to combinatorial explosions; are unforgiving of noise, ambiguity, and error; and often require precisely specified, predetermined information. It can also be very difficult to make the connection to low-level nervous system behavior, that is, to sensation and perception. Ambiguity is a particular problem in language. Most words are ambiguous. This fact causes humans no particular difficulty, but it is hard for simple associative networks to handle. The inability to connect abstractions to the real world was a major reason for the limited success of “classic” 1970s Artificial Intelligence. Similarly, the inability to deal effectively with ambiguity limited our ability to do natural language understanding or machine translation for decades.