3. Software.
Cognitive software applications for natural language, text understanding,
and commonsense reasoning will become essential as time passes,
but we do not currently know how to write software with anything
like the properties or power of human cognition. We have argued
here in favor of a particular approach to building a brain-like
computer. Let us briefly discuss appropriate software and its
requirements.
3.1. Arithmetic on a Brain-Like Computer.
We have one worked-out demonstration of the way advanced cognitive
software might be structured (Anderson, 1998). Consider elementary
arithmetic. It can be treated strictly as a set of abstract
operations on numbers; for example, the positive integers can be
viewed as a one-dimensional linked list, the approach taken by
Peano's Postulates. However, humans do not work with numbers this
way.
Compelling experimental evidence suggests that humans operate
with numbers much as they do with sensory magnitudes like light
intensities, weights, etc. Therefore the nodes representing number
are not simple, but contain important internal structure representing
sensory magnitude, and in addition, complex associations involving
number name, spelling, character shape, etc. Although these “extraneous”
properties are not needed to get the correct answers to elementary
arithmetic, they seem to be essential for what humans call “mathematical
intuition,” the rich set of refined perceptions that underlie
both arithmetic and higher mathematics. If we wish to make an
artificial system work with abstractions the way we do, it will
be necessary to understand how humans do it.
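As a toy illustration of such a structured number node, the following Python sketch pairs an abstract value with a compressed sensory magnitude and an association table. All names here, and the choice of logarithmic compression for magnitude, are our own illustrative assumptions, not details of the cited model:

```python
import math

class NumberNode:
    """Hypothetical node for a number concept: an abstract value plus
    the 'extraneous' structure described above (sensory magnitude,
    number name, and learned associations)."""

    def __init__(self, value, name):
        self.value = value              # abstract integer
        self.name = name                # number name, e.g. "seven"
        # Compressed sensory magnitude; log scaling is one common
        # assumption for magnitude representations (Weber-Fechner-like).
        self.magnitude = math.log(value)
        self.associations = {}          # e.g. {"odd": 0.8, ...}

    def magnitude_distance(self, other):
        """Perceived difference between two numbers treated as sensory
        magnitudes rather than as symbols."""
        return abs(self.magnitude - other.magnitude)

two, three = NumberNode(2, "two"), NumberNode(3, "three")
eight, nine = NumberNode(8, "eight"), NumberNode(9, "nine")

# Compressed magnitude reproduces the classic distance/size effect:
# 2 vs. 3 feels "farther apart" than 8 vs. 9, although both differ by 1.
assert two.magnitude_distance(three) > eight.magnitude_distance(nine)
```

Nothing in this sketch is needed to compute 2 + 3 = 5; the extra structure exists to support the magnitude-like behavior the experimental evidence shows.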
3.2. Expanded Semantic Network Nodes.
Consider a semantic network. One natural computer implementation
is to let a node equal a basic computational unit, a CPU, as explicitly
assumed in Hillis’ book, The Connection Machine. However
real concepts contain a complex internal structure with powerful
sensory components, as we saw for arithmetic. We therefore assume
that the nodes in our network must be expanded; expanded nodes are
both more realistic and, we claim, a more flexible and powerful
computing tool.
Below we discuss the computational and representational assumptions
that follow from the idea of expanded nodes. The genesis of these
assumptions in cortical physiology and anatomy is obvious.
3.3. Expanded Nodes: Representational Assumptions.
• Regional structure. The 2D CPU array is divided into
regions. Each region performs one, or a small number, of computations.
• Topographic organization of regions. Sensory topography
can map directly into the CPU arrangement. Biological examples
are vision (topographic maps), audition (frequency maps), and
the skin senses (body surface maps).
• Data representation reflects and makes use of this topography.
Since we assume many interactions are local, the computation works
best when the necessary data is represented locally. The sensory
representations display sparse coding. The connections between
modules and between regions are sparse matrices.
• Both “analog” and “digital” processing
are essential. Continuous-valued activity patterns sum, interact,
and settle on a discrete output (attractor) that is surrounded
by a penumbra of weaker activities that sum together and provide
context. The penumbra can result from past activity fading,
incoming activity strengthening, and weak associations.
• Sensory processing is done by specialized sensory structures
in the sense organs and earlier pathways. The array therefore works
with information that has already been highly and effectively
processed. Not all modules receive sensory inputs.
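A minimal sketch of the representational assumptions above, with toy sizes and helper names of our own invention: regions are small 2-D grids of CPUs, connections between regions are stored as a sparse matrix (most possible entries absent), and topography makes "local" a well-defined notion:

```python
import random

random.seed(0)

GRID = 8  # each region is an 8x8 patch of CPUs (toy size; an assumption)

def sparse_projection(density=0.05):
    """Sparse connection matrix between two topographic regions,
    stored as {(src, dst): weight}. Most of the n*n possible
    connections are simply absent, per the sparse-matrix assumption."""
    n = GRID * GRID
    return {(i, j): random.uniform(-1, 1)
            for i in range(n) for j in range(n)
            if random.random() < density}

def local_neighbors(idx, radius=1):
    """Indices within `radius` of a unit on the 2-D grid: topographic
    layout means 'nearby data' is defined by grid position, so local
    interactions operate on locally represented data."""
    r, c = divmod(idx, GRID)
    return [nr * GRID + nc
            for nr in range(max(0, r - radius), min(GRID, r + radius + 1))
            for nc in range(max(0, c - radius), min(GRID, c + radius + 1))
            if (nr, nc) != (r, c)]

w = sparse_projection()
print(len(w), "of", (GRID * GRID) ** 2, "possible connections present")
```

At 5% density the projection keeps only a few hundred of the 4,096 possible connections, which is the point: computation that relies on locality and sparseness scales far better than all-to-all wiring.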
The Figure shows the large modules plus a few individual CPUs
out of the million or so. These modules correspond to small groups
of neurons if we accept the Network of Networks assumptions.
There are two ways CPUs interact.
In the first, local lateral transmission leads to the formation
of traveling waves and interference patterns. There is some evidence
for such mechanisms, for example, medial axis representations
and effects like those found by Kovacs and Julesz (1994).
In the second, there is a traditional projection system, where
potentially any CPU in the target region can receive input from
any CPU in the source region. In cortex, these projection systems
are nearly always bidirectional, that is, there is both massive
upward and downward neural projection.
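The first interaction mode can be caricatured in a few lines: a one-dimensional line of CPUs with purely local lateral coupling, through which an injected pulse spreads outward step by step. The update rule and parameters are illustrative assumptions, a stand-in for the much richer wave and interference dynamics the text describes:

```python
def step(activity, coupling=0.5, decay=0.8):
    """One time step of local lateral transmission on a 1-D line of
    CPUs: each unit's activity decays and receives input only from
    its two immediate neighbors (no long-range projections here)."""
    n = len(activity)
    return [decay * activity[i]
            + coupling * (activity[i - 1] if i > 0 else 0.0)
            + coupling * (activity[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

# A pulse injected at one end spreads laterally, one unit per step,
# forming the front of a traveling wave.
act = [1.0] + [0.0] * 9
for _ in range(4):
    act = step(act)
print([round(a, 3) for a in act])
```

Because transmission is strictly local, the wavefront advances exactly one unit per time step; interference patterns arise when two such fronts, launched from different sources, meet and sum.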
3.4. Expanded Nodes: Computational Assumptions.
We believe computation in a brain-like
computer will not be based on logic but on association, summation,
and discretization.
• Association is the key computational operation. There
is no basic mechanism performing traditional logic.
• Computational dynamics include a linear and a nonlinear
stage. We claim the system needs and uses both. The transition
from considering many answers in the linear stage to accepting
only one in the nonlinear stage is a key step in the computation:
weighing evidence and deciding. Nonlinear attractor dynamics
discretize an underlying continuous system.
• Formation of a stable assemblage corresponds to the result
of computation. Strong stable assemblages, which we might call
concepts, can have linguistic tags, that is, words. Complex
assemblages can be formed from the co-occurrence or sum of simple
ones and can give rise to their own associations.
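The linear-then-nonlinear dynamics above can be sketched in the style of Anderson's Brain-State-in-a-Box model: a linear feedback stage sums evidence, and saturation at the walls of the box supplies the nonlinearity that drives the state to a corner, a discrete attractor. The specific weights, learning rule, and parameters below are illustrative choices, not values from the source:

```python
def bsb_step(x, W, alpha=0.3, limit=1.0):
    """One step of Brain-State-in-a-Box-style dynamics: linear
    feedback through weight matrix W (evidence summation), then
    clipping to the box [-limit, limit] (the nonlinear stage that
    discretizes the underlying continuous system)."""
    n = len(x)
    y = [x[i] + alpha * sum(W[i][j] * x[j] for j in range(n))
         for i in range(n)]
    return [max(-limit, min(limit, v)) for v in y]

# Autoassociative weights storing one pattern p (a simple Hebbian
# outer product); p plays the role of a stored "answer".
p = [1.0, -1.0, 1.0, -1.0]
W = [[p[i] * p[j] / len(p) for j in range(len(p))] for i in range(len(p))]

# A weak, noisy starting state ("considering many answers") is driven
# to the stored corner ("accepting only one").
x = [0.4, -0.2, 0.1, -0.3]
for _ in range(30):
    x = bsb_step(x, W)
print(x)
```

The early iterations are effectively linear, so partial evidence from many sources can still sum; once components begin to saturate, the system commits, and the final state is the discrete corner itself.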
Assemblages act much like Hebbian cell assemblies (Hebb,
1949). It has proven difficult to simulate useful and stable
Hebbian cell assemblies based on individual neurons, and the idea
has not been popular recently. However, we conjecture that using
groups of neurons, as in the Network of Networks, will allow
construction of stable assemblages through associative learning.
We now have a better understanding than Hebb did of the properties
of cortical gain-control circuitry, of small networks, and of the
virtues of sparse coding. Some optical imaging evidence on the way
inferotemporal cortex codes complex objects is consistent with our
assumptions about small networks and assemblage formation (see
Tsunoda et al., 2001; Tanaka, 2003).
Context is a key cognitive computational mechanism. Context arises
from both spatial sources (other brain regions) and temporal
sources (past activity in the same and other regions). Every
assemblage has a penumbra of low-level associations that, we feel,
perform most of the work of contextual disambiguation and can
provide associative flexibility when combined with “analog”
control structures that model cognitive mechanisms such as
attention.
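As a toy sketch of penumbra-driven disambiguation (the word, vectors, and weighting scheme are our own illustrative assumptions): a weak context signal sums with an ambiguous input, and the stored pattern that best matches the blend wins:

```python
def disambiguate(word_vec, context_vec, meanings, context_weight=0.3):
    """Pick the meaning whose stored pattern best matches the
    ambiguous word activity plus a weak 'penumbra' of context
    activity. Context is deliberately down-weighted: it biases the
    decision without overriding strong direct evidence."""
    def dot(a, b):
        return sum(u * v for u, v in zip(a, b))
    blended = [w + context_weight * c for w, c in zip(word_vec, context_vec)]
    return max(meanings, key=lambda name: dot(blended, meanings[name]))

# "bank" is equally consistent with two meanings until weak context
# activity tips the balance one way or the other.
meanings = {"river": [1.0, 0.0], "money": [0.0, 1.0]}
bank = [0.5, 0.5]  # ambiguous on its own

assert disambiguate(bank, [1.0, 0.0], meanings) == "river"
assert disambiguate(bank, [0.0, 1.0], meanings) == "money"
```

The same summation machinery that settles an attractor does the disambiguation: no separate logical rule selects a meaning, only weighted association and a winner-take-all decision.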
