[Comp-neuro] spontaneously emerging cognitive capabilities

Mario Negrello mnegrello at gmail.com
Fri Aug 15 23:32:16 CEST 2008


> Dear Claudius, List,

> To be specific, we found that letting a system with a
> well-defined autonomous neural activity interact with
> sensory inputs results in the system performing a non-linear
> independent component analysis of its own. We believe this
> result to be remarkable. The standard approach is to write
> specialized code for algorithms like ICA, not to obtain
> them as a spontaneously emergent cognitive capability,
> resulting just from the general meta-principles used for
> the layout of the neural architecture.

I found something similar, using an approach that may cause frowns of
disdain or complacent smiles, but here goes.
Using an evolutionary robotics paradigm, I evolved 3D ODE physics
simulations of holonomic robots (3 wheels) with a pan-tilt camera
(4x4-pixel retina). The task was to follow some moving objects while
avoiding others in a simulated environment. I will not go into the
complications of the methods, but I will summarize some results of
the network analysis I performed on the robots that were able to
solve the task.
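
For concreteness, a minimal numpy sketch of the kind of controller I
mean: a discrete-time recurrent sigmoidal hidden layer with the sizes
above. The update rule and the random parameters are illustrative
stand-ins, not the evolved ones.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n_sensors, n_hidden = 16, 40

    # Random stand-in parameters; in the actual experiments, units,
    # connections, weights, and biases were all under evolutionary
    # selection.
    W_in  = rng.normal(0.0, 1.0, (n_hidden, n_sensors))  # retina -> hidden
    W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))   # hidden -> hidden
    b     = rng.normal(0.0, 0.1, n_hidden)

    def step(h, s):
        # One discrete-time update of the recurrent sigmoidal layer.
        return sigmoid(W_rec @ h + W_in @ s + b)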

It turns out that no matter how large the hidden layer (up to 40
recurrent sigmoidal units), and no matter how complex-looking the
attractors are in phase space, a PCA of the hidden units reveals a
massive reduction of dimensionality. Essentially, PCAs of the
hidden-layer activities during behavior are only two(!) dimensional.
Justifiably or not, I was staggered (though I acknowledge that some
may think this is trivial). From the perspective of the motor units
(which output force, so acceleration), the 16 sensory units, mapping
to a recurrent hidden layer of another 40 (so, very complex
attractors and transients), undergo a massive reduction of
dimensionality, which would have been concealed had I not performed
the PCAs. This is far from standard practice in the field of neurally
controlled evolutionary robotics. We revel in thinking that
attractors are super-complex.
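
The dimensionality check itself is simple; here is a sketch of the
analysis, with a random matrix standing in for the logged hidden
activities.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_hidden = 2000, 40

    # Stand-in for the hidden-unit activities logged during behavior
    # (rows = time steps, columns = units); in the real analysis this
    # matrix comes from recording the controller as the robot runs.
    H = rng.random((T, n_hidden))

    # PCA via SVD of the mean-centered activity matrix.
    Hc = H - H.mean(axis=0)
    _, svals, _ = np.linalg.svd(Hc, full_matrices=False)
    explained = svals**2 / np.sum(svals**2)
    print(np.cumsum(explained)[:5])
    # For the evolved controllers, the first two components carried
    # essentially all of the variance; this random stand-in won't.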

I think this is interesting because it was a product of structural
evolution of the networks (units, connections, weights, biases), not
of gradient descent, which would have surprised me less.

Someone might point out that, in a sense, I only got out what I put
in: two motor units, two dimensions in the PCA. But you see, (1) the
networks were evolved structurally, and (2) every motor unit receives
input from a number (up to 20) of its peers in the hidden layer,
somewhat like in liquid state machines or echo state networks, where
the output is a linear combination of the hidden layer (although in
my case there are recursions in the output units as well). Though I
can, to be sure, invent explanations based on embodiment and
reduction of dimensionality, this result was still surprising.
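
To make the wiring explicit, here is a sketch of that readout
structure with hypothetical weights: each motor unit pools a
high-dimensional hidden state, and the motor units feed back on each
other, so nothing in the architecture by itself forces the hidden
dynamics down to two dimensions.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(1)
    n_hidden, n_motors = 40, 2

    # Hypothetical weights standing in for the evolved ones: each
    # motor unit pools a sparse set of hidden peers (roughly 20 of
    # the 40), and the motor units also connect to each other.
    W_out = rng.normal(0.0, 1.0, (n_motors, n_hidden))
    W_out *= rng.random((n_motors, n_hidden)) < 0.5     # ~20 inputs/motor
    W_mm  = rng.normal(0.0, 0.5, (n_motors, n_motors))  # output recursion

    def motor_step(m, h):
        # Motor update; the sigmoid is an assumption about the output
        # nonlinearity, chosen to match the hidden units.
        return sigmoid(W_out @ h + W_mm @ m)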

With that in mind, I would like to extrapolate and ask a couple of
questions regarding the networks of simple organisms, such as insects
or worms.

Recently, Jim made two points concerning modelling and the complexity
of biology, citing a colleague of his who studied the neurons of some
worm in very minute anatomical detail. It took him his best years.
From this Jim concluded that (1) detailed biological models are more
informative, and (2) mathematical abstractions fall short in biology.
Good points, to be taken very seriously.

Questions:

1. True, the networks are organized around the function I prescribed,
but I still hadn't seen it coming. Does anyone want to comment on my
lack of foresight? Is this trivial?
2. Is it possible that organismic networks of significant complexity
at a variety of magnification factors (from molecules to networks)
may in fact, from the perspective of motor behavior, be simple? Their
networks surely are complex, but must their attractors be? (Before
someone rushes in with various imprecations: I realize that motor
behavior is not the only thing organisms do.)
3. Do you believe in abstract principles that organize the neural
systems of living beings, as for instance a balancing equation
matching organisms and environment?
4. Is this a lump of BS? (Pardon my French.)

A good weekend,

|=============]M[==============|
| www.firingrates.blogspot.com |

If the human brain were so simple that we could understand it,
we would be so simple that we couldn't.
-- Emerson Pugh


