[Comp-neuro] Re: Attractors, variability and noise

Igor Carron igorcarron at gmail.com
Fri Aug 15 18:29:34 CEST 2008


Robert,

Your example from astronomy parallels many different engineering fields.
One that I am knowledgeable about is nuclear engineering, that is, the
engineering that goes into building nuclear reactors that produce
electricity. For the past fifty years there has been a similar tendency to
build very complex models and then check them at every level in order to
be confident that they behave as they should. However, the more complex
the models, the longer the list of parameters grows. Ultimately, the
growing complexity comes from integrating the physics of neutron transport
with that of computational fluid dynamics (an orders-of-magnitude
difference in scale). This integration is required to fully understand how
a nuclear reactor core works. If you know either field, you know that the
models have to be somewhat coarse, because otherwise there is no way to
tell whether the newly integrated software can handle the various known
and unknown cases.
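
To illustrate the kind of integration described above, here is a minimal
sketch (my own illustration, not taken from any actual reactor code): two
single-physics solvers, reduced to toy update rules, exchange quantities
inside a fixed-point coupling loop until they agree. Every function,
feedback coefficient and number below is a hypothetical placeholder; real
neutron-transport/CFD coupling is vastly more elaborate, which is exactly
why the integrated models are so hard to validate.

    def neutronics_step(coolant_temperature):
        # Toy neutronics feedback: hotter coolant slightly lowers power
        # (arbitrary units, placeholder coefficients).
        return 100.0 - 0.1 * (coolant_temperature - 300.0)

    def thermal_hydraulics_step(power):
        # Toy thermal-hydraulics: coolant temperature rises with power.
        return 300.0 + 0.5 * power

    power, temperature = 100.0, 300.0
    for iteration in range(1, 51):   # fixed-point (Picard) coupling loop
        new_power = neutronics_step(temperature)
        new_temperature = thermal_hydraulics_step(new_power)
        converged = (abs(new_power - power) < 1e-9
                     and abs(new_temperature - temperature) < 1e-9)
        power, temperature = new_power, new_temperature
        if converged:
            break

    print(f"converged after {iteration} coupling iterations: "
          f"power = {power:.3f}, coolant temperature = {temperature:.3f}")

The point of even this toy version is that each added physics module
brings its own parameters into the coupling loop, so the combined
parameter list grows quickly.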

This situation has led to several benchmark exercises: a scenario is run
on an experimental reactor and the findings are embargoed, so that the
different national labs can simulate the same conditions and find out
whether their models reproduce these very complex responses.

While this has produced a certain degree of confidence for known
situations, there is always some tweaking required of one model or another
to fit the latest benchmark exercise.

Ultimately, as with the brain question, the issue is: how can one become
comfortable with a very complex system on one hand and a very complex
model on the other?

Can one map the expected behavior of a normal brain onto the expected
behavior of a complex model over a large part of the parameter space? How
do you explore a large part of the model parameter space when you only
have a smaller set of brain data?

Can two different complex models yield similar results and behavior? If
so, what criterion do you use to favor one over the other?
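
To make the parameter-space questions a bit more concrete, here is a
minimal sketch (my own, not from the original discussion) of one common
screening strategy: sample the model parameters broadly, simulate, and
keep only the parameter sets whose outputs stay close to the few
observations available, in the spirit of rejection-style approximate
Bayesian computation. The toy model, parameter ranges and tolerance are
hypothetical placeholders rather than anything drawn from nuclear or
neural modeling.

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_model(params, t):
        # Hypothetical stand-in for a complex simulator: a damped oscillation.
        amplitude, decay, frequency = params
        return amplitude * np.exp(-decay * t) * np.cos(frequency * t)

    # A small "experimental" data set (synthetic here, for illustration only).
    t_obs = np.linspace(0.0, 5.0, 8)
    y_obs = toy_model((1.0, 0.3, 2.0), t_obs) + rng.normal(0.0, 0.02, t_obs.size)

    # Broad sampling of the model parameter space.
    n_samples = 20_000
    samples = np.column_stack([
        rng.uniform(0.5, 1.5, n_samples),   # amplitude
        rng.uniform(0.0, 1.0, n_samples),   # decay rate
        rng.uniform(1.0, 3.0, n_samples),   # angular frequency
    ])

    # Keep only the parameter sets whose output stays close to the sparse data.
    tolerance = 0.05
    errors = np.array([np.sqrt(np.mean((toy_model(p, t_obs) - y_obs) ** 2))
                       for p in samples])
    accepted = samples[errors < tolerance]

    print(f"{len(accepted)} of {n_samples} sampled parameter sets are "
          f"consistent with the {t_obs.size} observations")

The fraction of accepted samples and the spread of the surviving
parameters give a rough sense of how tightly a small data set constrains a
complex model; if two different models both survive such a screening, a
parsimony criterion such as penalizing the number of free parameters is
one common way to favor one over the other.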

Cheers,

Igor.


------------------------
Igor Carron, Ph.D.
http://nuit-blanche.blogspot.com


On Wed, Aug 13, 2008 at 1:23 PM, Robert Cannon <robert.c.cannon at gmail.com> wrote:

>
>
>> As I understand the other end of the spectrum, we construct increasingly
>> realistic models and end up with a simulated brain without a real
>> understanding of how it works, which makes no sense to me.  Understanding is
>> what we're after, and that understanding can only reside in the brains of
>> the population of scientists, not in their models.
>>
>
> Brad's point is fascinating - not least because I couldn't disagree more.
> :)
>
> I do like the notion of understanding, but I suspect it is also somewhat
> self-indulgent, because there may not be a level on which it can be shared
> above that of working models.
>
> To help explain why, when I was working in astronomy there was a
> feeling among many of my colleagues that there should be a moratorium
> on publication of papers purporting to explain a particular classical
> phenomenon because the type of explanations being sought couldn't
> actually exist. The problem is a fairly basic bit of astrophysics -
> the transition of many stars from dwarfs to giants for the last tenth
> of their active lives. There is no mystery here: there are
> half a dozen equations and a bucket-full of well known physical data.
> You implement them on a computer and you get something that behaves pretty
> much like a real star.  Then you've got your "prodable brain"
> equivalent and it is natural to seek higher level,
> intuitive, easily communicated, mathematically elegant explanations
> of what's going on. Quite a few (mutually incompatible) explanations
> were published.
>
> The whole game unraveled however when people began addressing
> "what-if" questions with these models. By definition, the explanations
> are insensitive to quantitative details (like the opacity or
> pressure-density relationship for stellar material) but it turns out
> that if you compute what would happen with slightly different physics
> (bigger gravitational constant, different opacities etc) then stars
> don't necessarily turn into giants. So they are not actually insensitive
> to the quantitative details. In effect the parameter
> space is lumpy and we're in a particular patch (of course, you can
> theorize about why we have to be in that patch but that's another
> question entirely). Elegant explanations assume smoothness but
> the space isn't smooth, so no such explanations can be correct.
>
> Another observation was that when you ask people to predict
> the outcomes of these what-if questions (about the only type of
> experimental intervention that is possible in astronomy) then the people
> who write and run the programs often do better than the theorists.
>
> So, like most areas of expertise, you can develop an intuitive
> understanding and internal model of the domain by years of
> application, but there's no short cut - you can't get it from a
> book. Other people who want the same abilities will have to get
> there in the same way by internalizing the same mass of data.
>
> My point is that for this particular problem, high-level theory is
> not much use. Some of it is epiphenomenal, and the rest is just plain
> wrong. The models work fine but they are too complicated to run in
> your head. The simpler things that you can run in your head or on
> paper are too coarse to be any use.
>
> My personal guess, which I realize is deeply unpopular, is that this
> also applies to much of neuroscience. If we do have a simulated brain,
> then it will have been built using a very large volume of data, a few
> equations and a lot of extremely sophisticated software
> engineering. I'm not sure there will be any point in looking for
> theories at a higher level than the design documents and software
> architecture that went into making it. If such complexity reduction
> were possible, then you'd hope the engineers have got there by then.
>
> The issue of whether, when you have a 64 bit floating point unit at
> your disposal instead of a mass of synapses and ion channels, you can
> make an equivalent but more mathematical and easily computed version
> of a Purkinje cell seems like a case of premature optimization
> (or perhaps of engineering expediency) driven by today's prevalent
> technology, not a question of durable scientific relevance.
>
> As I see it, the area that needs the most attention (both funding
> and education) but receives practically none, is not maths, but how on
> earth we develop the software engineering and data management
> concepts, languages and technologies that will enable us to build the
> next n generations of models.
>
>
> Robert