[Comp-neuro] Re: Attractors, variability and noise
r.gayler at gmail.com
Tue Aug 19 04:33:26 CEST 2008
> what we should be doing is publishing the models and building the text and
> figure descriptions of their behavior around them. Why shouldn't figures
> for publication be generated from the model itself? Why shouldn't textual
> descriptions be directly linked to model components? And why shouldn't one
> be able to link to the experimental datasets from which the model was tuned?
That is precisely the thrust of the reproducible research movement that I
mentioned in a slightly earlier post. For example, the
Bioconductor developers want the research paper to effectively
contain all the code and data to allow the reader to regenerate (and
experiment with) the empirical results.
> Is there any reason in principle, for example, that some of the peer
> review of such publications couldn't be done automatically?
> What would that do to the geo-political mess of peer review that we all
> now live with
Are you seriously suggesting that any current peer-reviewer would go to the
bother of attempting to verify claimed results?
I'm not convinced that is part of the peer-reviewer's role, and I am even
less convinced that automating peer review is feasible
(except to verify that the final form of the article can be regenerated from
the source code). Claerbout in the third paragraph of
http://sepwww.stanford.edu/research/redoc/IRIS.html states "I have three
textbooks on reflection seismology containing 243
illustrations that are computed from data and theory. I routinely erase all
these 243 figures, and then I recompute them.
For a PhD thesis to be accepted by me, I routinely remove the computed
figures and recompute them."
You could probably also tie this discussion thread into the Open Access
publication movement http://en.wikipedia.org/wiki/Open_access
Their interest in alternative peer-review models might tie into the notion
that making computational research reproducible via its publications would
allow every reader to verify the claimed results (if they wished), and also
to experiment with modifications to the experimental preparations to assess
whether the claimed results depend crucially on the exact parameters chosen
by the authors.
> What would that do to the geo-political mess of peer review that we all
> now live with - where some would say the tyranny of ideas has all but
> stifled advancement in our science?
You might be interested in the work of Don Braben. He argues that current
funding mechanisms and peer review conspire to suppress revolutionary
research. He maintains that researchers like Planck and Einstein would never
have been funded under today's arrangements. His views are not just
vapour-ware: for some years he ran a largish funding scheme that specifically
aimed to fund revolutionary research that could not attract standard funding.
The case studies on his website make for interesting reading
(http://www.frontier.co.uk/VentureResearch/Table.html). You will need a copy
of his most recent book to read the extended descriptions.
From: comp-neuro-bounces at neuroinf.org
[mailto:comp-neuro-bounces at neuroinf.org] On Behalf Of james bower
Sent: Friday, 15 August 2008 6:32 AM
To: jbednar at inf.ed.ac.uk
Cc: comp-neuro at neuroinf.org
Subject: Re: [Comp-neuro] Re: Attractors, variability and noise
The definition I proposed for realistic models previously was agnostic with
respect to the form of the models -- yes, we use HH compartmental models at
the single-cell level, but we have used other forms too, including even
forms that are much more Bard-like than Bower-like (cf. Crook, S.M.,
Ermentrout, G.B., and Bower, J.M. (1998) Spike frequency adaptation
affects the synchronization properties of networks of cortical oscillators.
Neural Computation 10:837-854).
Realistic modeling is an attitude and an approach -- not a particular form
of model. In some sense it is like pornography: you know it when you see
it. However, intentionally representing neurons in a form that is
convenient for the function you are imposing is a dead giveaway that one
is involved in modeling as proof of concept, rather than modeling as a tool
to figure out what you don't know.
Ultimately, it is the brain itself that will dictate what tools are
appropriate, and the nature of the understanding that can be achieved.
In conversations with my friends who have stayed in "Physics", I have often
been asked why anyone would ever want to take on the study of the brain,
given the clear insufficiency of the tools available.
I sometimes answer that it is clear from the history of science that new
tools follow new problems. Often the response has been: "Sure -- but why
not study some intermediate system, like the weather or ocean currents --
something that can only be done with numerical simulations (rather than
closed-form solutions), but at the same time can rely on physics we
understand (thermodynamics, turbulence, etc.)? Why try to make such a large
step into the unknown?"
Really, of course, the truth is that we are all interested in the brain, and
have been foolish enough to think that we can start to study it now, absent
the proper tools.
Given that, however, as I have said already in this discussion, one has to
be very careful about the extent to which available tools dictate how you
think about the system -- call this the "Tyranny of the Tools".
I do think we can say something already about the kinds of tools we need to
continue to make progress, and I agree with several previous comments that
those tools are all tied up with the technology of numerical simulation,
including, for example, mechanisms for crossing scales, understanding
complex parameter spaces, building the simulations themselves, visualizing
the results, comparing different models and their performance, cross
simulator operations, parallelization, and importantly, using simulations
themselves to educate new generations of neuroscientists both in how to
build simulations and also about our current state of knowledge about
brains. This is why the Book of GENESIS was written in two parts, the first
part using GENESIS-based models to teach neuroscience and the second part
showing how those models can be changed to do further research.
I would say, however, that one of the most important tool-based innovations
we need to move forward, is to replace our current methods of publishing
results with a form of publication appropriate for neurobiology. It is no
coincidence that Newton and his colleagues, at the same time they were
developing the theoretical basis for modern physics, also invented the
modern scientific journal. The problem is that the form of that journal (a
few pages, some figures, some equations) is a completely inappropriate way
to represent knowledge about complex biological systems and realistic models
in particular. Neuro-DB is a step in the right direction, but only a baby
step. With Neuro-DB one can 'publish' the model on which a journal article
is based. However, what we should be doing is publishing the models and
building the text and figure descriptions of their behavior around them. Why
shouldn't figures for publication be generated from the model itself? Why
shouldn't textual descriptions be directly linked to model components? And
why shouldn't one be able to link to the experimental datasets from which
the model was tuned? Is there any reason in principle, for example, that
some of the peer review of such publications couldn't be done automatically?
What would that do to the geo-political mess of peer review that we all now
live with - where some would say the tyranny of ideas has all but stifled
advancement in our science?
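The idea of generating publication figures directly from the model can be sketched in a few lines. Everything below is a hypothetical illustration, not part of GENESIS or Neuro-DB: the toy leaky-integrator "model", the function names, and the JSON provenance record are all assumptions. The point is simply that a published figure can carry a machine-checkable record of the exact model and parameters that produced it, so any reader can regenerate it and verify the match.

```python
import hashlib
import json

def simulate(tau=20.0, dt=1.0, i_ext=1.5, steps=100):
    """Toy stand-in for a real simulation: a leaky integrator driven by
    constant input. Returns the membrane-variable trace."""
    v, trace = 0.0, []
    for _ in range(steps):
        v += dt * (-v / tau + i_ext)
        trace.append(v)
    return trace

def publish_figure(params):
    """Regenerate the figure's underlying data from the model itself, and
    attach the exact parameters plus a hash of (params, data) so a reader
    can verify the published figure really came from the published model."""
    trace = simulate(**params)
    payload = {"params": params, "data": trace}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {"figure_data": trace,
            "provenance": {"params": params, "sha256": digest}}

record = publish_figure({"tau": 20.0, "dt": 1.0, "i_ext": 1.5, "steps": 100})
```

Rerunning `publish_figure` with the published parameters must reproduce the same hash; a mismatch means the figure and the model have drifted apart, which is exactly what Claerbout's erase-and-recompute discipline is meant to catch.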
Once a model is submitted for publication, why can't we figure out how to
automatically test it for important properties, like: 1) how fragile are
the principal results with respect to key parameters? 2) how accurately does
the model replicate the data? 3) does one set of parameters REALLY produce
all the results? One could even imagine some formal (Bayesian?) basis for
a comparison between data and modeling results, or even for determining
whether a new model is significantly better than, or even different from,
prior models (cf. Baldi, P., Vanier, M.C., and Bower, J.M. (1998) On the use
of Bayesian methods for evaluating compartmental neural models. J.
Computational Neurosci. 5:285-314). Obviously, given the structure of GENESIS (or Neuron),
one could even, in principle, automatically track the antecedents to the new
model -- and start to apply a more rigorous standard for individual
contributions or the lineage of ideas.
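The automated tests suggested above can be mocked up with standard tools. The sketch below is purely illustrative: the linear "models", the Gaussian noise assumption, and the use of BIC as the penalized-likelihood score are all stand-ins for full simulations and for whatever formal Bayesian machinery one would actually use. It shows (1) a parameter-fragility sweep and (2) a comparison of two candidate models against the same data, where the extra free parameter of the second model is penalized.

```python
import math
import random

random.seed(0)  # fixed seed so the "experiment" is reproducible

# Hypothetical "experimental data": noisy observations of y = 2x.
def experiment(x):
    return 2.0 * x + random.gauss(0.0, 0.1)

data = [(x, experiment(x)) for x in [0.1 * i for i in range(20)]]

# Two candidate "models" standing in for full simulations.
def model_a(x, gain):              # one free parameter
    return gain * x

def model_b(x, gain, offset):      # two free parameters
    return gain * x + offset

def log_likelihood(residuals, sigma=0.1):
    """Gaussian log-likelihood of the model-data residuals."""
    return sum(-0.5 * (r / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for r in residuals)

def bic(ll, k, n):
    """Bayesian Information Criterion: lower is better; penalizes the
    number of free parameters k given n data points."""
    return k * math.log(n) - 2.0 * ll

# 1) Fragility: how fast does the fit degrade as a key parameter moves?
def fragility(param_values):
    return [log_likelihood([y - model_a(x, g) for x, y in data])
            for g in param_values]

# 2) Model comparison: equal fits, but model_b pays for its extra parameter.
n = len(data)
ll_a = log_likelihood([y - model_a(x, 2.0) for x, y in data])
ll_b = log_likelihood([y - model_b(x, 2.0, 0.0) for x, y in data])
bic_a, bic_b = bic(ll_a, 1, n), bic(ll_b, 2, n)
```

For a real compartmental model the "parameter values" would be channel densities or kinetics and the likelihood would come from comparing simulated and recorded traces, but the same skeleton applies: sweep the parameters, score the fit, and penalize complexity when comparing models.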
I personally believe that this change in publication process will eventually
be critical for us to advance.
As some of you are aware, we have spent the last 5 years working on a
fundamentally new design for the GENESIS project -- we will shortly be
publishing several papers describing that design. We will also start
releasing beta versions for testing. Core objectives for the redesign are:
simulator interoperability, integrated multi-scale simulations, and
model-based publication. We are also currently planning a meeting in San
Antonio in March of next year, to work on practical aspects of all of the
above with other simulator and middleware developers.
On Aug 13, 2008, at 6:39 PM, James A. Bednar wrote:
| From: comp-neuro-bounces at neuroinf.org On Behalf Of jim bower
| Sent: 01 August 2008 13:45
| To: bard at math.pitt.edu
| Subject: Re: [Comp-neuro] From Socrates to Ptolemy
| But, biology is different and the difference and the conflict is
| perhaps best indicated in the difference between abstracted models
| and "realistic" models.
| First, I would define realistic models not only as those that
| include as much of the actual structure as possible, but also and
| perhaps most importantly, models that are "idea" neutral in their
Your written definition of a "realistic" model sounds reasonable to
me, but in practice it seems like you only consider multicompartmental
Hodgkin/Huxley models to be realistic (based on this discussion and
your talks at previous GUM/WAM-BAMM meetings). Is that true?
I agree that multicompartmental models are expressed at the right
level for addressing questions about single neurons. However, I can't
agree that such models are "idea neutral" or "not abstracted", because
in practice they rely on an idea that the rest of the brain can
reasonably be modeled extremely abstractly, e.g. as a set of spike
trains following some simple distribution. Such an assumption only
makes sense to me if the research questions are all about the behavior
of that one neuron, rather than the coordinated behavior of the large
populations of neurons that underlie functions like early vision in
mammals (my own area of interest).
For understanding visual processing, "including as much of the actual
structure as possible" means including the hundreds of thousands or
millions of neurons known to be involved. When it is not possible to
build tractable multicompartmental models of that many neurons (due to
the enormous numbers of free parameters and the computational
complexity), there is no way to avoid relying on an idea or
abstraction. One must choose either to simplify the neuronal context
(by modeling only a few neurons at a very detailed level), or to
simplify the models of each neuron. Someday perhaps we might not have
to make such a choice, but certainly today and for the foreseeable
future we do.
Which option to choose, then, depends on the question being asked,
which depends on the experimental techniques available and on the
modelers' "ideas" and "abstractions" about what is important. If
trying to understand and explain detailed measurements of single
neurons, then sure, build multicompartmental models of single neurons,
with an embarrassingly simplified representation of the rest of the
brain (and of the environmental and behavioral context). But if
trying to understand data from large neural populations, e.g. from
2-photon calcium imaging of patches of hundreds or thousands of
visually driven neurons, or from optical imaging of even larger areas,
then a "realistic" model should contain huge numbers of simpler models
of the individual neurons, so that the large-scale behavior can be
matched to the data available. Such forms of imaging data don't
provide any meaningful constraints on the parameters of a
multicompartmental model, but they do match very well to simplified
single-compartment (point-neuron) models. Thus I would argue that
very simple point-neuron models can be appropriate and "realistic", if
they are embedded in a large network whose behavior is closely tied to
imaging results at that level (as in my own current work).
Of course, one could argue that large-scale imaging methods that only
measure firing rates or average membrane potentials (and indirectly at
that!) are simply irrelevant, precisely because they don't capture
enough of the behavior of individual neurons to constrain a
compartmental model. But that's an idea about what is important and
what is ok to ignore, just like my own opinion that neural behavior
only makes sense in the context of large populations.
Dr. James A. Bednar
Institute for Adaptive and Neural Computation
University of Edinburgh
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
Dr. James M. Bower Ph.D.
Professor of Computational Neuroscience
Research Imaging Center
University of Texas Health Science Center -
- San Antonio
8403 Floyd Curl Drive
San Antonio Texas 78284-6240
Main Number: 210-567-8100
Fax: 210-567-8152