Some thoughts about NeuroML and standardization

I’m pleased to say that we have released Neuronvisio 0.8.5, which brings several small improvements and better documentation. I’m pretty sure there is always room to improve the documentation further.

Anyway, first things first: the cerebellum network rendered in Neuronvisio, with the model taken from the NeuroML website.

Cerebellum Network, taken from the NeuroML example pack

The idea with this release was to demonstrate the ability of Neuronvisio to also visualize network models instantiated in NEURON. To be precise, this ability has been there since version 0.1.0, but now we are highlighting it.

Neuronvisio is able to import three different file formats: HDF5 files structured in the Neuronvisio format, hoc files, and NeuroML files.

The first is our way of using HDF5, which gives us the ability to save all the data of a simulation, plus the model itself, in one single file, which can then be reloaded and moved around. Here we do all the work ourselves.
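For example, a typical round-trip looks roughly like this (a minimal sketch: the file name is made up, and you should double-check the exact save call against the Manager documentation for your version):

from neuronvisio.controls import Controls
controls = Controls() # starting the GUI
# ... set up and run your NEURON simulation as usual ...
controls.manager.save_to_hdf('my_simulation.h5') # model + results in one file
# later, or on another machine:
controls.load('my_simulation.h5')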

The other two are formats that Neuronvisio does not import directly; instead it lets NEURON do the heavy lifting, to avoid any duplication. The hoc format is the classic NEURON interpreted scripting language, while NeuroML is an effort to standardize the way neuronal models are encoded.

In particular I would like to stress the last point: Neuronvisio does not have an ad-hoc NeuroML importer, but re-uses the one provided by NEURON. We are now just exposing an easy way to load a NeuroML file directly in Neuronvisio with the load method of the Controls class:

from neuronvisio.controls import Controls
controls = Controls() # starting the GUI
controls.load('path/to/my_model.xml') # or .h5 or .hoc

or by launching the program directly, if you prefer:

$ neuronvisio [path/to/my_model.xml] # or .h5 or .hoc

This gives us one powerful interface that simplifies the life of the user.

Of course, if your model is in Python, you can always run it within IPython as a standard Python file:

$ neuronvisio
In [1]: run path/to/my_model_in_python

The only problem is that the NeuroML importer in NEURON does not properly handle the network files of NeuroML; this has been registered as issue #50 on our bug-tracker. It belongs more to NEURON than to Neuronvisio, but NEURON doesn’t have a bug-tracker, so we logged it on ours, and we will link to any other place where the discussion takes place. NEURON has an amazing forum with near-instantaneous answers from the NEURON community, including Hines and Carnevale, and we will bring the issue over there.

So, we didn’t write our own NeuroML importer, because there is no point in replicating what existing software already does. That’s also why we are now collaborating on the libNeuroML library: to have one good library that can load any NeuroML model properly, and then give developers the ability to map it to their own data structures.

This is the same approach used in the SBML community, which I think is very powerful.

P.S.: So how did we manage to load the network in NEURON and visualize it in Neuronvisio, if the NEURON (sorry, NEURON has to be written all capitals to be precise…) NeuroML importer is not up to the job yet? We used the neuroConstruct program, which is able to export a model to NEURON, and loaded the model from the hoc files it generates.
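Concretely, once neuroConstruct has generated the NEURON code, loading it is the same one-liner shown above (the path is just a hypothetical example of where the generated hoc file might end up):

$ neuronvisio path/to/generatedNEURON/MyNetwork.hoc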

Tools for a computational scientist

So, how do you keep track of your work?

If you are in a wet lab, you usually end up using a lab book, where all the experiments are recorded. You can replicate an experiment, and do something new. It’s a pretty cool system, although I think it’s not great for computational scientists. In computational science there is the same problem of recording what is going on, and what happened before. On top of that, there is also the problem of sharing the program with other people to address reproducibility. The problem can therefore be broken down into two different sub-problems:

  • record the changes happening in a computational project, in particular to the code used to run the project;
  • record the results of different executions and link them to a certain state of the code.
A classic approach is “do nothing”. The code sits on your hard drive somewhere. Maybe it is organized in folders, with descriptive file names. However, there is no history attached; you have no idea what’s going on, or which is the latest version. As you guessed, this is not a good position to be in, because you spend time figuring out how to track your work instead of doing your work, and you have the feeling that you don’t know what’s going on. This is bad.

Fortunately, this can be solved :)

This is one of the problems that can be solved using a Version Control System, which was invented exactly to track changes in text files (and more). I have found it very useful to work with Git, which is an amazing Distributed Version Control System (DVCS). The most important benefit you get when using a version control system, and Git in particular, is the ability to be more brave, because Git makes it very easy to create a branch and test something new as you go.
Branches

Branches in Git are quick and cheap! Easy to experiment!

Did you ever find yourself in a situation where you wanted to try something new, which could break a lot of different things in your repository, but you didn’t want to mess with your current code?

Well, Git gives you the ability to create a branch very cheaply, to test your new crazy idea and see if it works, in an environment completely isolated from the code sitting on your master branch. This means you can try new things, which tends to be quite important in science, because we don’t usually know where we are going, and trying more than one solution opens up a lot of different possibilities.

The other good thing is that you have a log of whatever happened, and you can always go back to the version that was working and restart from there. For example, this is the commit log from Neuronvisio.
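To give an idea, a typical experiment looks something like this (the branch name and the commit sha are made up):

$ git checkout -b crazy-idea # create and switch to an experimental branch
# ... hack, commit, test ...
$ git log --oneline # the history of everything that happened
$ git checkout master # the stable code is untouched
$ git merge crazy-idea # keep the experiment if it worked...
$ git branch -D crazy-idea # ...or throw the branch away if it didn't
$ git checkout -b retry abc1234 # or restart from a commit that worked (sha from the log)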
I ran a hands-on crash course about Git at the EBI (the repo). The course was very well received, and people started to understand the power of using fast tools to free up some mental space.

Another big plus for Git is the ability to host your project on GitHub, which makes collaboration super-easy. These are the contributors to Neuronvisio, for example.

Using a version control system is a good idea, and integrating it with Sumatra is also a very good idea. Sumatra automatically tracks all the parameters and the versions of the programs used. I’ll talk about it properly in a later post; for now, have a look at the slides.
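Just to give a taste of it in the meantime, the basic workflow is something like this (a sketch: the exact flags depend on how your project is set up):

$ smt init MyProject # turn the version-controlled project into a Sumatra project
$ smt configure --executable=python --main=main.py
$ smt run default.param # run the simulation, recording code version and parameters
$ smt list # browse the recorded runs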

Neuronvisio with ModelDb plugged in, released into the wild

We have just released Neuronvisio 0.7.0.

With this release it is possible to browse the models present in the ModelDb database, and have a look at the readme and the properties of each model.

Model information and properties are presented to the user at a glance

The Load Model button downloads, extracts, compiles, and loads the model in one click. Sweet.
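Under the hood it is the same sequence you would otherwise type by hand, something like this (the URL and file names are hypothetical; mosinit.hoc is the conventional ModelDb entry point):

$ wget http://example.org/path/to/model.zip # download the model archive from ModelDb
$ unzip model.zip && cd model
$ nrnivmodl # compile the NEURON mechanisms (.mod files)
$ neuronvisio mosinit.hoc # load it in Neuronvisio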

The other big thing is that I didn’t write all this code: 0.7.0 is actually the first release that features a contribution from another person (before, it was a one-man band!). Uri wrote the scraper for ModelDb and I hooked it into the GUI. We developed using the pull-request workflow, which GitHub makes very nice and clean.

If you’re interested in computational neuroscience, and you are using NEURON, give Neuronvisio a go.

Impact graph on the Neuronvisio repo

Lately, GitHub has rolled out a series of graphs to visualize the commits through time.

An interesting one is the impact graph. This is when everything started:

Impact graph: the beginning of Neuronvisio

Neuronvisio started as a one-man-band project, actually as a spin-off of my PhD, when I realized that I was building something that was missing and that could be useful for other people as well. So I detached the Neuronvisio code into its own package, and released it online. With time, Neuronvisio started to get some users, and people actually wrote enthusiastically about it on the mailing list. I was proud. Last August/September Uri decided to contribute to the software, to extend its features, in particular to plug it into ModelDB, making it easy to browse the database, and to download and load a model directly with one button. I helped on the GUI part, while he took care of the ModelDB representation.

This is the graph of his impact on the software, at a later stage.

I really enjoyed the pull-request method, and I have to say that GitHub made the collaboration very easy and nice. It was good fun and I’m looking forward to other contributions.

The new features are not yet released (we’ll do a release in a little while); however, if you can’t wait, you can grab the code from GitHub master and give it a go!

Unity and accents

With the release of Unity and the (subsequent) upgrade, I found myself with a problem I thought I had solved a long time ago: how to write accents. It turns out that the compose key (Super/Meta, the one with the Windows logo) is used for the default shortcuts, which you badly need to have a decent experience with Unity.

The good thing is that I’ve discovered the “UK – with WinKey” layout, which provides the most classic accents like á é ó, and can actually do the trick.

On another note, I think Unity is a pretty good idea… but I have to use it a bit more to tell the true story.

Update: Actually, I found out that on the “UK – with WinKey” layout the standard è is of the less used type (there are two: e acute (é) and e grave (è)).

Now, the last one is very important because it is the third person singular of the verb “to be” in Italian (Today is a good day, Oggi è una bella giornata), and therefore it is used a lot.
I found a decent compromise using a US layout with Italian letters:
Keyboard layout: “Italy – US keyboard with Italian letters”

You get the accents with AltGr.
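If you prefer to set the layout from a terminal rather than from the settings dialog, something like this should do the trick (layout and variant names as defined in xkeyboard-config):

$ setxkbmap -layout it -variant us # "Italy - US keyboard with Italian letters"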

HIH

Opensource philosophy to science?

Lately there has been a clear movement to push science towards a more open way of doing research.

I’m not talking about Open Access publishing, which is still important, but about the real art and sweat of doing science.

Science has always been very collaborative; however, the dimension of this collaborative effort has usually been restricted to a small group. This is not the case when general problems have to be tackled.

For example, when the problem is the definition of a standard, like NeuroML or SBML, its development is a community-driven project, where the community works as a whole to achieve a standard backed by the largest number of interested people, so it can be easily adopted.

The beneficial impact of standards is not the topic of this post; for the sake of brevity I just want to point out that a model properly encoded in a well-recognised standard makes it possible to share the work of a modeller (in this instance) and to have the model re-used by other people.

Along the same lines, OpenWetWare set out to share the protocols used in the lab as soon as they were established, and actually even before that, as “work in progress”.

The ability of a scientist to be part of a community is not taken into account at all, due to the Publish-or-Perish system which is right now up and running. This model does not encourage collaboration; it actually creates groups of people competing on the same topic to scoop each other. This is a broken system.

It’s so broken that some people even decide to leave academia, and that is only one of many cases. A lot of letters are also available in Nature, and this article from the Economist became quite famous as well.

Therefore I watched with a lot of interest the new approach proposed by Dall’Olio, which consists of the collaborative editing of papers.

So far, if I haven’t missed any, at least two papers have been written with this approach, which is very interesting and sheds a bright light on the future.

Still, the number of positions available in academia and the way recruitment is organized follow the current model, which does not fit the market and is prone to discarding talented people very easily. There should at least be a live debate on how to fix this problem, and on how to move science towards a super-collaborative discipline.

Happier scientists and better science sounds good to me.