Chung-Chuan Lo

Institute of Bioinformatics and Structural Biology, National Tsing Hua University
Hsinchu City, Taiwan

Speaker of Workshop 1

Will talk about: A layered description language for spiking neural network modeling

Bio sketch:

Chung-Chuan Lo received his Ph.D. in Physics from Boston University in 2004. He then conducted postdoctoral research in computational neuroscience in Dr. Xiao-Jing Wang's lab, first at Brandeis University (2004-2006) and then at Yale University (2006-2008). In Dr. Wang's lab he proposed large-scale neural network models that suggested neurobiological mechanisms underlying decision making, inhibitory control, and conflict resolution between automatic and voluntary movements. In 2008 he joined the Institute of Bioinformatics and Structural Biology at National Tsing Hua University, Taiwan, as an assistant professor, and the following year he moved to the Institute of Systems Neuroscience at the same university. His lab currently focuses on 1) neural circuit mechanisms for the dynamical modulation of perceptual decision making and executive functions, and 2) data-driven circuit models of the central nervous system of Drosophila.

Talk abstract:

Neural modeling has a long history in the development of modern neuroscience and has greatly enhanced our understanding of the principles of neural activity and of the interactions between neurons. However, as a rapidly growing field, neural network modeling has reached a level of complexity that makes the exchange of information between research groups extremely difficult. It has also become increasingly unlikely that modeling results obtained in one lab can be exactly reproduced in another lab that uses a different simulator. The problem arises from the fact that the field of computational neuroscience lacks appropriate standards for communicating network models. To address this issue, the International Neuroinformatics Coordinating Facility (INCF) has initiated a project, the Network Interchange for Neuroscience Modeling Language (NineML), which provides a standardized machine-readable language for spiking neural network models, with the aim of easing model sharing and facilitating the replication of results across different simulators.

In the talk I will introduce the first version of NineML. Its most innovative features include:
1. Layered: The complete description of a neural network model in NineML is separated into a user layer and an abstraction layer. The XML-based user layer provides a syntax to specify the instantiation and parameterization of a network model in biological terms. The abstraction layer provides explicit descriptions of the core concepts, mathematics, model variables, and state-update rules.
2. Fully self-consistent: All model concepts defined in the user layer are expressed explicitly in the abstraction layer so that a neural network model can be unambiguously implemented by software that fully supports NineML.
3. Highly extensible: Future extensions were taken into account in the development of NineML. Hence, specific model features that are not supported in the current version of NineML can easily be added in a later version without any major revision to the specification of the language.
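To give a flavor of the layered approach described above, the following is a rough, hypothetical sketch of what an XML-based user-layer description might look like. The element names, attributes, namespace, and parameter values here are illustrative assumptions, not taken from the actual NineML specification; the real schema should be consulted for the authoritative syntax. The key point is that the user layer names populations and projections in biological terms, while the referenced components (e.g. the neuron model) are defined mathematically in the abstraction layer.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: element and attribute names are hypothetical
     and do not necessarily match the official NineML schema. -->
<NineML>
  <!-- A population of 100 neurons, parameterized in biological terms.
       The "IaFNeuron" component would be defined in the abstraction layer. -->
  <Population name="Excitatory">
    <Size>100</Size>
    <Cell>
      <Component name="IaFNeuron">
        <Property name="tau_m" units="ms">20.0</Property>
        <Property name="v_threshold" units="mV">-50.0</Property>
      </Component>
    </Cell>
  </Population>
  <!-- A recurrent projection; the connectivity rule is again a component
       whose meaning is fixed by the abstraction layer. -->
  <Projection name="Recurrent">
    <Source>Excitatory</Source>
    <Destination>Excitatory</Destination>
    <Connectivity>
      <Component name="AllToAll"/>
    </Connectivity>
  </Projection>
</NineML>
```

Because every component name used here would resolve to an explicit mathematical definition in the abstraction layer, a simulator supporting NineML could instantiate such a network without any ambiguity.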

In the talk I will also demonstrate NineML using several example models of neural networks. I will show what the description looks like in the different layers and how NineML solves some difficult problems. Using NineML, researchers can describe their neural network models in an unambiguous and simulator-independent way. Furthermore, the models can be reimplemented, and simulation results easily reproduced, by any simulator that fully supports NineML. We believe that this project will have a profound effect on the modeling community and will facilitate research in computational neuroscience.
