In 2007, the CIN started with 25 principal investigators as cluster applicants, as stipulated in the DFG call for bids. When the CIN cluster was approved, further scientists from a range of institutions were incorporated, making up the 48 'founding members' of the CIN. Since the beginning of 2014, the CIN has comprised more than 80 scientists in total. To become a member, a candidate applies to the steering committee, outlining his or her scientific profile and submitting a list of publications. The committee's decision is based purely on the scientific excellence of each candidate.
Dr. Jakob Macke
Organization: Max Planck Institute for Biological Cybernetics / Bernstein Centre for Computational Neuroscience
Phone number: +49 (0)7071 601-1721
Department: Neural Computation and Behaviour
Area: CIN Members
Scientific topic: Neural Computation and Behaviour
Field of Research
One of the fundamental questions in neuroscience is how cell activity in the brain relates to external stimuli and observable behaviour. However, neither neural activity nor behaviour is strictly determined by sensory input alone; both are strongly modulated by internal cognitive states and endogenously generated dynamics. Because cortical activity depends on internal states, an understanding of the interaction between sensory-evoked responses and internal dynamics is essential for understanding information processing in the brain. The central goal of our research program is to gain a better understanding of how internal states shape externally evoked activity and observed behaviour.
We tackle this problem by developing novel statistical methods for inferring internal states and internally generated dynamics from neural recordings, and we collaborate with experimental laboratories to apply these methods to the study of state-dependent information processing.
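To make the idea concrete, internal states are often formalised as latent variables that evolve over time and are observed only indirectly through neural activity. The sketch below is a hypothetical toy example, not code from the lab: it simulates a one-dimensional latent "internal state" driving a small population of neurons and recovers it with a standard Kalman filter. All parameter values and variable names are invented for illustration.

```python
import numpy as np

# Toy latent state-space model (all parameters invented):
#   x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)   (internal dynamics)
#   y_t = C x_t + v_t,        v_t ~ N(0, R)   (observed population activity)
rng = np.random.default_rng(0)
T, n_neurons = 200, 5
a, q = 0.95, 0.1                      # slow, persistent latent dynamics
C = rng.normal(size=(n_neurons, 1))   # how strongly each neuron reads the state
R = 0.5 * np.eye(n_neurons)           # private observation noise

# Simulate the latent state and the "recorded" activity
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
Y = x[:, None] @ C.T + rng.multivariate_normal(np.zeros(n_neurons), R, size=T)

# Kalman filter: posterior mean of x_t given observations y_1..y_t
x_hat, P = 0.0, 1.0
means = np.empty(T)
for t in range(T):
    x_pred, P_pred = a * x_hat, a * a * P + q          # predict step
    S = C @ C.T * P_pred + R                           # innovation covariance
    K = P_pred * C.T @ np.linalg.inv(S)                # Kalman gain (1 x n)
    x_hat = x_pred + (K @ (Y[t] - C.flatten() * x_pred)).item()  # update step
    P = P_pred - (K @ C).item() * P_pred
    means[t] = x_hat

# The filtered estimate tracks the true (unobserved) internal state
corr = np.corrcoef(means, x)[0, 1]
print(round(corr, 2))
```

The same logic extends to the richer settings described above (multi-dimensional states, spiking observations), where inference is no longer available in closed form and approximate methods are needed.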
Importantly, our models combine prior knowledge about neural connectivity and response properties with the observed experimental data and thus lead to more realistic and biologically plausible descriptions of neural dynamics. Additionally, our methodology allows us to link measurements of single-neuron activity with aggregate measures of population activity such as local field potentials or electroencephalography. Lastly, our models yield quantitative predictions of how targeted stimulation affects neural dynamics and behaviour.
The methodology allows us to tackle two challenging open questions about state-dependent neural coding: first, we investigate how the feature selectivity of neurons in primary visual cortex depends on contextual influences from the local neural population in which they are embedded. Second, we relate the spiking activity of neural populations in motor cortex to simultaneously measured mesoscopic brain signals and observed motor actions, and investigate how much of their observed variability can be accounted for by endogenous dynamics.
Research Methodology: Bayesian Modelling and Machine Learning
Modern neuroscientific techniques make it possible to monitor neural population activity across a wide range of spatial and temporal scales, thereby allowing more realistic views of information processing in the brain. However, understanding the rich data available from multi-cell recordings is a challenging task that requires appropriate statistical tools: data sets in neuroscience are often high-dimensional and exhibit rich and structured variability, complex statistical dependencies as well as nonstationarity across time. One of the major goals of our research is to develop statistical techniques for modelling and analysing the complex data generated by neuroscientific studies.
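As a toy illustration of the kind of structure such tools exploit (an assumed example, not the group's analysis pipeline): high-dimensional population recordings often reflect a small number of shared signals plus private noise, which even a simple dimensionality reduction such as PCA can reveal. All sizes and parameters below are invented.

```python
import numpy as np

# Simulated "population recording": 50 neurons driven by 3 shared latent
# signals plus private noise (hypothetical numbers, for illustration only).
rng = np.random.default_rng(2)
T, n_neurons, n_latent = 1000, 50, 3
latents = rng.normal(size=(T, n_latent))            # shared signals
loadings = rng.normal(size=(n_latent, n_neurons))   # neuron-specific weights
noise = 0.3 * rng.normal(size=(T, n_neurons))       # private variability
Y = latents @ loadings + noise

# PCA via the SVD: squared singular values of the centred data give the
# variance captured by each principal component.
Yc = Y - Y.mean(axis=0)
s = np.linalg.svd(Yc, compute_uv=False)
var_explained = s**2 / np.sum(s**2)

# The first n_latent components capture most of the variance, exposing the
# low-dimensional shared structure hidden in the 50-dimensional recording.
print(round(float(var_explained[:n_latent].sum()), 2))
```

Real recordings are of course far less clean, with the structured variability and nonstationarity mentioned above, which is precisely why more tailored statistical models are needed.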
To this end, we use principles and techniques from Bayesian inference to formulate statistical models of neural data that reflect prior knowledge about the properties of neural circuits. Bayesian inference also provides powerful methods for avoiding over-fitting (i.e. learning overly complex models), which is of particular importance in neuroscience, as data sets are typically small. Finally, Bayesian inference provides methods for quantifying and visualizing how well the model is constrained by the data, and which properties of the model remain uncertain. We use optimization methods from the field of machine learning to learn the parameters of these models from data.
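A minimal sketch of these ideas, using Bayesian linear regression as a stand-in for the richer models described above (all data and parameter values are invented): a Gaussian prior on the weights regularises the fit on a small data set, and the posterior covariance reports how well each parameter is constrained by the data.

```python
import numpy as np

# Hypothetical small-data regime: 15 samples, 10 features, only two of
# which are actually informative (numbers invented for illustration).
rng = np.random.default_rng(1)
n, d = 15, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:2] = [2.0, -1.0]
y = X @ w_true + rng.normal(scale=0.5, size=n)

alpha, sigma2 = 1.0, 0.25   # prior precision on weights, observation noise

# Posterior over the weights, p(w | y) = N(mu, Sigma), with
#   Sigma = (X^T X / sigma2 + alpha I)^-1,   mu = Sigma X^T y / sigma2.
Sigma = np.linalg.inv(X.T @ X / sigma2 + alpha * np.eye(d))
mu = Sigma @ X.T @ y / sigma2

# Posterior mean of the two informative weights (should sit near the truth)
print(mu[:2].round(1))
# Posterior standard deviations quantify the remaining uncertainty per
# weight; they are always tighter than the prior standard deviation.
post_sd = np.sqrt(np.diag(Sigma))
print(bool(post_sd.max() < np.sqrt(1.0 / alpha)))
```

The prior here plays the role described in the text: it shrinks poorly constrained parameters toward zero instead of letting the small data set dictate an overly complex fit, and the posterior covariance makes the residual uncertainty explicit.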
We aim to develop methods which have a thorough theoretical grounding, and which work robustly and efficiently on realistic data sets. Our research is at the interface between machine learning and computational neuroscience, and we enjoy interacting closely with researchers from both fields.
Selected Publications
Macke J.H., Büsing L., Cunningham J.P., Yu B.M., Shenoy K.V. and Sahani M. (2011), Empirical models of spiking in neural populations. Advances in Neural Information Processing Systems (NIPS), 1691-1699.
Macke J.H., Opper M. and Bethge M. (2011), Common input explains higher-order correlations and entropy in a simple model of neural population activity. Physical Review Letters 106, 208102.
Macke J.H., Gerwinn S., White L., Kaschube M. and Bethge M. (2011), Gaussian process methods for estimating cortical maps. Neuroimage 56(2), 570-581.
Gerwinn S., Macke J.H., Seeger M. and Bethge M. (2007), Bayesian Inference for Spiking Neuron Models with a Sparsity Prior. Advances in Neural Information Processing Systems (NIPS), 529-536.
Macke J.H., Berens P., Ecker A.S., Tolias A.S. and Bethge M. (2009), Generating Spike Trains with Specified Correlation Coefficients. Neural Computation 21(2), 397-423.