• Subsampling in complex systems

        Understanding how sampling changes inference about system properties

        Inferring global properties of a system from observations is a challenge, even if one can observe the whole system. The same task becomes even more challenging if one can sample only a small number of units at a time (spatial subsampling). For example, when recording from a brain area, the spiking activity of only a very small fraction of all neurons can be accessed with millisecond precision. To still infer global properties, it is necessary to extrapolate from this small sampled fraction to the full system.

        Spatial subsampling affects inferences not only in neuroscience but in many different systems: in disease outbreaks, a fraction of cases typically remains unreported, hindering correct inference of the true disease impact. Likewise, in gene regulatory networks, a fraction of nodes typically remains unknown. Similarly, when evaluating social network data, the datasets are often so large that, because of computational constraints, only a subset is analyzed. Obviously, subsampling does not affect our inferences about properties of a single observed unit, such as the firing rate of a neuron. However, inferences are often biased when assessing aggregated properties, such as distributions of node degrees, ensemble sizes, or events in a time window. This includes distributions of, for example, the number of diseased people in an outbreak, the size of an avalanche in a critical system, or the number of simultaneously active neurons.
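
        As a minimal toy illustration (all numbers below are my own illustrative choices, not values from any dataset), consider a correlated binary population: the rate of a single observed unit is unaffected by subsampling, while the distribution of the number of simultaneously active units changes shape and is not recovered by naive rescaling:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        # Toy correlated population: N binary units observed over T time bins.
        N, T, n_obs = 1000, 20_000, 100

        drive = rng.uniform(0.0, 0.2, size=T)            # shared excitability per bin
        activity = rng.random((T, N)) < drive[:, None]   # unit j active in bin t

        sampled = activity[:, :n_obs]   # spatial subsampling: see 100 of 1000 units

        # Single-unit property: the rate of an observed unit is unaffected.
        print("rate of unit 0:", activity[:, 0].mean())  # same inside `sampled`

        # Aggregate property: number of simultaneously active units.
        full = activity.sum(axis=1)
        naive = sampled.sum(axis=1) * (N / n_obs)        # naive rescaling
        print("means    :", full.mean(), naive.mean())   # means agree ...
        print("variances:", full.var(), naive.var())     # ... distributions do not
        ```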

        As a first step, in a joint project with Viola Priesemann (MPI Göttingen), I studied how the avalanche size distribution changes under subsampling [1]. We analytically derived a novel subsampling scaling that applies to different classes of observables and demonstrated how to infer the distributions of the full system. We applied subsampling scaling to avalanche models and to recordings from developing neural networks and demonstrated that it can be used to distinguish critical from subcritical systems.
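
        The following is a hedged sketch of how subsampling acts on avalanche sizes, using a generic Galton-Watson branching process and binomial thinning of observed spikes; this is an illustrative setup of my own, not the models or data analyzed in [1]:

        ```python
        import numpy as np

        rng = np.random.default_rng(1)

        def avalanche_size(m, max_size=100_000):
            """Total size of one avalanche in a Galton-Watson branching
            process with Poisson offspring of mean m (m = 1 is critical)."""
            size = active = 1
            while active > 0 and size < max_size:
                active = rng.poisson(m * active)
                size += active
            return size

        p_obs = 0.1   # fraction of units (hence spikes) that are observed
        sizes = np.array([avalanche_size(0.98) for _ in range(20_000)])

        # Random subsampling: each spike is seen with probability p_obs,
        # so the observed size is a binomial thinning of the true size.
        observed = rng.binomial(sizes, p_obs)

        for s in (1, 2, 5, 10):
            print(f"P(size = {s}): full {np.mean(sizes == s):.3f}, "
                  f"subsampled {np.mean(observed == s):.3f}")
        # Naive rescaling of observed sizes by 1/p_obs does not recover the
        # full distribution; subsampling scaling [1] provides the collapse.
        ```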

        [1] A. Levina and V. Priesemann. Subsampling scaling. Nature Communications, 8:15140, 2017.

      • Critical avalanches and optimality

        Exploring whether closeness to criticality makes neural computation and learning optimal

        The idea that closeness to criticality might be important for computation was first proposed in the 1980s by scientists who studied how complexity and computation can arise in systems of simple interacting elements [1]. The modern formulation was put forward in the early 1990s in studies of computation at the “edge of chaos”, the critical state at the transition between stable and chaotic dynamics [2].

        It would be natural to imagine that the brain, as a system of relatively simple interconnected units, would also profit from staying close to the critical state. However, the first experimental support for this hypothesis was obtained only in 2003, in experiments on cortical cultures [3]. In this study, researchers measured epochs of activity propagation on a multi-electrode array and observed power-law statistics of these epochs, which were named neuronal avalanches in analogy to cellular automata. Since then, neuronal avalanches have been found in many experimental settings and systems, including in vivo recordings from monkeys and even from humans.

        At the time of the first neuronal avalanche experiments, the existing models of self-organized criticality (SOC) were unable to satisfy the plausibility requirements for neuronal systems. I started my research by developing a model of a SOC neuronal network with activity-dependent synapses. I demonstrated that short-term synaptic depression increases the length of the critical parameter interval and enriches the dynamics of the network, leading to an additional phase transition [4, 5].
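
        As a rough illustration of the self-organization mechanism, here is a deliberately simplified sketch with a single shared synaptic strength and purely illustrative parameters, not the exact model of [4]:

        ```python
        import numpy as np

        rng = np.random.default_rng(2)

        # Schematic sketch: N non-leaky integrate-and-fire units with a single
        # shared synaptic strength J under short-term depression and recovery.
        N, theta = 300, 1.0
        J_max, u_dep, tau_rec = 1.3, 0.2, 50.0   # illustrative parameters
        h = rng.uniform(0, theta, N)             # membrane potentials
        J = J_max
        sizes = []

        for _ in range(30_000):
            J += (J_max - J) / tau_rec                   # slow synaptic recovery
            h[rng.integers(N)] += rng.uniform(0, theta)  # random external drive
            size = 0
            while np.any(h >= theta):                    # propagate one avalanche
                spikers = np.where(h >= theta)[0]
                size += len(spikers)
                h[spikers] -= theta                      # reset by threshold
                h += len(spikers) * J / N                # each spike excites all
                J *= (1.0 - u_dep) ** len(spikers)       # depression uses resources
            if size:
                sizes.append(size)

        # Depression lowers J after large avalanches; recovery raises it in quiet
        # periods, so the effective coupling hovers around the critical value.
        print("mean avalanche size:", np.mean(sizes))
        ```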

        We previously showed in a model that it is technically possible to combine criticality with classical learning [6], but so far no optimality of learning has been associated with the critical state.

        I now plan to directly test how criticality is related to optimality and learning in real systems.


        Literature:

        [1] S. Wolfram. Universality and complexity in cellular automata. Physica D: Nonlinear Phenomena, 10(1):1–35, 1984.

        [2] C. G. Langton. Computation at the edge of chaos: phase transitions and emergent computation. Physica D: Nonlinear Phenomena, 42(1):12–37, 1990.

        [3] J. Beggs and D. Plenz. Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35):11167–11177, 2003.

        [4] A. Levina, J. M. Herrmann, and T. Geisel. Dynamical synapses causing self-organized criticality in neural networks. Nature Physics, 3:857–860, 2007.

        [5] A. Levina, J. M. Herrmann, and T. Geisel. Phase transitions towards criticality in a neural system with adaptive interactions. Physical Review Letters, 102(11):118110, 2009.

        [6] M. Uhlig, A. Levina, T. Geisel, and J. M. Herrmann. Critical dynamics in associative memory networks. Frontiers in Computational Neuroscience, 7, 2013.


      • Self-organization of neuronal networks

        Emergence of structures in balanced networks by synaptic plasticity

        Structural inhomogeneities in synaptic efficacies have a strong impact on the population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Together with F. Effenberger and J. Jost, I studied an adaptive model of a balanced neuronal network that combines two different types of plasticity: spike-timing-dependent plasticity (STDP) and synaptic scaling. We showed that these plasticity rules yield long-tailed distributions of both synaptic weights and firing rates and that, simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges.
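
        As a rough illustration of how the two rules interact, the sketch below combines pair-based STDP with multiplicative synaptic scaling; the rule forms and all parameters are simplified choices of mine, not those of the paper:

        ```python
        import numpy as np

        rng = np.random.default_rng(3)

        # Illustrative parameters -- my choices, not those of the paper.
        A_plus, A_minus, tau = 0.01, 0.012, 20.0   # STDP amplitudes, time const (ms)
        w_target = 1.0                             # target summed input per neuron

        def stdp(w, dt):
            """Pair-based STDP, dt = t_post - t_pre in ms."""
            if dt > 0:   # pre before post -> potentiation
                return w + A_plus * np.exp(-dt / tau)
            return w - A_minus * np.exp(dt / tau)  # post before pre -> depression

        def scale(W):
            """Multiplicative synaptic scaling: keep each neuron's total
            incoming weight (row sum) at w_target."""
            return W * (w_target / np.maximum(W.sum(axis=1, keepdims=True), 1e-12))

        # Tiny demo: random weights, random spike-time differences.
        W = rng.uniform(0.0, 0.1, size=(5, 5))
        for _ in range(1000):
            post, pre = rng.integers(5, size=2)
            W[post, pre] = max(stdp(W[post, pre], rng.normal(0.0, tau)), 0.0)
            W = scale(W)

        print(W.round(3))   # competition under scaling skews the weights
        ```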

        Many open questions remain, ranging from the scalability of our approach and the implementation of a topographic initial network structure to the possibility of guiding the reorganization of the network to perform specific tasks.


        Literature:

        F. Effenberger, J. Jost, and A. Levina. Self-organization in balanced state networks by STDP and homeostatic plasticity. PLoS Computational Biology, 11(9):e1004420, 2015.

      • Mathematical understanding of neuronal avalanches

        Building mathematical models to understand the basics of criticality

        I have been fascinated by the mathematics behind neuronal avalanches and self-organized criticality for a long time. I started my research in this field by translating a model of neuronal avalanches, which exhibits critical and non-critical behavior very similar to the experimentally observed ones, into rigorous mathematical language. To this end, I introduced an avalanche transformation and showed that it may be viewed as a random dynamical system, a measure-preserving finitely generated group action, and a skew-product transformation. I also studied the properties of the resulting avalanche size distribution, which I called the Abelian distribution because of the Abelian sums leading to it.
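
        In code, such an avalanche transformation can be sketched on a simple fully connected threshold model; the construction below is my schematic reading, not the exact transformation of [1], and it illustrates the Abelian property that gives the distribution its name:

        ```python
        import numpy as np

        rng = np.random.default_rng(4)

        # Schematic avalanche transformation on a fully connected threshold
        # model: a unit with u_i >= 1 "topples", losing 1 and sending
        # alpha/N to every other unit (alpha < 1 makes the map dissipative).
        N, alpha = 50, 0.95

        def avalanche(u, order):
            """Relax u by toppling unstable units in the given priority order."""
            u = u.copy()
            size = 0
            while True:
                unstable = [i for i in order if u[i] >= 1.0]
                if not unstable:
                    return u, size
                i = unstable[0]
                u[i] -= 1.0 + alpha / N   # toppling unit keeps none of its share
                u += alpha / N            # uniform redistribution to all units
                size += 1

        u0 = rng.uniform(0, 1, N)
        u0[rng.integers(N)] += 1.0        # external drive triggers an avalanche

        # Abelian property: the avalanche size and the final configuration do
        # not depend on the order in which unstable units are toppled.
        u_a, s_a = avalanche(u0, list(range(N)))
        u_b, s_b = avalanche(u0, list(rng.permutation(N)))
        print(s_a == s_b, np.allclose(u_a, u_b))
        ```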

        In the further development of mathematical models of neuronal avalanches, I collaborate with Fatemeh Mohammadi (University of Bristol) and Anirban Das (Penn State).


        Literature:

        [1] M. Denker and A. Levina. Avalanche dynamics. Stochastics and Dynamics, 16(2):1660005, 2016.

        [2] A. Levina and J. M. Herrmann. The Abelian distribution. Stochastics and Dynamics, 14(3):1450001, 2014.