What do the human brain, biological networks, developing organisms, the immune system, flocking birds, AI systems and quantum devices have in common? All these systems, while vastly different in their appearances, spatial and temporal scales, and functions, unfold their dynamics in high-dimensional state spaces. At the same time, external stimuli can condense these complex spaces onto a dramatically smaller subset of states that represent targeted responses to these inputs. Most intriguingly, this subspace may emerge from low-order, often linear interactions and superpositions, which appear to balance the potential of generating arbitrary solutions against processes that select apt solutions from among all physically possible ones. Ultimately, this enables the emergence of new and meaningful system states seemingly out of nothing. Is this the basis of knowledge, cognition, and information processing in general? And if so, can we identify common principles and relevant differences by comparing such systems across different realms and disciplines? This conference will bring together experts from neuroscience, physics, molecular and developmental biology, simulation and machine learning, and quantum computing and sensing. It will explore fundamental commonalities across systems from these distinct fields, while at the same time highlighting their peculiarities and how material and temporal constraints may have shaped them.
Chair: Franziska Matthäus
A fundamental characteristic of living systems is sensing and integrating multi-dimensional sensory signals with memory in order to generate complex self-organized behaviors in continuously changing environments. Using computations on the level of signaling networks in single cells, we have identified that cells utilize dynamical ghost states as a memory-generating mechanism in order to integrate information from time-varying signals, and verified experimentally that ghost states are an emergent feature of cell-surface receptor networks organized at criticality. I will discuss the development of a theoretical framework for biological computation with ghost states, and explore to what extent we can extend the findings from signaling networks in single cells to computations performed by neuronal networks in general.
The development of multicellular organisms is a dynamic process in which cells divide, rearrange, and interpret molecular signals to adopt specific cell fates. This results in the emergence of gene expression patterns that later give rise to different body parts and organs. We still lack a full understanding of how these patterns emerge in a precise and reproducible way during embryonic development. In this talk I will present theoretical and computational strategies to study pattern formation in the developing spinal cord.
Chair: Thomas Sokolowski
The way organismic agents come to know the world, and the way algorithms solve problems, are fundamentally different. The most sensible course of action for an organism does not simply follow from logical rules of inference. Before it can even use such rules, the organism must tackle the problem of relevance. It must turn ill-defined problems into well-defined ones, turn semantics into syntax. This ability to realize relevance is present in all organisms, from bacteria to humans. It lies at the root of organismic agency, which arises from the autopoietic organization of living beings. In this talk, I will argue that cognition is an evolutionary elaboration of such basic organismic agency, with the process of relevance realization at its heart. This process is beyond formalization: it is not amenable to algorithmic solutions. This implies that cognition is not computational in nature. Instead, I show how relevance is realized by self-manufacturing dynamics that span several levels of organization. To be alive means to generate one's own meaning. This ability is a fundamental aspect of life, and a key characteristic that sets living systems apart from non-living matter.
Molecular components of cells must communicate with each other through physical mechanisms that necessarily consume energy [1]; for example, ion channels communicate electrically, by modulating ionic currents which are sensed, via the resulting charge accumulation at the membrane, by distant voltage-gated channels. I will first argue in general that powering such communication must incur large costs to overcome thermal fluctuations, likely dominating the information processing energy budget of cells. I will then report on progress towards understanding the energetic requirements of running a specific sensory system – neurons in the pit organ of certain snakes which sense tiny changes in temperature. In this system, individual thermally sensitive ion channels are molecular thermometers, whose opening is triggered by a roughly 1 K change in temperature. However, individual neurons can respond reliably to mK temperature changes, 1000-fold smaller. I will briefly explain our model for how this signal amplification and information integration works mechanistically [2], requiring individual channels to communicate with each other electrically. While some of the details of this system are specific to the task of high-precision thermometry, others are likely general, and we seek design principles for the scale of single-channel currents, the sensitivity of voltage-gated channels, and the typical density of voltage-gated channels in neurons.
[1] S.J. Bryant, B.B. Machta. Physical constraints in intracellular signaling: the cost of sending a bit. Phys. Rev. Lett., 2023
[2] I. Graf, B.B. Machta. A bifurcation integrates information from many noisy ion channels. arXiv:2305.05647, 2023
Chair: Matthias Kaschube
Brain network simulations enable us to understand how different entities in the brain interact to generate function. Moreover, such computational model simulations provide a means of understanding the principles of cognition and the causes of performance variability across individuals. Importantly, personalized brain network avatars hold potential for a multitude of clinical applications: improving diagnostics, in silico testing of interventions, and inferring disease mechanisms.
I will show, using examples, that many neural circuits and computational algorithms of the brain perform efficiently amid severe resource constraints. By extension, I will argue that the processes we call cognition and learning are only needed because of these limitations: circuits of the brain must adaptively infer minimal summaries, syntheses and approximations of the world. These representations compress the information from the world in a way that permits a bounded computational engine to predict the future and decide on appropriate behavior. I will present evidence that such inference processes in humans satisfy a principle of parsimony, akin to Occam's Razor, where subjects favor simple but sufficient mental models for tasks.
Chair: Eckhard Elsen
Chair: Marcin Zagórski
Adaptation is a recurring theme in biology, offering vital survival mechanisms in dynamic environments through precise regulation of physiological variables. This talk dives into the intriguing concept of robust perfect adaptation (RPA), a phenomenon where a system maintains a specific variable at a setpoint despite persistent perturbations. The objective of this talk is to explore the fundamental problem of achieving maximal RPA, focusing on a designated output variable and its robustness to perturbations across almost all network parameters. I will elucidate how RPA imposes critical structural constraints on the underlying networks, characterized by simple linear algebraic conditions. These conditions provide insights into the diverse ways biomolecular integral feedback mechanisms can be realized. Building on these insights, I will introduce a novel internal model principle (IMP) tailored for biomolecular networks, akin to the celebrated IMP in control theory. Throughout the presentation, I will relate these theoretical developments to the practical implementation of RPA-achieving controllers and their applications. Specifically, I will discuss the implementation of genetically engineered synthetic integral feedback controllers within living cells and showcase their tunability and adaptation properties. Furthermore, I will highlight the relevance of these genetic control systems in the context of cell-based therapies.
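To make the adaptation property concrete, here is a minimal simulation sketch of one well-known realization of biomolecular integral feedback, the antithetic controller of Briat, Gupta and Khammash (2016); all parameter values are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch of robust perfect adaptation via antithetic integral feedback
# (Briat, Gupta & Khammash, 2016). All parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

mu, theta, eta = 2.0, 1.0, 100.0      # controller; setpoint = mu/theta = 2
k = 1.0                               # actuation gain of the plant
gamma = 0.5                           # plant degradation rate (perturbed below)

def rhs(t, y):
    x, z1, z2 = y
    g = gamma * (2.0 if t > 100 else 1.0)   # persistent perturbation at t = 100
    return [k * z1 - g * x,                 # plant, actuated by Z1
            mu - eta * z1 * z2,             # Z1: constant production + annihilation
            theta * x - eta * z1 * z2]      # Z2: senses the output + annihilation

sol = solve_ivp(rhs, (0, 400), [0.0, 0.1, 0.1], max_step=0.05)
print(sol.y[0, -1])   # ~2.0: output returns to mu/theta despite the perturbation
```

At steady state, subtracting the two controller equations forces mu − theta·x = 0, so the output returns to the setpoint mu/theta regardless of the plant parameters — the hallmark of RPA.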
Most natural and engineered information-processing systems transmit information via signals that vary in time. Computing the information transmission rate, or the information encoded in the temporal characteristics of these signals, requires the mutual information between the input and output signals as a function of time, i.e., between the input and output trajectories. Yet this is notoriously difficult because of the high-dimensional nature of the trajectory space, and all existing techniques require approximations. We present an exact Monte Carlo technique called Path Weight Sampling (PWS) that, for the first time, makes it possible to compute the mutual information between input and output trajectories for any stochastic model. The principal idea is to evaluate the exact conditional probability of an individual output trajectory for a given input trajectory on the fly, and average this via Monte Carlo sampling in trajectory space to obtain the mutual information. Applying PWS to the bacterial chemotaxis system, consisting of 182 coupled chemical reactions, demonstrates not only that the scheme is highly efficient, but also that the number of receptor clusters is much smaller than hitherto believed, while their size is much larger.
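The core idea can be conveyed in a few lines. The sketch below — a toy Gaussian channel of my own choosing, not the chemotaxis model — evaluates the exact conditional path weight P(x|s), estimates the marginal P(x) by averaging the conditional weight over independently sampled input trajectories, and combines them into a Monte Carlo estimate of the trajectory mutual information.

```python
# Minimal sketch of the idea behind Path Weight Sampling: estimate the
# trajectory mutual information I(S;X) = < ln P(x|s) - ln P(x) >, with the
# marginal P(x) obtained by Monte Carlo averaging P(x|s') over fresh inputs.
# Toy model (an assumption, not the paper's system): s is a binary telegraph
# process, x_t = s_t + Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
T, sigma, p_flip = 50, 0.6, 0.1

def sample_input():
    s = np.empty(T)
    s[0] = rng.integers(2)
    for t in range(1, T):
        s[t] = 1 - s[t-1] if rng.random() < p_flip else s[t-1]
    return s

def log_p_x_given_s(x, s):          # exact conditional path weight
    return -0.5 * np.sum((x - s)**2) / sigma**2 - T * np.log(sigma * np.sqrt(2*np.pi))

n_pairs, n_marginal = 200, 500
mi = 0.0
for _ in range(n_pairs):
    s = sample_input()
    x = s + sigma * rng.standard_normal(T)
    log_cond = log_p_x_given_s(x, s)
    # marginal: average the conditional weight over independent input trajectories
    logs = np.array([log_p_x_given_s(x, sample_input()) for _ in range(n_marginal)])
    log_marg = np.logaddexp.reduce(logs) - np.log(n_marginal)
    mi += (log_cond - log_marg) / n_pairs
print(f"estimated I(S;X) ~ {mi:.2f} nats")
```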
Chair: Roberto Covino
Deep Neural Networks (DNNs) have excelled in many fields, largely due to their proficiency in supervised learning tasks. However, the dependence on vast amounts of labeled data becomes a constraint when such data is scarce. Self-Supervised Learning (SSL), a promising approach, harnesses unlabeled data to derive meaningful representations. Yet how SSL filters out irrelevant information without explicit labels remains unclear. In this talk, we aim to unravel the enigma of SSL through the lens of Information Theory, with a spotlight on the Information Bottleneck principle. This principle, while providing a sound understanding of the balance between compressing and preserving relevant features in supervised learning, presents a puzzle when applied to SSL due to the absence of labels during training. We will delve into the concept of 'optimal representation' in SSL, its relationship with data augmentations, optimization methods, and downstream tasks, and how SSL training learns and achieves optimal representations. Our discussion unveils our pioneering discoveries, demonstrating how SSL training naturally leads to the creation of optimal, compact representations that correlate with semantic labels. Remarkably, SSL seems to orchestrate an alignment of learned representations with semantic classes across multiple hierarchical levels, an alignment that intensifies during training and grows more defined deeper into the network. Considering these insights and their implications for classification performance, we conclude our talk by applying our analysis to devise more robust, information-theoretically grounded SSL algorithms. These enhancements in transfer learning could lead to more efficient learning systems, particularly in data-scarce environments. Joint work with Yann LeCun, Ido Ben Shaul, and Tomer Galanti.
In the machine learning community, structured representations have proven hugely beneficial for efficient learning from limited data and for generalization far beyond the training set. Examples of such structured representations include the spatially organized feature maps of convolutional neural networks, and the group-structured activations of other equivariant models. To date, however, the integration of such structured representations with deep neural networks has been limited to explicitly geometric transformations (such as spatial translation or rotation) known a priori to model developers. In the real world, we know that natural intelligence is able to efficiently handle novel transformations and flexibly generalize in a manner reminiscent of these structured artificial models, but crucially with respect to a broader class of non-geometric transformations. In this talk, we investigate how naturally intelligent systems might accomplish this through what we denote 'fluid representations'. Specifically, we introduce a notion of generalized equivariance based on local reference frames at each point in representation space, and show how this novel type of structure can be induced in artificial neural networks through inductive biases originating from fluid dynamics and inspired by observed traveling waves in the brain. We show empirically that such models indeed learn 'approximately equivariant' representations, similar to their explicitly geometrically structured counterparts, but in a much more flexible manner, where structure is learned directly from the data itself. We show that this structure both improves artificial neural networks and helps explain observations from neuroscience itself.
Chair: Horst Stöcker
Information theory guides the design of information processing systems. It permits the construction of optimal communication channels between systems to sense, process, and represent information. A key concept thereby is relative entropy, quantifying the amount of information lost in a transmission. Relative entropy has, however, no sense of relevance: a bit of information on an irrelevant question weighs as much as one on a relevant one. To introduce weighting into communication and reasoning, relative attention entropy is introduced here, which can be derived axiomatically. Attention entropy can ensure the preferential communication of relevant information. Furthermore, new relative-entropy-based algorithms for the inference of large numbers of model parameters are presented: Metric Gaussian Variational Inference (MGVI) and geometric Variational Inference (geoVI). Their application to problems of field inference with millions to billions of unknown parameters is illustrated by recent astrophysical applications, ranging from the gamma-ray sky to black-hole imaging. These algorithms also permit the combination of independent pre-trained neural networks into reasoning systems for questions that none of their constituents could answer alone.
I will describe recent advances at the interface of physics-inspired AI, advanced computing, and automated workflows, and how these novel and complementary approaches are pushing the frontiers of knowledge and elevating human insight across disciplines.
An accurate description of matter under different conditions is given by the so-called ‘Equation of State’ (EoS). Currently, the EoS of matter under extremely high temperatures and densities is poorly understood, and remains a major challenge in the field of nuclear astrophysics. Neutron stars (NSs) harbor such extreme conditions and therefore serve as celestial laboratories for constraining the dense-matter EoS. The past decade has witnessed great progress in research on these objects. The structure of a neutron star is governed by the underlying EoS and the theory of gravity. The equations describing this structure are the Tolman–Oppenheimer–Volkoff (TOV) equations, which arise from General Relativity. These equations provide a one-to-one mapping from a given theoretical EoS model to the corresponding NS masses and radii. Therefore, observations of NS masses and radii are crucial to extract the underlying EoS. With the recent advancements in measurements of NS properties, it is now possible to expand our knowledge of the dense-matter EoS by solving the inverse problem. We present a novel physics-based deep learning method that exploits the concept of Automatic Differentiation to reconstruct the NS EoS from mass-radius (M-R) observations. A primary neural network (NN) (EoS Network) is deployed to represent the EoS in a model-independent way. A second NN (TOV-Solver Network) is trained to solve the TOV equations efficiently. The EoS Network is then combined with the pre-trained TOV-Solver Network, and a gradient-based approach is implemented to optimize the weights of the EoS Network in an unsupervised manner. Thus, the designed pipeline is trained to optimize the EoS so as to yield, through the TOV equations, an M-R curve that best fits the observations. Importance Sampling (a Bayesian perspective) is adopted to evaluate the uncertainty of the NN reconstruction. The devised scheme can be implemented in a number of fields that face challenges with inverse problems. Related Articles: [1] JCAP 08 (2022) 071 (arXiv:2201.01756) [2] Phys.Rev.D 107 (2023) 8, 083028 (arXiv:2209.08883)
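A schematic of this two-network pipeline might look as follows; module names, sizes and data are placeholder assumptions, and the frozen MLP below merely stands in for the pre-trained, differentiable TOV-Solver Network of the cited papers.

```python
# Schematic sketch of the EoS-Network / TOV-Solver-Network pipeline.
# All shapes, names and data here are illustrative assumptions.
import torch
import torch.nn as nn

n_rho, n_stars = 64, 12                       # density grid size, observed stars

eos_net = nn.Sequential(                      # EoS Network: latent -> pressure(rho)
    nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, n_rho), nn.Softplus())
tov_net = nn.Sequential(                      # stand-in for the TOV-Solver Network
    nn.Linear(n_rho, 128), nn.Tanh(), nn.Linear(128, 2 * n_stars))
tov_net.requires_grad_(False)                 # frozen: only the EoS is optimized

z = torch.randn(1, 8)                         # fixed latent seed for the EoS net
mr_obs = torch.randn(1, 2 * n_stars)          # placeholder for real M-R data
opt = torch.optim.Adam(eos_net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    mr_pred = tov_net(eos_net(z))             # EoS -> M-R curve, differentiably
    loss = ((mr_pred - mr_obs) ** 2).mean()   # fit to mass-radius observations
    loss.backward()                           # autodiff through the TOV surrogate
    opt.step()
```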
The ionosphere, a dynamic region of the Earth's upper atmosphere, experiences rapid fluctuations in electron density. These small perturbations can severely impact the transmission of radio signals through the ionosphere, manifesting as scintillation, signal delays, power grid failures, loss of lock in GPS receivers, and other navigation issues. This work investigates the climatology of ionospheric irregularities, specifically the Rate of Total Electron Content Index (ROTI), in Nepal during solar cycle 24 to understand its spatio-temporal characteristics. For this study, dual-frequency GPS data were accessed from UNAVCO for two GPS stations in Nepal, CHLM (28.2072°N, 85.3141°E) and NPGJ (28.1172°N, 81.5953°E), from 2008 to 2018 to derive ROTI values and assess their variations at different temporal scales. The analysis of the ROTI variations for CHLM and NPGJ, along with sunspot numbers during 2008 to 2018, revealed a prominent peak in ROTI values ≥ 0.06 TECU/min in 2014 (solar maximum), implying that periods of high solar activity drive the occurrence of ionospheric irregularities in Nepal. Furthermore, daily, monthly and seasonal variations in ROTI during solar minimum (2008) and solar maximum (2014) showed similar patterns for both stations, with the highest ROTI values evident during March/April (spring equinox) and lower ROTI values during the summer solstice (June-July), indicating a high likelihood of ionospheric perturbations during equinox periods. Equinoctial asymmetry was also distinct for both stations during both phases of solar cycle 24. Similarly, we computed the correlation coefficient and R² value between the ROTI values of the two stations, found to be 0.69 and 0.47, respectively, across both phases of solar cycle 24. Keywords: Ionospheric Irregularities, Ionospheric Scintillations, Equatorial Plasma Bubbles, Rate of Total Electron Content Index (ROTI), Rayleigh-Taylor Instability.
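For reference, the standard ROTI definition used in such studies (Pi et al., 1997) takes the rate of change of TEC (ROT) in TECU/min and its standard deviation over 5-minute windows; a minimal sketch with synthetic 30-second TEC samples (the input data here are simulated, not the UNAVCO records):

```python
# Standard ROTI computation: ROT is the time derivative of slant TEC in
# TECU/min, ROTI its standard deviation over 5-minute windows.
import numpy as np

def roti(tec, dt_minutes=0.5, window=10):
    """tec: TEC samples [TECU]; dt_minutes: sampling interval; window: samples per 5 min."""
    rot = np.diff(tec) / dt_minutes                  # TECU/min
    n = (len(rot) // window) * window                # non-overlapping 5-min windows
    rot = rot[:n].reshape(-1, window)
    return np.sqrt(np.mean(rot**2, axis=1) - np.mean(rot, axis=1)**2)

# synthetic 24 h of 30-s TEC data as a stand-in for real receiver output
tec = 20 + np.cumsum(0.05 * np.random.default_rng(1).standard_normal(2880))
print(roti(tec)[:5])   # ROTI [TECU/min] per 5-minute interval
```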
Learning is a fundamental process in neuroscience, yet the intricate relationship between behavioural learning and the underlying mechanisms at the neuronal circuit level remains elusive. We aim to address this knowledge gap by investigating how fear conditioning influences neuronal activity and the associated neural network in the mouse auditory cortex. In this study, we examined signal and noise correlations in pairwise neuronal activity data collected during chronic imaging experiments in the mouse auditory cortex. Under basal conditions, where explicit learning is absent, we observed that signal correlations, reflecting the similarity of neuronal responses to sensory stimuli, consistently preceded noise correlations, which serve as a measure of effective connectivity. In other words, tuning similarity predicted effective connectivity. However, following auditory cued fear conditioning, this relationship was altered. Specifically, the predictive power of signal correlations on noise correlations decreased. Furthermore, employing a minimal network model, we found that this decrease in predictive power could be attributed to a reduction in synaptic learning rates following fear conditioning. This finding suggests that fear conditioning may act to decelerate the ongoing process of statistical learning, where new information continually overwrites old information. Our results propose that in the absence of behavioural learning, inputs to sensory cortex constantly overwrite the network structure. Fear learning appears to slow down this process, potentially facilitating the transfer of information to long-term memory storage.
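For readers unfamiliar with the two quantities, here is a minimal sketch of how signal and noise correlations are typically computed from a trials × stimuli × neurons response array; the data below are simulated placeholders, not the chronic-imaging recordings.

```python
# Minimal sketch: signal vs. noise correlations from trial-resolved responses.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_stim, n_neurons = 40, 8, 30
tuning = rng.standard_normal((n_stim, n_neurons))            # ground-truth tuning
resp = tuning + 0.5 * rng.standard_normal((n_trials, n_stim, n_neurons))

mean_resp = resp.mean(axis=0)            # tuning curves (stimuli x neurons)
residual = resp - mean_resp              # trial-to-trial fluctuations

sig_corr = np.corrcoef(mean_resp.T)      # signal correlations: tuning similarity
flat = residual.reshape(-1, n_neurons)   # pool residuals across stimuli
noise_corr = np.corrcoef(flat.T)         # noise correlations: co-fluctuations
```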
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this ‘nature-nurture transform’ using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
Dynamical descriptions of natural systems have generally focused on fixed points, with saddles and saddle-based phase-space objects such as heteroclinic channels/cycles being central concepts behind the emergence of quasi-stable long transients. The reliable and robust transient dynamics observed in real, inherently noisy systems are, however, not captured by saddle-based dynamics, as demonstrated here. Generalizing the notion of ghost states, we provide a complementary framework that does not rely on (un)stable fixed points, but rather on slow directed flows on ghost manifolds from which ghost channels and ghost cycles are generated. Moreover, we show that these novel objects are an emergent property of a broad class of models typically used for the description of natural systems.
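A one-dimensional caricature (my own toy example, not the generalized framework of the talk) illustrates the ghost phenomenon: just past a saddle-node bifurcation there is no fixed point at all, yet trajectories linger where the fixed points used to be, for a time scaling as π/√ε.

```python
# Ghost of a saddle-node: dx/dt = eps + x**2 with small eps > 0 has no fixed
# point, yet trajectories stall near x = 0 for a time ~ pi/sqrt(eps).
import numpy as np
from scipy.integrate import solve_ivp

def escape(t, x):            # stop once the trajectory leaves the ghost region
    return x[0] - 1.0
escape.terminal = True

for eps in (1e-2, 1e-4):
    sol = solve_ivp(lambda t, x: [eps + x[0]**2], (0, 1e5), [-1.0], events=escape)
    print(f"eps = {eps:.0e}: transit time {sol.t_events[0][0]:8.1f}"
          f"  (theory ~ pi/sqrt(eps) = {np.pi / np.sqrt(eps):8.1f})")
```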
The delicate balance necessary for ensuring reliable segregation of cell lineages is an intriguing problem in developmental biology. For mammals, and specifically for the early mouse embryo, cell fate decisions have been extensively researched, but the underlying mechanisms remain poorly understood. Current theoretical approaches to this problem still primarily rely on deterministic modeling, although stochasticity is an essential feature of this biological process. We are therefore developing a multi-scale, event-driven, spatial-stochastic simulator for emerging-tissue development. In this particular study, we focus on the mouse blastocyst-stage embryo. We construct an archetypical multicellular model system in order to understand how positional information is robustly achieved and preserved. To this end, we adapt well-known event-driven simulation schemes to incorporate suitable tissue-scale phenomena, and determine biophysically feasible parameter regimes via a recent simulation-based inference framework: a machine-learning-based approach to Bayesian inference that exploits artificial neural networks. We uncover a signaling mechanism for reliable pattern emergence and maintenance that is independent of system size. Importantly, our approach allows us to properly quantify the robustness of this patterning process under realistic noise constraints. Moreover, we elucidate the importance of auto- and paracrine signaling for proper cell fate specification. In the longer term, our efforts will lead to a versatile framework capable of performing realistic yet efficient simulations of intracellular biochemical dynamics and intercellular communication for a wide range of biological systems.
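As an illustration of the simulation-based inference step, here is a minimal sketch assuming the open-source `sbi` toolbox and its SNPE interface, with a trivial stand-in simulator; the actual embryo simulator and inference setup of this study are far richer.

```python
# Minimal neural simulation-based inference sketch (assumes the `sbi` package).
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))   # biophysical params

def simulator(theta):          # placeholder for the spatial-stochastic embryo model
    return theta + 0.05 * torch.randn_like(theta)

theta = prior.sample((2000,))
x = simulator(theta)

inference = SNPE(prior=prior)                        # neural posterior estimation
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_obs = torch.tensor([0.4, 0.7])                     # hypothetical observed summary
samples = posterior.sample((1000,), x=x_obs)         # posterior over parameters
```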
In ferret visual cortex, spontaneous activity prior to eye-opening is organized into large-scale, modular patterns in the absence of long-range horizontal projections. This correlated activity reveals endogenous networks that predict aspects of future orientation selectivity. Previous modelling work has shown that the long-range correlations observed in these networks can arise from purely locally connected neurons through multi-synaptic interactions. Here we seek to explicitly map the structure of cortical lateral interactions through localized perturbations in vivo.
We first constructed a non-linear recurrent neural network model of strongly coupled excitatory and inhibitory units with effective local heterogeneous Mexican-hat connections, finding that perturbing a small region of inhibitory neurons exerts a spatially extended influence on the pattern of ongoing activity. Notably, the influence is modular and partially similar in structure to spontaneous activity, and its strength depends on the stimulation location.
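A drastically reduced version of such a model can be sketched in a few lines; the 1-D ring, the parameter values, and the way the inhibitory activation is emulated as a negative local drive are all illustrative assumptions.

```python
# Minimal rate network with local "Mexican hat" connectivity (short-range
# excitation, longer-range inhibition) and a localized inhibitory perturbation.
import numpy as np

n = 128                                           # 1-D ring of cortical units
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, n - d)                          # distances on the ring
W = 2.0 * np.exp(-d**2 / (2 * 3.0**2)) - 1.0 * np.exp(-d**2 / (2 * 9.0**2))
rng = np.random.default_rng(0)
W *= 1 + 0.1 * rng.standard_normal(W.shape)       # mild heterogeneity

def settle(h, steps=2000, dt=0.1):
    r = np.zeros(n)
    for _ in range(steps):
        r += dt * (-r + np.tanh(W @ r + h))       # nonlinear rate dynamics
    return r

h = 0.05 * rng.standard_normal(n)                 # weak random drive
baseline = settle(h)
h_pert = h.copy()
h_pert[60:68] -= 0.5          # local inhibitory activation, as negative drive
perturbed = settle(h_pert)
print(np.corrcoef(baseline, perturbed)[0, 1])     # influence extends network-wide
```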
To test these predictions, we virally expressed GCaMP6s in excitatory neurons and Chrimson-ST in inhibitory neurons in layer 2/3 of young ferret visual cortex, allowing us to optogenetically activate small regions (~500μm diameter) of inhibitory neurons and simultaneously record widefield calcium activity.
In line with model predictions, local optogenetic inhibitory perturbations propagate through the entire network and induce a reorganization of activity even in areas up to 2 mm away from the stimulation site. The degree of disruption to the structure of correlations in activity depends on the perturbation location and can be predicted from the stimulation site's overlap with the leading spontaneous principal components (PCs). The variance in optogenetically perturbed activity patterns only partially overlaps with the spontaneous activity space, suggesting a different activity manifold under local disruption.
Our results are consistent with the presence of strongly coupled E and I networks in early cortex, and demonstrate that network behaviour is an emergent property, with local activity exerting specific and global influences.
Cortical activity patterns, such as those that arise in response to sensory input, depend on the recurrent interaction structure within the cortical network. In the early developing ferret visual cortex, cortical network activity is organized into modular, distributed patterns [Smith et al., 2018]. Computational models have shown that modular patterns can emerge from a lateral interaction motif of local excitation and lateral inhibition (LELI). The statistical features of modular activity patterns, especially their dimensionality and their long-range correlations, can be explained by assuming a mild degree of heterogeneity in the LELI interactions. However, the precise structure of this heterogeneity within the cortical network, and how it impacts distributed activity patterns, is unclear at present. Here, we studied within a recurrent network model how well we can estimate this interaction structure from samples of input to the network and the corresponding network output activity patterns. We modeled the heterogeneous LELI interaction by adding various shapes of heterogeneity to an isotropic and homogeneous LELI interaction, with the strengths of these heterogeneous components controlled by parameters d. We used a target model with predetermined heterogeneity to perform in silico experiments and generate input/output patterns. We then explored to what degree we can estimate this heterogeneity using a trainable model, by fitting its parameters d so as to reproduce the input/output mapping of the target model. To test the limits of this approach, we varied the strength and structure of the heterogeneity of the target model and assessed how the fits are affected as the strength of heterogeneity increases and as different types of heterogeneity are used. Our results indicate that, at least for these in silico experiments, a small number of input/output patterns already suffices to approximate the local interaction structure in recurrent networks that produce modular activity. This work suggests the possibility of applying this framework to in vivo experimental data to estimate the structure of network heterogeneity within developing cortical networks.
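In spirit, the fitting step can be sketched as follows, with a linearized steady-state response and randomly chosen heterogeneity "shapes" standing in for the actual LELI model; all names and values here are my own assumptions.

```python
# Sketch: recover heterogeneity strengths d of a linearized recurrent
# interaction from input/output pairs of a "target" network, by gradient
# descent on the output mismatch.
import torch

n, n_modes, n_samples = 64, 4, 200
torch.manual_seed(0)
W0 = -0.1 * torch.eye(n)                       # homogeneous, isotropic base interaction
basis = torch.randn(n_modes, n, n)             # fixed heterogeneity "shapes"
d_true = torch.tensor([0.3, -0.2, 0.1, 0.05])

def response(d, inputs):                       # steady state of tau dr/dt = -r + W r + h
    W = W0 + 0.02 * torch.einsum('k,kij->ij', d, basis)
    return torch.linalg.solve(torch.eye(n) - W, inputs.T).T

inputs = torch.randn(n_samples, n)             # in silico "experiments"
targets = response(d_true, inputs)

d_fit = torch.zeros(n_modes, requires_grad=True)
opt = torch.optim.Adam([d_fit], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((response(d_fit, inputs) - targets) ** 2).mean()
    loss.backward()
    opt.step()
print(d_fit.detach())                          # approaches d_true
```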
Chair: Eckhard Elsen
Quantum computing promises unprecedented possibilities for important computing tasks such as quantum simulations in chemistry and materials science or optimization and machine learning. With this potential, quantum computing is increasingly attracting interest from industry and from scientific communities that use high-performance computing (HPC) for their applications. These pilot users are primarily interested in testing whether quantum computers available today or in the foreseeable future are suitable for simulating increasingly complex systems, analyzing large data sets using machine learning methods, or performing the hardest optimization tasks.
Access to quantum computer emulators running on HPC systems and to state-of-the-art quantum computing systems is the prerequisite for testing, benchmarking, algorithm and use-case design activities, and the first serious applications to scientific and engineering challenges.
For practical quantum computing, HPC infrastructures shall integrate quantum computers and simulators (QCS) in addition to offering cloud access to stand-alone QCS. As long-term experience in conventional supercomputing demonstrates, the successful integration of QCS into HPC systems requires a focus on all three fundamental components of the HPC ecosystem: users and their applications, software, and hardware.
The “Jülich UNified Infrastructure for Quantum computing (JUNIQ)”, a QC user facility at the Jülich Supercomputing Centre, meets these QCS access and integration needs. In addition, within JUNIQ, user support and training in HPC and QC usage are provided, software tools, modelling concepts and algorithms are developed, and it plays an important role in the development of prototype applications.
A broad user community will need to invest time and effort in developing new kinds of algorithms and software for real-world applications that take full advantage of QCS as accelerators that speed up existing classical algorithms and software. In addition, a full QCS software stack will have to be developed that takes into account the various kinds of QCS hardware implemented on a variety of qubit platforms.
CERN established its Quantum Technology Initiative in 2020 with the goal of assessing the impact of quantum technologies on High-Energy Physics and understanding how CERN could contribute to scientific and technological development. After three years, a programme is now in place that foresees the establishment of "Centres of Competence" in areas such as quantum networks, computing and sensing. This talk explains the motivations behind the programme, the current activities and collaborations, and the specific plans and challenges in the area of Quantum Computing for particle physics theory and experiments over the next 3 to 5 years.
Chair: TBA
Biological neuronal networks exhibit hallmark features such as oscillatory dynamics, heterogeneity, modularity, and conduction delays. To investigate which of these features support computations or are epiphenomena, we study recurrent networks of damped harmonic oscillators and endow them with such biological features. Analyses of network dynamics uncovered a novel, powerful computational principle and provided plausible a posteriori explanations for features of natural networks whose computational role has remained elusive for a long time.
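A minimal sketch of such a network — with my own parameterization and a single global conduction delay standing in for the full set of biological features — is given below.

```python
# Recurrent network of damped harmonic oscillators with heterogeneous natural
# frequencies and a conduction delay; all parameter values are illustrative.
import numpy as np

n, dt, steps, delay = 50, 1e-3, 3000, 20       # delay measured in time steps
rng = np.random.default_rng(0)
omega = rng.uniform(20, 60, n)                 # heterogeneous natural frequencies
gamma = 5.0                                    # damping coefficient
W = rng.standard_normal((n, n)) / np.sqrt(n)   # recurrent coupling

x = np.zeros((steps, n))                       # positions over time
v = np.zeros(n)                                # velocities
for t in range(1, steps):
    drive = W @ x[t - delay] if t >= delay else 0.0
    a = -omega**2 * x[t-1] - 2 * gamma * v + drive   # damped, driven oscillator
    v += dt * a
    x[t] = x[t-1] + dt * v
    if t == 100:
        x[t] += rng.standard_normal(n)         # transient input pulse

print(x[-1][:5])   # late-time activity reflects the delayed recurrent interactions
```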
Sensory systems need to achieve a delicate balance between external and internal influences in order to accurately represent relevant information. Dynamic adjustments of the sensory code to these influences have traditionally been categorized by their origin and studied separately. Sensory adaptation is a response of a neuron to exogenous changes in stimulus statistics, while internal modulation adjusts sensory representations to changes in the endogenous states of the brain, such as behavioral goals, attention or uncertainty. In this talk I will present a theoretical framework which provides a unifying perspective on how sensory codes adapt to such changes regardless of their origin. Starting from the same set of basic principles grounded in information theory and Bayesian inference, our framework generates candidate normative explanations of the diversity of adaptive responses in the early visual system, as well as of the attentional modulation of neural populations in the primary visual cortex. I will conclude by presenting an experimental finding of spatio-temporal patterns of neural activity which dominate sensory responses in a brain region long thought to be predominantly a sensory relay: the superficial superior colliculus. These findings emphasize the need for new theories to understand the computational principles of dynamic sensory processing.
Chair: Thomas Sokolowski