
An exciting day of talks and discussions: December 16, 2025

This event will bring together theoretical and mathematical neuroscientists, as well as students from across the UK, to discuss current challenges and innovations in modeling brain activity in both health and disease. A full list of talks and abstracts is included below. 



Schedule

09:00 – 09:05
Opening Remarks


09:05 – 09:40

Claudia Clopath (Imperial)
Semantic extraction via systems consolidation

 

09:40 – 10:15
Dan Goodman (Imperial)
TBA

 

10:15 – 10:50
Thomas Nowotny (Sussex)
Harnessing the adjoint method for gradient descent in spiking neural networks

 

10:50 – 11:15
Break & Posters

 

11:15 – 11:50
Li Su (Sheffield & Cambridge)
Digital twin models that talk the talk and walk the walk

 

11:50 – 12:25
Sean Froudist-Walsh (Bristol)
Cortically-Embedded RNNs: anatomically-constrained cortex-wide models of cognition

 

12:25 – 13:00
Vassilis Cutsuridis (Plymouth)
Super memory retrieval in the hippocampus

 

13:00 – 14:00
Lunch & Posters

 

14:00 – 14:35
Karl Friston (UCL)
Active inference and artificial curiosity

 

14:35 – 15:10
Rick Adams (UCL)

Excitation-inhibition balance in schizophrenia: what it means and how to measure it

 

15:10 – 15:45
Conor Houghton (Bristol)
TBA

 

15:45 – 16:10
Break & Posters

 

16:10 – 16:45
Peter Grindrod (Oxford)
The insights from neuroscience driving next generation neuromorphic AI and information processing

 

16:45 – 17:20
Enrico Amico (Birmingham)
Higher-order connectomics of human brain function

 

17:20 – 17:30
Poster Award & Closing Remarks

Titles & Abstracts

 

Rick Adams

Excitation-inhibition balance in schizophrenia: what it means and how to measure it


It has been hypothesised for several decades that a fundamental pathology in schizophrenia is an imbalance between excitation and inhibition in neural circuits. The details of this imbalance remain frustratingly obscure, however, not least because it is hard to measure in vivo in humans. I will present work using biophysical (dynamic causal) models to try to infer E-I function from EEG data in early and established schizophrenia, and in different subgroups. I will also present work on neural recordings in mice showing that established indirect measures of in vivo E-I balance are not accurate, and proposing an alternative metric.

 

Enrico Amico

Higher-order connectomics of human brain function
 

Traditional models of human brain activity often represent it as a network of pairwise interactions between brain regions. Going beyond this limitation, recent approaches have been proposed to infer higher-order interactions, involving three or more regions, from temporal brain signals. However, to this day it remains unclear whether methods based on inferred higher-order interactions outperform traditional pairwise ones for the analysis of fMRI data. In this talk, I will introduce a novel approach to the study of interacting dynamics in brain connectomics, based on higher-order interaction models. Our method builds on recent advances in simplicial complexes and topological data analysis, with the overarching goal of exploring macro-scale and time-dependent higher-order processes in human brain networks. I will present our preliminary findings along these lines, and discuss limitations and potential future directions for the exciting field of higher-order brain connectomics.
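To make the simplicial-complex vocabulary concrete, here is a toy illustration (my own sketch, not the inference method of the talk, which works on temporal signals): thresholding a correlation matrix and listing its triangles, i.e. the 2-simplices of the resulting clique complex, which encode three-way relationships that a plain edge list cannot.

```python
import numpy as np
from itertools import combinations

def clique_triangles(corr, threshold):
    """Return the 3-cliques (triangles) of the graph obtained by
    thresholding a correlation matrix -- the 2-simplices of its
    clique complex."""
    n = corr.shape[0]
    # Keep edges whose absolute correlation exceeds the threshold.
    adj = (np.abs(corr) > threshold) & ~np.eye(n, dtype=bool)
    return [
        (i, j, k)
        for i, j, k in combinations(range(n), 3)
        if adj[i, j] and adj[j, k] and adj[i, k]
    ]

# Hypothetical toy data: four regions, one tightly coupled triple (0, 1, 2).
corr = np.array([
    [1.0, 0.9, 0.8, 0.1],
    [0.9, 1.0, 0.7, 0.2],
    [0.8, 0.7, 1.0, 0.1],
    [0.1, 0.2, 0.1, 1.0],
])
print(clique_triangles(corr, threshold=0.5))  # [(0, 1, 2)]
```

Pairwise analyses stop at the edge list; the clique complex is one common route from edges to the higher-order objects that topological data analysis then operates on.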

 

Claudia Clopath

Semantic extraction via systems consolidation
 

The theory of Complementary Learning Systems suggests the existence of two memory systems (a fast learner and a slow learner) as a solution to the plasticity-stability dilemma. In this work, we investigate a potential mechanism of semantic extraction via systems consolidation using standard modelling approaches in Complementary Learning Systems.

 

Vassilis Cutsuridis

Super memory retrieval in the hippocampus
 

Theoretical studies of memory capacity in artificial neural networks have shown that the number of storable memories scales with the number of neurons and synapses in the network. As the memory capacity limit is reached, stored memories interfere and recall performance is reduced. A well-established neuromorphic microcircuit model was employed to systematically evaluate its recall performance as a function of stored patterns, interference, contextual information, network size, and engram cells when specific synaptic connections in the network were strengthened. The model consisted of multi-compartmental Hodgkin-Huxley-based excitatory (pyramidal) cells and two types of inhibitory neurons (bistratified cells and oriens lacunosum-moleculare (OLM) cells) firing at specific phases of a theta oscillation imposed by an external inhibitory signal targeting only the inhibitory cells in the network. Inhibitory cells inhibited specific compartments of the network’s excitatory cells. Two excitatory inputs (sensory and contextual) targeted dendritic compartments of cells in the network and caused them to fire. Simulation results showed that, of the six model variants tested, strengthening of excitatory synapses in proximal but not basal dendrites of bistratified cells inhibiting pyramidal cells (model 1) made recall perfect, whereas strengthening of inhibitory synapses in pyramidal cells (model 2) made recall worst. Decreasing the number of engram cells coding for a memory pattern improved recall in a pathway-dependent way. However, increases in network size had only a small effect on memory recall, as did increases in the number of stored patterns. Interference between stored patterns had a detrimental effect on recall, which was reversible as the number of engram cells decreased.

 

Karl Friston

Active inference and artificial curiosity
 

This talk offers a formal account of insight and learning in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how agents learn from a small number of ambiguous outcomes to form insight. The presentation uses simulations of abstract rule-learning and approximate Bayesian inference to show that minimizing (expected) free energy leads to active sampling of novel contingencies. This epistemic, curiosity-directed behaviour closes ‘explanatory gaps’ in knowledge about the causal structure of the world; thereby reducing ignorance, in addition to resolving uncertainty about states of the known world. We then move from inference to model selection (a.k.a. structure learning) to show how abductive processes emerge when agents test plausible hypotheses about symmetries in their generative models of the world. The ensuing Bayesian model reduction evokes mechanisms associated with sleep and has all the hallmarks of ‘aha moments’.
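The epistemic/pragmatic split behind "reducing ignorance, in addition to resolving uncertainty" can be written compactly. Below is a standard decomposition of the expected free energy of a policy, sketched in my own notation (not taken from the talk):

```latex
% Expected free energy of a policy \pi, averaged over predicted
% outcomes o and hidden states s under the generative model:
G(\pi) = \mathbb{E}_{q(o,s\mid\pi)}\!\left[\ln q(s\mid\pi) - \ln p(o,s)\right]
% Using the approximation p(o,s) \approx q(s\mid o,\pi)\, p(o), this
% decomposes into a (negative) epistemic term and a pragmatic term:
G(\pi) \approx
  -\underbrace{\mathbb{E}_{q(o\mid\pi)}\, D_{\mathrm{KL}}\!\left[q(s\mid o,\pi)\,\|\,q(s\mid\pi)\right]}_{\text{epistemic value (information gain)}}
  -\underbrace{\mathbb{E}_{q(o\mid\pi)}\!\left[\ln p(o)\right]}_{\text{extrinsic (pragmatic) value}}
```

Minimizing $G$ therefore favours policies whose outcomes are expected to be informative about hidden states (curiosity) as well as a priori preferred (goal-seeking).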

 

Sean Froudist-Walsh

Cortically-Embedded RNNs: anatomically-constrained cortex-wide models of cognition


Current state-of-the-art recurrent neural network models can capture complex neural dynamics during the performance of higher cognitive tasks. However, they largely overlook anatomy, limiting their ability to make species-specific and anatomically-precise predictions for experimentalists. Cortex-wide dynamical models increasingly integrate anatomical features including connectivity, dendritic spines and receptors, but are incapable of solving most cognitive tasks. Here, we introduce Cortically-Embedded Recurrent Neural Networks (CERNNs), which embed artificial neural networks into a species-specific cortical space, facilitating direct comparisons to empirical neuroscience data across the entire cortex and allowing the incorporation of biologically-inspired constraints. We trained CERNNs, with macaque or human anatomy, to perform multiple cognitive tasks (e.g. working memory, response inhibition). CERNNs were trained with different architectural constraints and biologically-inspired loss functions. We evaluated CERNNs on (1) task performance, (2) alignment of connectivity with the macaque mesoscopic connectome, and (3) task-evoked activity patterns. The best-performing models penalized both long-distance connections and deviations from empirical spine density. These results suggest that distributed cognitive networks may arise naturally as the brain attempts to solve complex tasks under wiring constraints with systematic gradients of single neuron properties. More broadly, CERNNs constitute a framework by which artificial neural networks can be integrated with cortex-wide neuroanatomy, physiology and imaging data to produce anatomically-specific testable hypotheses across species.
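The wiring-constraint idea above (penalizing long-distance connections) can be illustrated with a toy regularizer. This is a hypothetical sketch (function name, λ, and distances are mine, not from the CERNN work), showing only the distance term and omitting the spine-density penalty:

```python
import numpy as np

def wiring_cost_loss(task_loss, W, D, lam=1e-3):
    """Total training loss with a distance-weighted wiring penalty:
    long-range connections (large D[i, j]) are charged more per unit
    of connection strength |W[i, j]| than short-range ones."""
    return task_loss + lam * np.sum(np.abs(W) * D)

# Toy example: two equally strong weights at different distances.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
D_short = np.array([[0.0, 1.0],
                    [1.0, 0.0]])   # hypothetical 1 mm separation
D_long = 10 * D_short              # same weights, 10x the distance
print(wiring_cost_loss(0.5, W, D_short))  # 0.502
print(wiring_cost_loss(0.5, W, D_long))   # 0.52
```

Under a penalty of this shape, gradient descent trades task accuracy against wire length, which is one way distributed-but-local network motifs can emerge from training rather than being hand-designed.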
 


Peter Grindrod 

The insights from neuroscience driving next generation neuromorphic AI and information processing
 

As we learn more about the cognitive inner workings of the brain's information processing and decision-making behaviours, we are naturally led to consider alternative and additional ways to process information, from chips and architectures through to the generation of insights and decisions. In turn, this suggests options that might respond to challenges not addressable by the present state of the art. By their nature, though, such systems may develop some common features and idiosyncrasies associated with human cognition, such as individual expertise and blind spots, illusions, systematic errors, and transient mind-sets, as well as evolutionarily advantageous fast-thinking facilities applicable within novel and data-poor circumstances.
We will discuss some of the lessons learned from reverse engineering very large-scale neuron-to-neuron simulations (1B neurons) within cortex-like, network-of-networks architectures. We identify some elements of the dynamical behaviour of the inner sub-networks (neural columns) that are not exhibited by present-day neuromorphic chips, owing to conceptual and design limitations. We describe a novel mathematical framework that might encompass human cognitive processing alongside various future neuromorphic processing concepts.
We will also identify certain elements of human cognition, reasoning, and performance that present-day chips and present-day AI simply cannot fully emulate (match to a high standard, in some artificial way) or simulate (achieve in the same way). We will discuss how these aspects might catalyse new fields of development for both processors and AI methodologies. In short, we discuss how future research and development will respond in radical ways that are precluded by most present-day technologies.

 

Thomas Nowotny

Harnessing the adjoint method for gradient descent in spiking neural networks
 

The ability to calculate the gradient of a loss function with respect to the parameters of a spiking neural network (SNN) is useful both for using SNNs in machine learning and for fitting SNNs to data as models of the brain. However, calculating gradients in SNNs has long been beset with mathematical difficulties. Based on the adjoint method, Wunderlich and Pehle recently published the Eventprop algorithm for calculating gradients in SNNs with LIF neurons and exponential synapses, which overcomes most of these difficulties. In this talk, I will give an introduction to this exciting area and discuss how the method can be generalised to a larger class of SNNs. We have implemented this more general method in our GeNN and mlGeNN framework, and I will show how this effectively provides an "auto-diff" functionality for SNNs. Finally, I will discuss some remaining challenges around encoding and decoding of information, regularisation, and gradient flow. In this context, I will contrast the popular "discretise-then-optimise" approach with surrogate gradients in PyTorch or JAX (e.g. Norse, JAXLEY) with our "optimise-then-discretise" approach in (ml)GeNN.
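For readers unfamiliar with the "discretise-then-optimise" baseline the talk contrasts with Eventprop, here is a deliberately minimal surrogate-gradient sketch: one LIF-like neuron with the reset omitted for brevity, trained by backpropagation through time with a fast-sigmoid surrogate standing in for the zero-almost-everywhere Heaviside derivative. All names and constants are illustrative; this is not Eventprop or GeNN code.

```python
import numpy as np

def run(w, x, alpha=0.9, theta=1.0, beta=10.0, target=1.0):
    """Forward: discretised leaky integrator with a hard spike threshold.
    Backward: BPTT with a fast-sigmoid surrogate for d(spike)/d(voltage)."""
    T = len(x)
    v = np.zeros(T + 1)                  # membrane potential trace
    s = np.zeros(T)                      # spike trace
    for t in range(T):
        v[t + 1] = alpha * v[t] + w * x[t]
        s[t] = float(v[t + 1] > theta)   # non-differentiable Heaviside
    loss = (s.sum() - target) ** 2       # spike-count loss

    # Backward pass: the true ds/dv is zero almost everywhere, so we
    # substitute the surrogate 1 / (1 + beta*|v - theta|)^2.
    dL_ds = 2.0 * (s.sum() - target)               # same for every step
    surr = 1.0 / (1.0 + beta * np.abs(v[1:] - theta)) ** 2
    dL_dv = dL_ds * surr
    dv_dw = 0.0                                    # dv[t+1]/dw, accumulated
    grad = 0.0
    for t in range(T):
        dv_dw = alpha * dv_dw + x[t]               # through the recurrence
        grad += dL_dv[t] * dv_dw
    return loss, grad

x = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
loss, grad = run(w=0.5, x=x)   # two spikes vs. a target of one
```

The key point is that the surrogate yields a usable, nonzero gradient through the discretised dynamics; Eventprop's "optimise-then-discretise" route instead differentiates the continuous-time system via an adjoint ODE and so avoids the surrogate approximation altogether.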

 

Li Su

Digital twin models that talk the talk and walk the walk
 

Historically, theoretical neuroscience models have often either simulated human behaviour without grounding themselves in neurobiology, or matched the brain’s physiology without the ability to perform human-like tasks. In this talk, I will introduce our digital twin models, which are simultaneously matched to brain imaging and behavioural data. Such models will be vital for computational psychiatry, which aims to bridge the gap between molecular pathways in the brain and clinical symptomatology.

For any questions or suggestions, please contact the organisers:


Professor Stephen Coombes, stephen.coombes@nottingham.ac.uk

Dr. Dimitrios Pinotsis, pinotsis@city.ac.uk

