# Luca Ambrogioni

## Assistant Professor - Donders Institute for Brain, Cognition and Behaviour

I am an assistant professor in AI. My areas of expertise are probabilistic machine learning and theoretical neuroscience. In my work, I design probabilistic models of the human brain based on deep neural networks. I am also active in pure machine learning research, especially in variational inference and optimal transport.

Seeliger, K., Ambrogioni, L., Güçlütürk, Y., Bulk, L., Güçlü, U., & Gerven, M. (2021). End-to-end neural system identification with neural information flow. *PLOS Computational Biology, 17*(2), e1008558

Abstract taken from Google Scholar:

Neural information flow (NIF) provides a novel approach for system identification in neuroscience. It models the neural computations in multiple brain regions and can be trained end-to-end via stochastic gradient descent from noninvasive data. NIF models represent neural information processing via a network of coupled tensors, each encoding the representation of the sensory input contained in a brain region. The elements of these tensors can be interpreted as cortical columns whose activity encodes the presence of a specific feature in a spatiotemporal location. Each tensor is coupled to the measured data specific to a brain region via low-rank observation models that can be decomposed into the spatial, temporal and feature receptive fields of a localized neuronal population. Both these observation models and the convolutional weights defining the information processing within regions are learned end-to-end by predicting the neural signal during sensory stimulation. We trained a NIF model on the activity of early visual areas using a large-scale fMRI dataset recorded in a single participant. We show that we can recover plausible visual representations and population receptive fields that are consistent with empirical findings.
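The low-rank observation model described above can be sketched in a few lines of NumPy. This is only a schematic illustration, not the paper's code: all shapes, values, and variable names are hypothetical stand-ins for a region tensor and one measurement channel.

```python
import numpy as np

rng = np.random.default_rng(0)

T, X, Y, F = 50, 8, 8, 4                        # time, space, space, feature channels
activity = rng.standard_normal((T, X, Y, F))     # region tensor over time (illustrative)

# Low-rank observation model for a single measurement channel (hypothetical shapes)
spatial_rf = rng.random((X, Y))                  # spatial receptive field
feature_rf = rng.random(F)                       # feature receptive field
temporal_rf = np.exp(-np.arange(10) / 3.0)       # temporal response kernel

# Project the tensor through the spatial and feature receptive fields ...
projected = np.einsum('txyf,xy,f->t', activity, spatial_rf, feature_rf)
# ... then convolve with the temporal receptive field to get the predicted signal.
predicted = np.convolve(projected, temporal_rf)[:T]
```

The decomposition keeps the number of observation parameters linear in each dimension (spatial, temporal, feature) rather than multiplicative, which is what makes end-to-end training from noninvasive data tractable.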

Dijkstra, N., Ambrogioni, L., Vidaurre, D., & Gerven, M. (2020). Neural dynamics of perceptual inference and its reversal during imagery. *eLife, 9*

Abstract taken from Google Scholar:

After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory …

Ahmad, N., Gerven, M., & Ambrogioni, L. (2020). GAIT-prop: A biologically plausible learning rule derived from backpropagation of error. *Neural Information Processing Systems 2020*

Abstract taken from Google Scholar:

Traditional backpropagation of error, though a highly successful algorithm for learning in artificial neural network models, includes features which are biologically implausible for learning in real neural circuits. An alternative called target propagation proposes to solve this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into layer-wise and plausible ‘targets’ for every unit. These targets can then be used to produce weight updates for network training. However, thus far, target propagation has been heuristically proposed without demonstrable equivalence to backpropagation. Here, we derive an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop) where the target is a small perturbation of the forward pass. Specifically, backpropagation and GAIT-prop give identical updates when synaptic weight matrices are orthogonal. In a series of simple computer vision experiments, we show near-identical performance between backpropagation and GAIT-prop with a soft orthogonality-inducing regularizer.
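The orthogonality condition can be verified numerically in a toy setting. The sketch below (my own illustration, not the paper's experiments) uses a single linear layer: backpropagation sends the output error through the transposed weights, while inverting the forward model, as target propagation does, sends it through the inverse. For an orthogonal weight matrix, the two coincide.

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthogonal weight matrix via QR decomposition
W, _ = np.linalg.qr(rng.standard_normal((5, 5)))

g = rng.standard_normal(5)   # error signal at the layer's output

# Backpropagation sends the error through the transpose ...
delta_backprop = W.T @ g
# ... while target propagation inverts the forward model.
delta_targetprop = np.linalg.inv(W) @ g

# For orthogonal W, inverse == transpose, so the two updates coincide.
print(np.allclose(delta_backprop, delta_targetprop))  # True
```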

Ambrogioni, L., & Maris, E. (2019). Complex-valued Gaussian process regression for time series analysis. *Signal Processing, 160*, 215-228

Abstract taken from Google Scholar:

The construction of synthetic complex-valued signals from real-valued observations is an important part of many time series analysis techniques. The most widely used approach is based on the Hilbert transform, which maps the real-valued signal into its quadrature component. In this paper, we define a probabilistic generalization of this approach. We model the observable real-valued signal as the real part of a latent complex-valued Gaussian process. In order to obtain the appropriate statistical relationship between its real and imaginary parts, we define two new classes of complex-valued covariance functions. Through an analysis of stochastic oscillations, we show that the resulting Gaussian process complex-valued signal provides a better estimate of the instantaneous amplitude and frequency than the established approaches. Furthermore, the complex-valued Gaussian process regression allows to …
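The classical Hilbert-transform construction that the paper generalizes is easy to demonstrate. The sketch below (illustrative parameters, using SciPy's `hilbert`, which returns the analytic signal directly) recovers the instantaneous amplitude and frequency of a pure oscillation.

```python
import numpy as np
from scipy.signal import hilbert

fs = 500.0                                   # sampling rate in Hz (illustrative)
t = np.arange(0, 2, 1 / fs)
signal = np.cos(2 * np.pi * 10 * t)          # 10 Hz oscillation

# The Hilbert transform supplies the quadrature component; together with the
# real signal it forms the complex-valued analytic signal.
analytic = hilbert(signal)
amplitude = np.abs(analytic)                       # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)      # instantaneous frequency

# For this noiseless oscillation, amplitude ≈ 1 and frequency ≈ 10 Hz.
```

The probabilistic approach in the paper replaces this deterministic mapping with a latent complex-valued Gaussian process, so the quadrature component comes with a posterior distribution instead of a point estimate.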

Seeliger, K., Güçlü, U., Ambrogioni, L., Güçlütürk, Y., & Gerven, M. (2018). Generative adversarial networks for reconstructing natural images from brain activity. *NeuroImage, 181*, 775-785

Abstract taken from Google Scholar:

We explore a method for reconstructing visual stimuli from brain activity. Using large databases of natural images we trained a deep convolutional generative adversarial network capable of generating gray scale photos, similar to stimuli presented during two functional magnetic resonance imaging experiments. Using a linear model we learned to predict the generative model's latent space from measured brain activity. The objective was to create an image similar to the presented stimulus image through the previously trained generator. Using this approach we were able to reconstruct structural and some semantic features of a proportion of the natural images sets. A behavioural test showed that subjects were capable of identifying a reconstruction of the original stimulus in 67.2% and 66.4% of the cases in a pairwise comparison for the two natural image datasets respectively. Our approach does not require end-to …
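The linear decoding step, predicting the generator's latent space from brain activity, can be sketched with synthetic data. Everything below is a hypothetical stand-in (random "voxels" and latent codes; a closed-form ridge regression), meant only to show the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(5)

n_trials, n_voxels, n_latent = 300, 100, 16            # illustrative sizes

# Synthetic stand-ins: latent codes of presented stimuli and noisy "brain" responses
Z = rng.standard_normal((n_trials, n_latent))
A = rng.standard_normal((n_latent, n_voxels))           # unknown voxel encoding
X = Z @ A + 0.1 * rng.standard_normal((n_trials, n_voxels))

# Linear (ridge) model predicting latents from activity, in closed form
lam = 1.0
B = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Z)
Z_hat = X @ B   # predicted latents; feeding them to the trained generator
                # yields the reconstructed image
```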

Ambrogioni, L., Güçlü, U., Güçlütürk, Y., Hinne, M., Maris, E., & Gerven, M. (2018). Wasserstein variational inference. *Neural Information Processing Systems 2018*

Abstract taken from Google Scholar:

This paper introduces Wasserstein variational inference, a new form of approximate Bayesian inference based on optimal transport theory. Wasserstein variational inference uses a new family of divergences that includes both f-divergences and the Wasserstein distance as special cases. The gradients of the Wasserstein variational loss are obtained by backpropagating through the Sinkhorn iterations. This technique results in a very stable likelihood-free training method that can be used with implicit distributions and probabilistic programs. Using the Wasserstein variational inference framework, we introduce several new forms of autoencoders and test their robustness and performance against existing variational autoencoding techniques.
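The Sinkhorn iterations at the heart of this training method are short enough to write out. The sketch below is a generic entropic optimal transport solver between two small discrete distributions (illustrative cost matrix and regularization strength), not the paper's variational objective.

```python
import numpy as np

# Two discrete distributions and a cost matrix (illustrative data)
a = np.full(4, 0.25)
b = np.full(4, 0.25)
C = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))

eps = 0.1                      # entropic regularization strength
K = np.exp(-C / eps)           # Gibbs kernel

# Sinkhorn iterations: alternately rescale rows and columns to match marginals.
u = np.ones(4)
for _ in range(200):
    v = b / (K.T @ u)
    u = a / (K @ v)

plan = u[:, None] * K * v[None, :]   # regularized transport plan
cost = np.sum(plan * C)              # entropic Wasserstein cost
```

Because every step is a differentiable matrix operation, gradients of the resulting cost can be obtained by backpropagating through the loop, which is what makes the likelihood-free training in the paper possible.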

Tomassini, A., Ambrogioni, L., Medendorp, W., & Maris, E. (2017). Theta oscillations locked to intended actions rhythmically modulate perception. *eLife, 6*, e25618

Abstract taken from Google Scholar:

Ongoing brain oscillations are known to influence perception, and to be reset by exogenous stimulations. Voluntary action is also accompanied by prominent rhythmic activity, and recent behavioral evidence suggests that this might be coupled with perception. Here, we reveal the neurophysiological underpinnings of this sensorimotor coupling in humans. We link the trial-by-trial dynamics of EEG oscillatory activity during movement preparation to the corresponding dynamics in perception, for two unrelated visual and motor tasks. The phase of theta oscillations (~4 Hz) predicts perceptual performance, even >1 s before movement. Moreover, theta oscillations are phase-locked to the onset of the movement. Remarkably, the alignment of theta phase and its perceptual relevance unfold with similar non-monotonic profiles, suggesting their relatedness. The present work shows that perception and movement initiation are automatically synchronized since the early stages of motor planning through neuronal oscillatory activity in the theta range. DOI: http://dx.doi.org/10.7554/eLife.25618.001

Ambrogioni, L., Güçlü, U., Gerven, M., & Maris, E. (2017). The kernel mixture network: A nonparametric method for conditional density estimation of continuous random variables. *arXiv preprint arXiv:1705.07111*

Abstract taken from Google Scholar:

This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen as a kernel mixture network with square and non-overlapping kernels. We test the performance of our method on two important applications, namely Bayesian filtering and generative modeling. In the Bayesian filtering example, we show that the method can be used to filter complex nonlinear and non-Gaussian signals defined on manifolds. The resulting kernel mixture network filter outperforms both the quantized softmax filter and the extended Kalman filter in terms of model likelihood. Finally, our experiments on generative models show that, given the same architecture, the kernel mixture network leads to higher test set likelihood, less overfitting and more diversified and realistic generated samples than the quantized softmax approach.
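The core density model is a linear combination of fixed kernels with input-dependent weights. The sketch below (my own illustration; the logits here are random numbers standing in for the output of a trained network at one input) evaluates such a mixture and confirms it is a valid density.

```python
import numpy as np

rng = np.random.default_rng(3)

centers = rng.uniform(-3, 3, size=20)   # kernel centers at (a subset of) training targets
sigma = 0.5                             # kernel bandwidth (illustrative)

# In the full model these logits are produced by a deep network given the input x;
# here they are fixed numbers standing in for one such input.
logits = rng.standard_normal(20)
weights = np.exp(logits) / np.exp(logits).sum()   # softmax -> mixture weights

def gaussian_kernel(y, c, s):
    return np.exp(-0.5 * ((y - c) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def conditional_density(y):
    # Linear combination of kernels centered at training points
    return np.sum(weights * gaussian_kernel(y, centers, sigma))

# The mixture is a valid density: it integrates to one.
grid = np.linspace(-10, 10, 4001)
step = grid[1] - grid[0]
integral = np.sum([conditional_density(y) for y in grid]) * step
```

Replacing the smooth Gaussian kernels above with square, non-overlapping kernels recovers the quantized softmax model as a special case, which is the comparison the paper draws.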

Ambrogioni, L., Hinne, M., Gerven, M., & Maris, E. (2017). GP CaKe: Effective brain connectivity with causal kernels. *Neural Information Processing Systems 2017*

Abstract taken from Google Scholar:

A fundamental goal in network neuroscience is to understand how activity in one brain region drives activity elsewhere, a process referred to as effective connectivity. Here we propose to model this causal interaction using integro-differential equations and causal kernels that allow for a rich analysis of effective connectivity. The approach combines the tractability and flexibility of autoregressive modeling with the biophysical interpretability of dynamic causal modeling. The causal kernels are learned nonparametrically using Gaussian process regression, yielding an efficient framework for causal inference. We construct a novel class of causal covariance functions that enforce the desired properties of the causal kernels, an approach which we call GP CaKe. By construction, the model and its hyperparameters have biophysical meaning and are therefore easily interpretable. We demonstrate the efficacy of GP CaKe on a number of simulations and give an example of a realistic application on magnetoencephalography (MEG) data.
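The kind of integro-differential coupling the model learns can be sketched in simulation. Everything below is illustrative (a hand-picked causal kernel, a leaky-integrator target region, Euler integration), not the paper's generative model: one region's activity, convolved with a kernel that is zero for negative lags, drives a second region.

```python
import numpy as np

rng = np.random.default_rng(6)

dt = 0.01
t = np.arange(0, 5, dt)

# Causal kernel: zero for negative lags, a smooth bump for positive lags (illustrative)
lags = np.arange(0, 0.5, dt)
kernel = lags * np.exp(-lags / 0.05)

x1 = rng.standard_normal(t.size)                 # activity in the source region

# The integro-differential coupling term: x1 convolved with the causal kernel
drive = np.convolve(x1, kernel)[:t.size] * dt

# Leaky integration of the driven target region (Euler scheme)
x2 = np.zeros(t.size)
for i in range(1, t.size):
    x2[i] = x2[i - 1] + dt * (-x2[i - 1] + drive[i - 1])
```

In GP CaKe the kernel is not hand-picked as here but inferred nonparametrically with Gaussian process regression, with covariance functions engineered to enforce causality and smoothness.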

Hinne, M., Ambrogioni, L., Janssen, R., Heskes, T., & Gerven, M. (2014). Structurally-informed Bayesian functional connectivity analysis. *NeuroImage, 86*, 294-305

Abstract taken from Google Scholar:

Functional connectivity refers to covarying activity between spatially segregated brain regions and can be studied by measuring correlation between functional magnetic resonance imaging (fMRI) time series. These correlations can be caused either by direct communication via active axonal pathways or indirectly via the interaction with other regions. It is not possible to discriminate between these two kinds of functional interaction simply by considering the covariance matrix. However, the non-diagonal elements of its inverse, the precision matrix, can be naturally related to direct communication between brain areas and interpreted in terms of partial correlations. In this paper, we propose a Bayesian model for functional connectivity analysis which allows estimation of a posterior density over precision matrices, and, consequently, allows one to quantify the uncertainty about estimated partial correlations. In order to …
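The distinction between marginal and partial correlations is easy to demonstrate on synthetic data. In the sketch below (illustrative chain of three signals, not fMRI data), x influences z only through y: the marginal correlation between x and z is large, but the partial correlation read off the precision matrix is near zero.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Chain of dependencies: x drives y, y drives z (no direct x -> z path)
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)
z = y + 0.5 * rng.standard_normal(n)

data = np.stack([x, y, z])
cov = np.cov(data)
prec = np.linalg.inv(cov)                        # precision matrix

# Partial correlation between variables i and j given all others
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

marginal_xz = np.corrcoef(x, z)[0, 1]            # large: indirect coupling via y
partial_xz = partial_corr[0, 2]                  # near zero: no direct interaction
```

The Bayesian model in the paper goes one step further: rather than a single inverted sample covariance, it yields a posterior density over precision matrices, so each partial correlation comes with an uncertainty estimate.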