Research

On this page, I summarize some of the research topics that I’ve investigated during my career. You can also check the publications page for a full list of my published papers. Please feel free to contact me if you would like to discuss any of these topics further.

Simulation-based inference

Inferring the laws and parameters that drive physical systems has been a long-standing problem across the natural sciences. Whereas the methodology for building models from first principles may vary from one field to another, estimating the parameters that best fit a given observation is an inverse problem for which general methods can be used. A common approach in practice is to perform parameter sweeps over a simulator until an output is found that is sufficiently similar to the given observation.
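As a minimal sketch of this brute-force strategy (the quadratic simulator, the grid, and the observed value below are toy assumptions for illustration, not taken from any of my papers):

```python
import numpy as np

def simulator(theta, rng):
    # Toy simulator (an illustrative assumption): noisy quadratic response.
    return theta**2 + 0.1 * rng.standard_normal()

rng = np.random.default_rng(0)
x_o = 4.0  # the given observation

# Parameter sweep: simulate on a grid and keep the parameter whose
# output is closest to the observation.
grid = np.linspace(0.0, 5.0, 501)
outputs = np.array([simulator(t, rng) for t in grid])
best = grid[np.argmin(np.abs(outputs - x_o))]
print(f"best-fitting parameter: {best:.3f}")
```

Note that sweeping a grid symmetric around zero would find two equally good fits ($\theta \approx \pm 2$) for this toy simulator, which is exactly the kind of indeterminacy discussed next.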

Although easily implemented and rather intuitive, such procedures are time-consuming and are usually incapable of characterizing the sets of parameters for which the simulator generates the same output. Alternatively, one can use Bayesian inference, a powerful framework that yields a posterior probability density function describing how likely each set of parameters is to have generated a given observation. This posterior distribution can then be used in various tasks, such as assessing the variance of the parameter estimates or characterizing the indeterminacies of the model.
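As a worked example of how a posterior captures estimation uncertainty, consider a conjugate Gaussian model, chosen for illustration because its posterior is available in closed form:

```python
import numpy as np

# Conjugate Gaussian model (illustrative assumption):
# x_i ~ N(theta, sigma^2) with known sigma, prior theta ~ N(mu_0, tau_0^2).
sigma, mu_0, tau_0 = 1.0, 0.0, 2.0
x = np.array([3.8, 4.1, 3.9, 4.2])  # hypothetical observations

n = len(x)
# Closed-form posterior N(mu_n, tau_n^2): precisions add, means are
# precision-weighted.
tau_n2 = 1.0 / (1.0 / tau_0**2 + n / sigma**2)
mu_n = tau_n2 * (mu_0 / tau_0**2 + x.sum() / sigma**2)
print(f"posterior mean {mu_n:.3f}, posterior std {np.sqrt(tau_n2):.3f}")
```

The posterior standard deviation shrinks as more observations arrive, directly quantifying the variance of the parameter estimate.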

Unfortunately, Bayesian inference on modern complex simulator models is in general a difficult task, because the likelihood function of the model’s output is often intractable or impossible to obtain. A modern approach for bypassing this obstacle is to use simulation-based inference (SBI) methods, which draw on many simulations of the model with different parameters to learn an approximation of the posterior distribution from examples. The first works on SBI are also known as approximate Bayesian computation (ABC) and have been applied to invert models from ecology, population genetics, and epidemiology. Recently, there has been growing interest in the machine learning community in overcoming the limitations of ABC, which include the large number of simulations required to approximate the posterior distribution, the need to pre-define summary statistics describing the observed data, and the need to define a distance function for comparing the results of two simulations. The figure above illustrates how the posterior distribution estimated via SBI might be used to infer the parameters generating a given observation $x_o$.
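A minimal sketch of rejection ABC makes all three of these ingredients explicit; the simulator, prior, summary statistic, and tolerance below are toy choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    # Toy simulator (illustrative assumption): 50 Gaussian draws centred on theta.
    return rng.normal(theta, 1.0, size=50)

def summary(x):
    return x.mean()          # a pre-defined summary statistic

x_o = simulator(2.0)         # the given observation
eps = 0.05                   # tolerance for the distance |s(x) - s(x_o)|

# Rejection ABC: draw parameters from the prior, simulate, and keep only
# those whose simulated summary lands within eps of the observed summary.
prior_draws = rng.uniform(-5.0, 5.0, size=100_000)
accepted = [t for t in prior_draws
            if abs(summary(simulator(t)) - summary(x_o)) < eps]
print(f"{len(accepted)} accepted; posterior mean = {np.mean(accepted):.3f}")
```

Only a small fraction of the 100,000 simulations is accepted here, which illustrates why the simulation cost of ABC becomes prohibitive for expensive simulators.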

Some of the topics related to SBI that I’ve been working on recently are:

- Validating posterior approximations in the SBI setting (see the sketch after this list).
- The effects of model misspecification in SBI and how to counter them.
- Using the score-diffusion framework in SBI.
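One standard diagnostic for the first topic is simulation-based calibration (SBC), which checks whether the rank of the true parameter among posterior draws is uniformly distributed. The sketch below (an illustration, not code from my papers) runs SBC on a conjugate Gaussian toy model whose exact posterior is known, so the rank histogram should come out flat:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n_obs, n_post = 1.0, 10, 99

ranks = []
for _ in range(1000):
    theta = rng.normal(0.0, 1.0)              # draw from the prior N(0, 1)
    x = rng.normal(theta, sigma, size=n_obs)  # simulate data
    # Exact posterior for this conjugate model (standing in for the
    # approximate posterior one would obtain from an SBI method).
    tau2 = 1.0 / (1.0 + n_obs / sigma**2)
    mu = tau2 * x.sum() / sigma**2
    post = rng.normal(mu, np.sqrt(tau2), size=n_post)
    ranks.append((post < theta).sum())        # rank of theta among the draws

# A well-calibrated posterior yields ranks uniform on {0, ..., n_post}.
hist, _ = np.histogram(ranks, bins=10, range=(0, n_post + 1))
print(hist)
```

Systematic deviations from uniformity (e.g., a U-shaped or peaked histogram) reveal over- or under-confident posterior approximations.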

Exploring invariances of multivariate time series

A multivariate time series represents measurements obtained from a set of sensors as a collection of vectors indexed by time. A common way of analysing such data is to estimate a set of parameters that describes its statistical behavior, such as its mean vector, its auto-covariance matrices, or its cross-spectral density matrices. Two multivariate time series may then be compared by defining a distance between the sets of parameters describing their statistics. A principled way of doing so is to study the intrinsic geometry of the space where the parameters are defined and use the geodesic distance between them as a measure of similarity. Such an approach is based on concepts borrowed from Riemannian geometry (RG) and allows us to manipulate multivariate time series as points in a metric space. A convenient outcome of this approach is that it enables the development of new algorithms inspired by intuitive geometric arguments, as well as a new understanding of classical algorithms that were first developed in a purely analytical form and can be reinterpreted under the RG framework.
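As a concrete sketch, assuming the covariance matrices live on the manifold of symmetric positive-definite (SPD) matrices equipped with the affine-invariant metric (a standard choice in this literature), the geodesic distance can be computed from generalized eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh

def airm_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    # d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F, computed here via the
    # generalized eigenvalues of the pencil (B, A).
    w = eigh(B, A, eigvals_only=True)
    return np.sqrt(np.sum(np.log(w) ** 2))

# Two toy 2x2 covariance matrices (illustrative values).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
print(airm_distance(A, B))
```

This distance is invariant under joint congruence transformations $C \mapsto W C W^\top$, which is precisely the kind of geometric property exploited in the work described below.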

I have used this geometric framework to explore invariances in multivariate time series. An invariance is “a property that remains unchanged regardless of changes in the conditions of measurements”. This is a very powerful property of a system: it reflects a notion of stability that is intrinsic to the dynamical system under study and allows for a profound interpretation of its behavior. Invariances are at the core of many scientific fields, such as classical mechanics, where the laws of motion are the same in all inertial frames (also known as Galilean invariance), and electromagnetism, where invariances and symmetries are commonly used to determine expressions describing electric and magnetic fields. The figure above illustrates these concepts with an example in astronomy.

In the context of multivariate time series, invariances may relate to different aspects of the phenomena they represent. For instance, the statistical distributions of samples gathered from different experimental sessions are usually different, hindering their joint analysis with classical statistical methods. However, if the experiments portray the same phenomenon, it is reasonable to assume that the samples from each session share invariant features. A concrete example is EEG-based brain-computer interfaces (BCI), where the data from two subjects carrying out the same cognitive tasks, i.e., the same BCI paradigm, may have very different statistical distributions, even though latent information is clearly shared. Based on these observations, I have proposed an original algorithm that uses the RG framework to adapt the statistics of mismatched datasets and make their joint analysis possible. This method extends the classical Procrustes analysis from statistical shape analysis, which applies simple geometric transformations (translation, stretching, and rotation) to the data points of two datasets in order to match their statistical distributions. In our method, these transformations are carried out on points defined on a Riemannian manifold, hence its name: Riemannian Procrustes analysis (RPA).
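The full RPA pipeline matches the means, dispersions, and rotations of two datasets of SPD matrices; the sketch below shows only the recentring step under the affine-invariant metric, using a simple fixed-point iteration for the geometric mean (an illustrative implementation; actual packages such as pyRiemann may do this differently):

```python
import numpy as np
from scipy.linalg import expm, fractional_matrix_power, logm

def karcher_mean(covs, n_iter=20):
    # Fixed-point iteration for the geometric (Karcher) mean of a set of
    # SPD matrices under the affine-invariant metric.
    M = np.mean(covs, axis=0)  # initialise at the arithmetic mean
    for _ in range(n_iter):
        M_sqrt = fractional_matrix_power(M, 0.5)
        M_isqrt = fractional_matrix_power(M, -0.5)
        T = np.mean([logm(M_isqrt @ C @ M_isqrt) for C in covs], axis=0)
        M = M_sqrt @ expm(T) @ M_sqrt
    return M

def recenter(covs):
    # Recentring step: whiten every covariance by the geometric mean of
    # the set, so that the transformed set is centred at the identity.
    M_isqrt = fractional_matrix_power(karcher_mean(covs), -0.5)
    return np.array([M_isqrt @ C @ M_isqrt for C in covs])

# Example: after recentring, the geometric mean of the set is (close to)
# the identity matrix.
covs = np.array([np.diag([1.0 + i, 2.0 + i]) for i in range(4)])
print(karcher_mean(recenter(covs)).round(3))
```

Applying this recentring to the datasets of two subjects moves both to a common reference point on the manifold, which is the starting point for the stretching and rotation steps of RPA.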