Talks

Mathieu Lewin – On optimal transport for classical and quantum particle systems – Long talk

In this talk I will recall how optimal transport arises in chemistry and physics when studying systems of interacting particles. I will then mention some recent results and open questions.

Hugo Malamut – Well-posedness and convergence of entropic approximation of semi-geostrophic equations – Short talk

The semi-geostrophic (SG) equations notably allow for modeling the evolution of wind fronts over large time and space scales. Optimal transport provides a simple and concise interpretation of these equations, and opens the way to using the Sinkhorn algorithm for their numerical resolution. This method corresponds to a PDE that approximates the SG system. I will present a well-posedness result for this PDE and discuss the convergence of the scheme when both the regularization parameter and the discretization step vanish.
Based on joint works with J.-D. Benamou, C. Cotter (2023) and G. Carlier (2024).
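
For readers less familiar with the entropic approach, here is a minimal, generic Sinkhorn iteration for discrete entropic optimal transport. It is only an illustrative sketch (not the SG scheme discussed in the talk); the toy marginals, cost matrix and regularization parameter are arbitrary assumptions.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=1000):
    """Generic Sinkhorn iterations for discrete entropic optimal transport.

    mu, nu : source/target marginals (1D arrays summing to 1)
    C      : cost matrix of shape (len(mu), len(nu))
    eps    : entropic regularization parameter
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)              # scale rows to match mu
        v = nu / (K.T @ u)            # scale columns to match nu
    return u[:, None] * K * v[None, :]    # entropic transport plan

# toy usage: two discrete measures on a 1D grid
x = np.linspace(0, 1, 50)
mu = np.ones(50) / 50
nu = np.exp(-(x - 0.7) ** 2 / 0.01); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(mu, nu, C, eps=1e-2)
```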

Jean-Baptiste Courbot – Off-the-grid, in the clouds, and beyond – Short talk

In this talk we will cover some developments of the off-the-grid methodology for analyzing images based on sparse priors. Starting with an application to cloud tracking in remote sensing images, we will deepen the algorithmic methodology by introducing a homotopy framework, with applications in microscopy. Then, we will see how this framework can be tuned to the specificities of time-frequency analysis.

Gabriel Peyré – A Survey of Wasserstein Flow in Neural Network Training Analysis – Long talk

In this talk, I will first introduce the concept of Wasserstein gradient flow, an optimization process over the space of measures. This approach provides a unified framework for describing the gradient descent method applied to particle positions and can handle an arbitrary, possibly infinite, number of particles. Additionally, it enables the modeling of diffusion phenomena, which are not easily described by particle systems, and can be beneficial for sampling problems. A significant recent application of this method is in studying the convergence of gradient training in shallow neural networks, where particles represent neuron weights. I will conclude by discussing its application in deep learning, particularly in training ResNet architectures, where optimal transport is applied independently to each residual connection. This final part is based on joint work with Raphaël Barboni and François-Xavier Vialard.
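
As a rough illustration of the particle viewpoint (not the speaker's material), the sketch below runs plain gradient descent on the weights of a one-hidden-layer network; this can be read as a discretized Wasserstein gradient flow on the empirical measure of the neurons. The toy regression data, width and step size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1D regression data
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])

m = 100                               # number of neurons = "particles"
W = rng.normal(size=(m, 1))           # input weights of each neuron
a = rng.normal(size=m) / m            # output weights (mean-field scaling)

def relu(z):
    return np.maximum(z, 0.0)

lr = 0.2
for _ in range(2000):
    pre = X @ W.T                     # pre-activations, shape (n, m)
    pred = relu(pre) @ a              # network output
    err = pred - y                    # residuals
    # gradient of the mean squared loss with respect to each particle (a_j, w_j)
    grad_a = relu(pre).T @ err / len(X)
    grad_W = ((err[:, None] * (pre > 0) * a[None, :]).T @ X) / len(X)
    a -= lr * grad_a
    W -= lr * grad_W
```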

Adrien Vacher – Convergence and lower bounds for Geometric Tempering – Short talk

In this talk, we establish convergence results for the geometric tempering scheme under standard functional inequalities. Even though our upper bounds can slightly improve over Langevin's in the strongly log-concave case, we establish lower bounds showing that, just like Langevin, geometric tempering still fails to converge for multi-modal distributions.
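
To fix notation, here is a minimal unadjusted Langevin sampler following a geometric tempering path between a Gaussian reference and a bimodal target. This is only an illustrative sketch under arbitrary assumptions (the schedule, step size and finite-difference gradients are placeholders), not the precise setting of the results above.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi0(x):                        # easy reference: standard Gaussian
    return -0.5 * x**2

def log_pi(x):                         # bimodal target (illustrative)
    return np.logaddexp(-0.5 * (x - 3)**2, -0.5 * (x + 3)**2)

def grad(f, x, h=1e-4):                # crude finite-difference gradient
    return (f(x + h) - f(x - h)) / (2 * h)

n, steps, dt = 1000, 2000, 1e-2
x = rng.normal(size=n)                 # particles initialized from pi0
for k in range(steps):
    beta = min(1.0, k / (0.8 * steps))     # tempering schedule beta_k: 0 -> 1
    # unadjusted Langevin step on the geometric interpolation pi0^(1-beta) * pi^beta
    g = (1 - beta) * grad(log_pi0, x) + beta * grad(log_pi, x)
    x = x + dt * g + np.sqrt(2 * dt) * rng.normal(size=n)
```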

Flavien Leger – Spaces of measures with nonnegative cross-curvature – Short talk

The MTW condition was introduced by Ma, Trudinger and Wang in their study of the regularity of the optimal transport problem with a general cost function. Nonnegative cross-curvature (NNCC) is a closely related condition later studied by Kim and McCann.
In this talk I will present a synthetic formulation of NNCC that is applicable to infinite-dimensional spaces of measures and non-differentiable costs. In that framework the Wasserstein space has NNCC, and more generally so do transportation costs induced by a ground cost satisfying NNCC. Other interesting examples include unbalanced optimal transport, as well as the Gromov–Wasserstein, Bures–Wasserstein, Hellinger and Fisher–Rao squared distances.
I will then discuss applications to optimization problems on spaces of measures. On NNCC spaces it is possible to formulate tractable conditions to prove evolution variational inequalities (EVIs) for proximal schemes involving a movement limiter that is not a squared distance.

Irène Waldspurger – Second-order optimization for ill-conditioned low-rank problems – Short talk

When reconstructing a low-rank matrix from linear measurements, it is classical to write the matrix as the product of two “thin” matrices and optimize directly over the factors. This approach, called the Burer-Monteiro factorization, saves computational time and storage space, compared to optimizing over the full matrix. When the size of the factors is strictly larger than the rank of the underlying unknown matrix (the “overparametrized setting”), the factorization introduces ill-conditioning in the problem, making straightforward first-order methods slow. To overcome this issue, preconditioned gradient methods have been proposed. In this talk, we will discuss the performance of a second-order method, namely trust-regions.
This is joint work with Paul Caucheteux, Florentin Goyens and Clément Royer.
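
For context, here is a minimal sketch of the Burer-Monteiro factorization with plain (unpreconditioned) gradient descent on a toy matrix-sensing instance. The dimensions, measurement model, initialization and step size are arbitrary assumptions, and the preconditioned and trust-region methods discussed in the talk are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_true, r_fac, m = 30, 2, 5, 600        # r_fac > r_true: overparametrized setting

# ground-truth low-rank PSD matrix and random symmetric linear measurements
U_star = rng.normal(size=(n, r_true)) / np.sqrt(n)
X_star = U_star @ U_star.T
A = rng.normal(size=(m, n, n)); A = (A + A.transpose(0, 2, 1)) / 2
b = np.einsum('kij,ij->k', A, X_star)

# Burer-Monteiro: optimize over a thin factor U, with X = U U^T
U = 0.01 * rng.normal(size=(n, r_fac))
lr = 0.1
for _ in range(3000):                      # plain gradient descent; slow when ill-conditioned
    resid = np.einsum('kij,ij->k', A, U @ U.T) - b
    grad = 2 * np.einsum('k,kij->ij', resid, A) @ U / m
    U -= lr * grad

print(np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```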

Romain Petit – Lifting of inverse problems for partial differential equations – Short talk

In this talk, I will present work in progress in collaboration with Giovanni S. Alberti and Simone Sanna (Università di Genova). It concerns inverse problems that consist in recovering one of the coefficients of a partial differential equation from information on its solution. The main obstacle to their resolution is their non-linearity: the map relating the unknown to the observations is strongly non-linear. I will present an approach that could allow these inverse problems to be solved via the resolution of a convex optimization problem. It is strongly inspired by lifting approaches used to solve quadratic inverse problems, such as phase retrieval problems.
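
As background, the lifting idea is classical for phase retrieval; below is a minimal PhaseLift-style sketch using cvxpy as a generic convex solver. It only illustrates the general principle (replace the unknown vector by a rank-one matrix and relax), not the work in progress described above, and the toy data and dimensions are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 10, 60                        # signal dimension and number of measurements

x_true = rng.normal(size=n)
a = rng.normal(size=(m, n))          # measurement vectors
b = (a @ x_true) ** 2                # quadratic (phaseless) measurements

# Lifting: replace the unknown x by X = x x^T, so that b_i = <a_i a_i^T, X>,
# and relax the rank-one constraint to X >= 0 plus a trace penalty.
X = cp.Variable((n, n), PSD=True)
constraints = [cp.sum(cp.multiply(np.outer(a[i], a[i]), X)) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
prob.solve()

# recover x (up to a global sign) from the leading eigenvector of the solution
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(w[-1]) * V[:, -1]
```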

Claire Boyer – A primer on physics-informed machine learning – Short talk

TBA

François-Xavier Vialard – On the global convergence of the Wasserstein gradient flow of Coulomb discrepancies – Long talk

In this talk, we study the gradient flow with respect to the Wasserstein metric of the Maximum Mean Discrepancy associated with the Coulomb kernel. In this context, we present several sufficient conditions for global convergence of the gradient flow to the unique global minimum. For instance, on closed Riemannian manifolds, we prove that the so-called Polyak-Łojasiewicz condition holds in some cases, resulting in exponential convergence. To obtain this result, we use standard estimates from potential theory. Another result is the fact that there is no local minimum apart from the global one. This result is proven using flow interchange techniques.
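
For intuition, here is a minimal particle discretization of such an MMD flow with a (regularized) Coulomb kernel. The kernel regularization, toy target and step size are arbitrary assumptions, and this sketch is not the speaker's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def coulomb_grad(x, y, reg=1e-6):
    """Gradient in x of the (regularized) Coulomb kernel 1/|x - y|."""
    d = x[:, None, :] - y[None, :, :]              # pairwise differences, shape (n, m, dim)
    r3 = (np.sum(d**2, axis=-1) + reg) ** 1.5
    return -d / r3[:, :, None]

# target: samples from the uniform measure on the unit ball in R^3
m, n, dim = 500, 300, 3
Y = rng.normal(size=(m, dim))
Y *= rng.uniform(size=(m, 1)) ** (1 / dim) / np.linalg.norm(Y, axis=1, keepdims=True)

# particles, initialized away from the target
X = 2.0 + 0.1 * rng.normal(size=(n, dim))

tau = 0.05
for _ in range(500):
    # velocity field of the discretized MMD flow: repulsion between particles,
    # attraction towards the target samples
    g = coulomb_grad(X, X).mean(axis=1) - coulomb_grad(X, Y).mean(axis=1)
    X -= tau * g
```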

Xavier Dupuis – Multidimensional screening – Short talk

We consider multidimensional screening problems (or principal-agent problems) with the joint taxation of savings and labor incomes as a motivation.
We provide a saddle-point reformulation on which we can apply the Chambolle-Pock primal-dual algorithm or its nonconvex extension by Valkonen.
This is a joint work with Guillaume Carlier, Jean-Charles Rochet, and John Thanassoulis.
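
For reference, here are generic Chambolle-Pock iterations on a toy Lasso-type saddle point, min_x (1/2)||Ax − b||² + λ||x||₁; the problem data and step sizes are placeholder assumptions, and this is not the screening problem itself.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 40, 100, 0.1

A = rng.normal(size=(m, n))
x_true = np.zeros(n); x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=m)

# Chambolle-Pock for  min_x  (1/2)||Ax - b||^2 + lam * ||x||_1
L = np.linalg.norm(A, 2)             # operator norm of A
tau = sigma = 0.9 / L                # step sizes with tau * sigma * L^2 < 1
x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)

for _ in range(2000):
    # dual step: prox of the conjugate of (1/2)||. - b||^2
    y = (y + sigma * (A @ x_bar) - sigma * b) / (1 + sigma)
    # primal step: soft-thresholding, the prox of lam * ||.||_1
    x_new = x - tau * (A.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0)
    # extrapolation step (theta = 1)
    x_bar = 2 * x_new - x
    x = x_new
```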

Guillaume Chazareix – Entropic Martingale Optimal Transport – Short talk

Martingale Optimal Transport has found extensive applications in various financial contexts, particularly in the calibration of stochastic processes. In finance, we are interested in the prices of financial products and of derivatives, which are themselves financial products whose price depends on the price of one or more other financial products. Their price is uncertain, but the distribution of these prices is modeled by a diffusion process, such as a constant-coefficient diffusion in the case of the famous Black-Scholes model. The calibration problem corresponds to finding the parameters of such a model based on measurable market data: the price of a financial product at an initial time and the prices of derivatives maturing at later times. The selected derivatives are generally common products, such as options. The considered models may have local volatility, thus dependent on time and space. This volatility can then be used to calculate the price of more complex derivatives. Numerical solutions to this problem involve solving a variational problem based on nonlinear partial differential equations. However, these approaches are limited by the complexity of numerically solving these equations. In our work, we propose a discretization approach for the continuous problem and show that the solution of the discrete multi-marginal entropic martingale optimal transport problem thus obtained converges, for a particular choice of cost function, to the solution of the continuous-time martingale optimal transport problem. This relaxation allows the use of algorithms similar to those employed in classical entropic optimal transport. Furthermore, we describe a method for implementing this algorithm on a GPU platform, thus improving the speed of calculations. We present numerical results for examples of multi-marginal martingale transport in an abstract context, as well as for examples in the particular case of the calibration of local stochastic volatility models.

Guillaume Carlier – Displacement smoothness of optimal entropic transport and applications – Long talk

In this talk,  I will discuss some stability properties of entropic potentials with respect to the marginals of the problem and give applications to some evolution equations or systems. Joint work with Lénaïc Chizat and Maxime Laborde. 

Quentin Mérigot – Optimal quantization for Wasserstein and Sliced-Wasserstein – Short talk

This talk is about the optimal (uniform) quantization problem, which consists in minimizing a distance between a uniform probability measure on N atoms (as a function of the positions of the atoms) and a probability density rho. I will give an overview of what is known, mainly in the case where the distance is the Wasserstein distance (p=2), and I will present preliminary results and open questions in the sliced-Wasserstein case.
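
As a toy illustration of the sliced-Wasserstein case, here is a minimal stochastic descent on the atom positions using random one-dimensional projections. The target mixture, number of projections and step size are arbitrary assumptions, not the speaker's results.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim, n_proj, lr, iters = 50, 2, 30, 0.2, 2000

# atoms of the uniform quantizer, initialized at random
X = rng.normal(size=(N, dim))

def sample_rho(n):
    """Samples from the target density rho (here a toy Gaussian mixture)."""
    z = rng.normal(size=(n, dim))
    shift = np.where(rng.uniform(size=(n, 1)) < 0.5, 2.0, -2.0)
    return 0.5 * z + shift

for _ in range(iters):
    Y = sample_rho(N)                        # N fresh samples of rho
    grad = np.zeros_like(X)
    for _ in range(n_proj):
        theta = rng.normal(size=dim); theta /= np.linalg.norm(theta)
        px, py = X @ theta, Y @ theta        # 1D projections
        ix, iy = np.argsort(px), np.argsort(py)
        # in 1D, optimal transport matches sorted points; gradient of the squared cost
        diff = np.empty(N); diff[ix] = px[ix] - py[iy]
        grad += (2.0 / N) * diff[:, None] * theta[None, :]
    X -= lr * grad / n_proj
```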

Maxime Sylvestre – Computing weak optimal transport – Short talk

Weak optimal transport is an extension of optimal transport which takes the following form:
inf_{π ∈ Π(μ,ν)} ∫ c_x(π_x) dμ(x)
where π = μ ⊗ π_x and c_x is a function defined over probability measures. This formulation includes entropic optimal transport and has multiple applications, such as martingale optimal transport and vector quantile regression. Duality attainment results have been obtained in the non-entropic case. We will show that dual attainment holds for costs of the form c_x(p) = c_x^0(p) + 1_{∫ f(y)dp(y)=0} + εH(p | ν), where c_x^0 is a convex function which is Lipschitz (uniformly in x) for the TV norm, f is a vector-valued function and ε ≥ 0. Moreover, we derive regularity (at least L∞) for the dual potentials, which in turn grants quantitative stability results in the marginals and in ε, using a modified version of the block approximation. Finally, the convergence of the numerical scheme is proven and applications such as the Brenier-Strassen interpolation are computed.

Thomas Gallouët – Why leave Mokaplan for ParMA, and where will Luca Nenna go next? – Short talk

In this talk we will explain the creation of ParMA, its research topics, and why this team is obviously the future of Mokaplan. We will then try to guess where Luca will be in the coming years, by discussing the notion of extrapolation in the Monge-Kantorovich space.
