Yorgos Deligiannidis

University of Oxford
deligian@stats.ox.ac.uk
I’m a Professor of Statistics in the Department of Statistics at the University of Oxford.
I work at the intersection of probability and statistics, analysing random processes and objects, especially those arising from algorithms used in computational statistics and machine learning. I have worked extensively on the theory and methodology of sampling methods, especially Markov chain Monte Carlo, and I have also worked on random walks on lattices and groups. At the moment I am particularly interested in the interplay between sampling, optimal transport and machine learning.
Recent talks
- COLT 2025, Linear Convergence of Diffusion Models Under the Manifold Hypothesis, July 2025.
- CRISM 2.0, Warwick, Theory for denoising diffusion models (keynote), May 2025.
- Cambridge Statslab, Uniform quantitative stability for Sinkhorn, April 2024.
- Warwick Stats Seminar, Uniform quantitative stability for Sinkhorn, June 2023.
- Athens Probability Colloquium, March 2023.
News
Sep 19, 2025 | Three papers have been accepted at NeurIPS 2025.
Jul 02, 2025 | New preprint out with Alex Goyal and Nikolas Kantas (Imperial) on the convergence of Gibbs samplers beyond the log-concave setting. In particular, we establish bounds on the conductance of the systematic-scan and random-scan Gibbs samplers when the target distribution satisfies a Poincaré or log-Sobolev inequality and possesses sufficiently regular conditional distributions. You can find it here. (A toy sketch of the two scan orders appears after the news items.)
Jul 01, 2025 | At COLT 2025 to present our work with Peter Potapchik and Iskander Azangulov on the Linear Convergence of Diffusion Models Under the Manifold Hypothesis.
Oct 17, 2024 | Our preprint Linear Convergence of Diffusion Models Under the Manifold Hypothesis is out. For ImageNet, our bounds suggest that ~100 sampling steps (scaling with the intrinsic dimension) suffice for effective sampling, compared to existing state-of-the-art bounds, which require ~150k steps (scaling with the ambient dimension); see the second sketch after the news items for where the step count enters.
Sep 30, 2024 | New preprint out with I. Azangulov and J. Rousseau on the convergence of denoising diffusion models under the manifold hypothesis. The paper can be found here. We show that diffusion models achieve rates for score learning and sampling (in KL) that are independent of the ambient dimension, i.e. they adapt to the underlying manifold structure.
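To make the Gibbs-sampler entry above concrete, here is a minimal toy sketch (my own illustration, not code from the preprint) of the two scan orders for a bivariate Gaussian target, a simple log-concave case where both full conditionals are exact Gaussian draws:

```python
import numpy as np

# Toy sketch, not code from the paper: systematic-scan vs random-scan
# Gibbs for a bivariate Gaussian with correlation rho, where both full
# conditionals are available in closed form:
#   x1 | x2 ~ N(rho * x2, 1 - rho^2),   x2 | x1 ~ N(rho * x1, 1 - rho^2)

rng = np.random.default_rng(0)
rho = 0.9

def draw_conditional(other):
    # Exact draw from the conditional of one coordinate given the other.
    return rho * other + np.sqrt(1.0 - rho**2) * rng.standard_normal()

def systematic_scan(n_iters):
    # Update the coordinates in the fixed order 1, 2, 1, 2, ...
    x, chain = np.zeros(2), np.empty((n_iters, 2))
    for t in range(n_iters):
        x[0] = draw_conditional(x[1])
        x[1] = draw_conditional(x[0])
        chain[t] = x
    return chain

def random_scan(n_iters):
    # At each step, pick one coordinate uniformly at random and update it.
    x, chain = np.zeros(2), np.empty((n_iters, 2))
    for t in range(n_iters):
        i = rng.integers(2)
        x[i] = draw_conditional(x[1 - i])
        chain[t] = x
    return chain

# Both chains target the same distribution; the empirical correlation
# should approach rho under either scan order.
print(np.corrcoef(systematic_scan(50_000).T)[0, 1])
print(np.corrcoef(random_scan(50_000).T)[0, 1])
```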
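For the two diffusion entries, here is a minimal sketch of where the number of sampling steps enters; this is my own illustration under stated assumptions, not code from the papers, with a stand-in `score` function in place of a learned score network:

```python
import numpy as np

# Toy sketch: Euler--Maruyama discretization of the reverse-time SDE for
# a variance-preserving (VP) diffusion. `score` stands in for a learned
# score s_theta(x, t) approximating grad log p_t(x); to keep the sketch
# self-contained we use the exact score of a standard Gaussian target,
# for which p_t = N(0, I) at every t under the VP forward process.

rng = np.random.default_rng(0)

def score(x, t):
    # Exact score of N(0, I): grad log p_t(x) = -x.
    return -x

def reverse_sampler(n_steps, dim, T=1.0, beta=2.0):
    # Fewer steps means a coarser grid and fewer score evaluations; the
    # Oct 2024 bounds let n_steps scale with the intrinsic dimension of
    # the data rather than the ambient dimension `dim`.
    dt = T / n_steps
    x = rng.standard_normal(dim)  # initialize from the prior N(0, I)
    for k in range(n_steps):
        t = T - k * dt
        # Reverse-time drift f(x, t) - g(t)^2 * score(x, t), with
        # f(x, t) = -0.5 * beta * x and g(t) = sqrt(beta).
        drift = -0.5 * beta * x - beta * score(x, t)
        x = x - drift * dt + np.sqrt(beta * dt) * rng.standard_normal(dim)
    return x

sample = reverse_sampler(n_steps=100, dim=1000)
print(sample.std())  # close to 1 for this Gaussian toy target
```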