Yorgos Deligiannidis


University of Oxford

deligian@stats.ox.ac.uk

I’m a Professor of Statistics in the Department of Statistics at the University of Oxford.

I work at the intersection of probability and statistics, analysing random processes and objects, especially those arising from algorithms used in computational statistics and machine learning. I have worked extensively on the theory and methodology of sampling methods, especially Markov chain Monte Carlo. I have also worked on random walks on lattices and groups. At the moment I am particularly interested in the interplay between sampling, optimal transport and machine learning.


Recent talks
  • COLT 2025, Linear Convergence of Diffusion Models Under the Manifold Hypothesis, July 2025.
  • CRISM 2.0, Warwick, Theory for Denoising Diffusion Models (keynote), May 2025.
  • Cambridge Statslab, Uniform Quantitative Stability for Sinkhorn, April 2024.
  • Warwick Stats Seminar, Uniform Quantitative Stability for Sinkhorn, June 2023.
  • Athens Probability Colloquium, March 2023.

News

Sep 19, 2025 Three papers have been accepted at NeurIPS 2025:
  1. Rao-Blackwellised Reparameterisation Gradients
    Kevin Lam, Thang Bui, George Deligiannidis, and Yee Whye Teh
  2. Schrödinger Bridge Matching for Tree-Structured Costs and Entropic Wasserstein Barycentres
    Sam Howard, Peter Potaptchik, and George Deligiannidis
  3. Diffusion Models and the Manifold Hypothesis: Log-Domain Smoothing is Geometry Adaptive
    Tyler Farghly, Peter Potaptchik, Samuel Howard, George Deligiannidis, and Jakiw Pidstrigach

Jul 02, 2025 New preprint out with Alex Goyal and Nikolas Kantas (Imperial) on the convergence of Gibbs samplers beyond the log-concave setting. In particular, we establish bounds on the conductance of the systematic-scan and random-scan Gibbs samplers when the target distribution satisfies a Poincaré or log-Sobolev inequality and has sufficiently regular conditional distributions. You can find it here.
Jul 01, 2025 At COLT 2025, presenting our work with Peter Potaptchik and Iskander Azangulov on the Linear Convergence of Diffusion Models Under the Manifold Hypothesis.
Oct 17, 2024 Our preprint Linear Convergence of Diffusion Models Under the Manifold Hypothesis is out. In the context of ImageNet, our work suggests that roughly 100 steps (on the order of the intrinsic dimension) suffice for effective sampling, compared to existing state-of-the-art bounds, which recommend roughly 150k steps (on the order of the ambient dimension).
Sep 30, 2024 New preprint out with I. Azangulov and J. Rousseau on the convergence of denoising diffusion models under the manifold hypothesis. The paper can be found here. We show that diffusion models achieve rates for score learning and sampling (in KL) that are independent of the ambient dimension, demonstrating that they adapt to the underlying manifold structure.

Selected publications

  1. On importance sampling and independent Metropolis-Hastings with an unbounded weight function
    George Deligiannidis, Pierre E. Jacob, El Mahdi Khribch, and Guanyang Wang
    2024
  2. Conditioning Diffusions Using Malliavin Calculus
    Jakiw Pidstrigach, Elizabeth Baker, Carles Domingo-Enrich, George Deligiannidis, and Nikolas Nüsken
    In ICML, 2025
  3. Convergence of Diffusion Models Under the Manifold Hypothesis in High-Dimensions
    Iskander Azangulov, George Deligiannidis, and Judith Rousseau
    2024