Post-Bayes seminar series

Mission statement

Bayesian inference has become a popular framework for decision-making given its consistent and flexible handling of uncertainty. In this regime, however, the statistician is subject to several surprisingly strong assumptions, which are violated in almost all modern machine learning settings. This is in fact well-understood, and has led to a range of methods which aim to retain characteristics of Bayesian uncertainty quantification without the restrictive assumptions that underpin it. Collectively, this body of work is sometimes referred to as “generalised Bayes”. This name, however, does not capture the main appeal of these conceptual frameworks: by unapologetically endorsing posteriors that lie outside the confines of Bayesian epistemology, they are intrinsically post-Bayesian. This is not a minor difference in semantics, but a major shift in outlook.

This seminar series aims to shed light on the post-Bayesian community’s ongoing work, its successes, and the challenges that lie ahead once we dare to go beyond orthodox Bayesian procedures.

Workshop @ UCL on post-Bayesian methods, 15th-16th May 2025!

Registration is open, and we are now accepting submissions! Key dates and further information are available here.

Structure

The seminar will run fortnightly from the end of January onwards. The first iteration of the series will be broken into three ‘chapters’ of 4-6 talks each. Each chapter will focus on a different set of post-Bayesian ideas: generalised Bayes (led by Jeremias Knoblauch), predictive resampling-based ideas like Martingale posteriors (led by Edwin Fong), and PAC-Bayes (led by Pierre Alquier). To make this useful for the entire community, the talks in each chapter will seek to cover the key strands of the literature on that topic.

The seminars take place on the second and fourth Tuesday of each month [1], at either 9am-10am GMT or 2pm-3pm GMT depending on speaker availability, to be announced closer to the date. You can keep up to date by subscribing to our Google calendar.

Zoom link

Join the Zoom meeting here.

Talks will last 45-50 minutes, with 10-15 minutes for discussion. We will record all talks, and upload them to our YouTube channel. Links to these recordings will appear in the schedule following the talk.

All the information related to the seminar series will be distributed through a mailing list. To join that mailing list, click this link.

Schedule

Tell us what you want (what you really, really want)

To let us know what chapters you would like to see in the future, who you would like to see lead them, or who you would like to hear talk, submit a suggestion through this form and we’ll see what we can do!

Chapter 1: generalised Bayes

Introduction and overview: Jeremias Knoblauch (14:00 GMT @ 11.02.2025)

This talk will serve two purposes. In the first half, I will explain why this seminar series exists, and how it is organised. In particular, I will give some of the reasons why research in statistics and machine learning has increasingly ventured beyond vanilla Bayesian procedures, and where this has led us so far, with a focus on generalised Bayes, PAC-Bayes, and resampling-based strategies. I will briefly characterise some of the most fruitful approaches in this area and relate them to the structure of this seminar series. In the second half of the talk, I will zoom in on what will be covered in the first six talks of this series: generalised Bayesian inference. I will cover the basics of these ideas, and explain some of the most important directions in the field. I will link these directions to the seminars that will be given in the subsequent weeks.

Slides YouTube

Theoretical foundations: David Frazier (09:00 GMT @ 25.02.2025)

Post-Bayesian belief updates, such as generalized Bayes and Gibbs posteriors, can deliver beliefs that differ markedly from those obtained via classical Bayesian updating. To ensure that such belief updates are useful in practice, we must therefore understand their behavior from a statistical standpoint. Answering questions such as how reliable the inferences obtained from post-Bayesian beliefs are, or how posterior predictives based on these beliefs perform, is integral to the adoption of these methods into the larger toolkit of machine learning and statistics. In this talk, I give a broad overview of the theoretical landscape for generalized and Gibbs posteriors, including which questions have been answered and which remain open. I also give examples of how these theoretical developments can be leveraged to answer interesting questions regarding the choice of learning rate for predictive accuracy, and the impact on inferences when loss functions must be estimated.
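For readers new to the area, it may help to fix notation for the central object in this chapter (our notation, not necessarily the speaker's): a Gibbs posterior replaces the log-likelihood in Bayes' rule with a generic loss,

\[
\pi_n(\theta) \;\propto\; \exp\Big\{-\omega \sum_{i=1}^{n} \ell(\theta, x_i)\Big\}\, \pi(\theta),
\]

where \(\pi(\theta)\) is the prior, \(\ell\) links the parameter to each observation \(x_i\), and \(\omega > 0\) is the learning rate. Choosing \(\ell\) to be the negative log-likelihood and \(\omega = 1\) recovers the standard Bayesian posterior.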

Slides YouTube

Learning rate selection and the power posterior: Ryan Martin (14:00 GMT @ 11.03.2025)

Bayesian inference generally works well when the model is well-specified. But model mis- or under-specification is the norm rather than the exception, so there’s good reason to consider other posterior constructions, which is precisely the motivation for this seminar series. In this installment, I’ll focus on Gibbs posteriors and, more specifically, on aspects pertaining to Gibbs posteriors’ so-called learning rate parameter. This is a challenging problem in various respects—philosophically, theoretically, and computationally—and I aim to say a bit about all of these aspects in my presentation. Time permitting, I’ll also talk briefly about situations beyond Gibbs posterior inference where a learning rate choice is involved.
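For concreteness (again our notation), the power posterior mentioned in the title tempers the likelihood by a learning rate \(\eta\),

\[
\pi_n^{(\eta)}(\theta) \;\propto\; p(x_{1:n} \mid \theta)^{\eta}\, \pi(\theta),
\]

i.e. the Gibbs posterior from the previous talk with the negative log-likelihood as loss; choosing \(\eta\) (or \(\omega\)) is exactly the selection problem the talk addresses.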

Slides YouTube

Prediction-centric approaches: Chris Oates (09:00 GMT @ 25.03.2025)

Generalised Bayesian methodologies have been proposed for inference with misspecified models, but these are typically associated with vanishing parameter uncertainty as more data are observed. In the deterministic modelling context, this can have the undesirable consequence that predictions become certain, while being incorrect. Taking this observation as a starting point, we will critically review some prediction-centric alternatives to generalised Bayes.

Slides YouTube

Coarsened Bayes and applications to biomedical sciences: David Dunson (15:15 GMT @ 09.04.2025)

The standard approach to Bayesian inference is based on the assumption that the distribution of the data belongs to the chosen model class. However, even a small violation of this assumption can have a large impact on the outcome of a Bayesian procedure. We introduce a simple, coherent approach to Bayesian inference that improves robustness to perturbations from the model: rather than condition on the data exactly, one conditions on a neighborhood of the empirical distribution. When using neighborhoods based on relative entropy estimates, the resulting “coarsened” posterior can be approximated by simply tempering the likelihood—that is, by raising it to a fractional power. Thus, inference is often easily implemented with standard methods, and one can even obtain analytical solutions when using conjugate priors. Some theoretical properties are derived, and we illustrate the approach with real and simulated data, using mixture models, autoregressive models of unknown order, and variable selection in linear regression.
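As a rough sketch of the tempering statement in the abstract (our rendering of the coarsened-Bayes construction of Miller and Dunson; the exact constants depend on the prior placed on the neighbourhood radius), conditioning on a relative-entropy neighbourhood of the empirical distribution yields, approximately, a power likelihood,

\[
\pi_{\mathrm{coarse}}(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\, \Big(\prod_{i=1}^{n} p(x_i \mid \theta)\Big)^{\zeta_n} \quad \text{(approximately)},
\]

where the fractional power \(0 < \zeta_n < 1\) is determined by the prior on the radius; with an exponential prior of rate \(\alpha\), it is roughly \(\zeta_n = \alpha/(\alpha + n)\).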

Slides YouTube

From generalised Bayes to Martingale posteriors: Chris Holmes (13:00 GMT @ 22.04.2025)

We review the historical motivation behind the development of general Bayesian updating, leading to Martingale posteriors, and the central role played by the Bayesian bootstrap. The talk will focus on fundamental concepts in uncertainty quantification rather than mathematical results, as well as the notion of targeted learning for estimands.
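Since the Bayesian bootstrap plays a central role here and in the next chapter, the following is a minimal illustrative sketch (ours, not the speaker's) of how it yields a posterior over an estimand by Dirichlet re-weighting of the observed data:

```python
# Illustrative sketch of the Bayesian bootstrap (Rubin, 1981); toy example,
# not code from the talk. The estimand here is simply the mean.
import numpy as np

def bayesian_bootstrap_mean(x, n_draws=5000, rng=None):
    """Draw from the Bayesian-bootstrap posterior of the mean of x.

    Each draw re-weights the observations with Dirichlet(1, ..., 1)
    weights and evaluates the estimand (a weighted mean) under them.
    """
    rng = rng or np.random.default_rng()
    weights = rng.dirichlet(np.ones(len(x)), size=n_draws)  # shape (n_draws, n)
    return weights @ x                                       # weighted means

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100)        # synthetic observed data
draws = bayesian_bootstrap_mean(x, rng=rng)
print("posterior mean:", draws.mean())
print("95% credible interval:", np.quantile(draws, [0.025, 0.975]))
```

The same recipe applies to any estimand that can be written as a functional of a weighted empirical distribution, which is one sense in which learning can be targeted at a particular estimand.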

Chapter 2: Martingale posteriors

TBD.

Chapter 3: PAC-Bayes

TBD.

Organisers


[1] With some exceptions, which will be announced well ahead of schedule and through the mailing list.