Bayesian inference has become a popular framework for decision-making given its consistent and flexible handling of uncertainty. In this regime, however, the statistician is subject to several surprisingly strong assumptions, which are violated in almost all modern machine learning settings. This is in fact well-understood, and has led to a range of methods which aim to retain characteristics of Bayesian uncertainty quantification without the restrictive assumptions that underpin it. Collectively, this body of work is sometimes referred to as “generalised Bayes”. This name, however, does not capture the main appeal of these conceptual frameworks: by unapologetically endorsing posteriors that lie outside the confines of Bayesian epistemology, they are intrinsically post-Bayesian. This is not a minor difference in semantics, but a major shift in outlook.
This seminar series aims to shed light on the post-Bayesian community’s ongoing work, its successes, and the challenges that lie ahead once we dare to go beyond orthodox Bayesian procedures.
The seminar will run fortnightly from the end of January onwards. The first iteration of the series will be broken into three ‘chapters’ of 4-6 talks each. Each chapter will focus on a different set of post-Bayesian ideas: generalised Bayes (led by Jeremias Knoblauch), predictive resampling-based ideas such as Martingale posteriors (led by Edwin Fong), and PAC-Bayes (led by Pierre Alquier). To make this useful for the entire community, the talks in each chapter will aim to cover the key strands of the literature falling under that chapter.
The seminars take place on the second and fourth Tuesday of each month, at either 9am-10am GMT or 2pm-3pm GMT depending on speaker availability; the time for each talk will be announced closer to the date. You can keep up to date by subscribing to our Google calendar.
Zoom link
Join the Zoom meeting here.
Talks will last 45-50 minutes, with 10-15 minutes for discussion. We will record all talks, and upload them to our YouTube channel. Links to these recordings will appear in the schedule following the talk.
All the information related to the seminar series will be distributed through a mailing list. To join that mailing list, click this link.
Tell us what you want (what you really, really want)
To let us know what chapters you would like to see in the future, who you would like to see lead them, or who you would like to hear talk, submit a suggestion through this form and we’ll see what we can do!
This talk will serve two purposes. In the first half, I will explain why this seminar series exists and how it is organised. In particular, I will give some of the reasons why research in statistics and machine learning has increasingly ventured beyond vanilla Bayesian procedures, and where this has led us so far, focusing particularly on generalised Bayes, PAC-Bayes, and resampling-based strategies. I will briefly characterise some of the most fruitful approaches in this area and relate them to the structure of this seminar series. In the second half of the talk, I will zoom in on what will be covered in the first six talks of this series: generalised Bayesian inference. I will cover the basics of these ideas, and explain some of the most important directions in the field. I will link these directions to the seminars that will be given in the subsequent weeks.
Post-Bayesian belief updates, such as generalized Bayes and Gibbs posteriors, can deliver beliefs that differ markedly from those obtained via classical Bayesian updating. To ensure that such belief updates are useful in practice, we must therefore understand their behavior from a statistical standpoint. Answering questions such as how reliable the inferences obtained from post-Bayesian beliefs are, or how posterior predictives based on these beliefs perform, is integral to the adoption of these methods into the larger toolkit of machine learning and statistics. In this talk, I give a broad overview of the theoretical landscape for generalized and Gibbs posteriors, including which questions have been answered and which remain open. I also give examples of how these theoretical developments can be leveraged to answer interesting questions regarding the choice of learning rate for predictive accuracy, and the impact on inferences when loss functions must be estimated.
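For readers who have not yet met these objects, here is a minimal sketch of the quantity both abstracts above refer to; the notation is ours and purely illustrative. Given a prior $\pi(\theta)$, observations $x_1, \dots, x_n$, a loss function $\ell$, and a learning rate $\eta > 0$, the generalised (Gibbs) posterior is typically written as

\[
\pi_\eta(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,\exp\!\left(-\eta \sum_{i=1}^{n} \ell(\theta, x_i)\right).
\]

Taking $\ell$ to be the negative log-likelihood and $\eta = 1$ recovers the standard Bayesian posterior, which is one reason the choice of learning rate mentioned above plays such a central role.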
TBD.
TBD.
TBD.
TBD.
TBD.