Probabilistic methods for increased robustness in machine learning

Date: July 15, 2021

Author: Hrvoje Stojic



Abstract

Most machine learning methods are brittle: their performance degrades catastrophically when the input data distribution changes. In this talk, I will describe two probabilistic approaches to address this problem. First, to obtain methods that degrade gracefully, I will focus on scaling accurate approximate Bayesian inference to large neural networks. I will show that it is enough to perform inference over a small subset of the model weights to obtain accurate predictive posteriors. The resulting method, called subnetwork inference, yields substantial improvements when making predictions under distribution shift. Second, to avoid performance degradation in specific cases, I will describe Invariant Causal Representation Learning (iCaRL), an approach that enables accurate out-of-distribution generalization when training data has been collected under different conditions (environments). iCaRL achieves generalization guarantees by assuming that the latent variables encoding the inputs follow a general exponential family distribution when conditioned on the outputs and the training environment. Experiments on both synthetic and real-world datasets show that iCaRL substantially outperforms existing baselines.
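The core idea behind subnetwork inference, as described in the abstract, is to fit the full model and then perform Bayesian inference over only a small subset of the weights, keeping the rest fixed at their point estimates. The following is a minimal toy sketch of that idea on a linear model using NumPy; the selection heuristic (largest marginal variance under a diagonal Gauss-Newton approximation) and all variable names are illustrative assumptions, not the exact algorithm presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "network": y = x @ w, with a MAP (ridge) point estimate.
X = rng.normal(size=(100, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=100)

lam = 1e-2  # prior precision / ridge strength
w_map = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

# Select a subnetwork: keep the k weights with the largest marginal
# variance under a diagonal Gauss-Newton (Laplace) approximation.
ggn_diag = (X ** 2).sum(axis=0) + lam
marginal_var = 1.0 / ggn_diag
k = 3
subnet = np.argsort(marginal_var)[-k:]

# Laplace posterior over the subnetwork only; all other weights stay at MAP.
X_sub = X[:, subnet]
H_sub = X_sub.T @ X_sub + lam * np.eye(k)  # sub-Hessian (Gauss-Newton)
cov_sub = np.linalg.inv(H_sub)             # posterior covariance of subnet

def predict_samples(x, n_samples=50):
    """Sample subnetwork weights from the Laplace posterior and predict."""
    w_samples = rng.multivariate_normal(w_map[subnet], cov_sub, size=n_samples)
    w_full = np.tile(w_map, (n_samples, 1))
    w_full[:, subnet] = w_samples
    return w_full @ x

preds = predict_samples(X[0])
print(preds.mean(), preds.std())  # predictive mean and uncertainty
```

Even in this toy setting, the predictive distribution comes from a k-dimensional Gaussian rather than a full 10-dimensional one, which is the computational saving the approach scales up to large neural networks.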


Notes

  • Dr. José Miguel Hernández Lobato is a University Lecturer in Machine Learning at the Department of Engineering at the University of Cambridge, UK.

Related Seminars

Linear combinations of latents in generative models: subspaces and beyond

Erik Bodin - University of Cambridge

Mar 13, 2025

Return of the latent space cowboys: rethinking the use of VAEs in Bayesian optimisation over structured spaces

Henry Moss - University of Cambridge, Lancaster University

Jan 21, 2025

Advancing sequential decision-making: efficient querying in clustering and best of both worlds for contextual bandits

Yuko Kuroki - CENTAI Institute

Oct 10, 2024

AI in drug discovery - from model to process, from academic publication to decision-making

Andreas Bender - University of Cambridge

Sep 19, 2024
