Probabilistic methods for increased robustness in machine learning

Date:

July 15, 2021

Author:

Hrvoje Stojic



Abstract

Most machine learning methods are brittle: their performance degrades catastrophically when the input data distribution changes. In this talk, I will describe two probabilistic approaches to this problem. First, to obtain methods that degrade gracefully, I will focus on scaling accurate approximate Bayesian inference to large neural networks. I will show that performing inference over only a small subset of the model weights is enough to obtain accurate predictive posteriors. The resulting method, called subnetwork inference, yields substantial improvements when making predictions under distribution shift. Second, to avoid performance degradation in specific cases, I will describe Invariant Causal Representation Learning (iCaRL), an approach that enables accurate out-of-distribution generalization when training data are collected under different conditions (environments). iCaRL achieves generalization guarantees by assuming that the latent variables encoding the inputs follow a general exponential family distribution when conditioned on the outputs and the training environment. Experiments on both synthetic and real-world datasets show that iCaRL produces substantial improvements over existing baselines.
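The subnetwork idea can be illustrated with a toy sketch. This is not the talk's actual method, which applies a linearized Laplace approximation to a subnetwork of a deep neural network; here, as an illustrative assumption, a Bayesian linear regression posterior is computed over only the S highest-variance weights, with all remaining weights fixed at their MAP values. Variable names and the variance-based selection rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear regression with many features, few of them relevant
n, d = 200, 50
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:5] = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=n)

# MAP estimate (ridge regression) with noise variance sigma2, prior precision alpha
sigma2, alpha = 0.01, 1.0
A = X.T @ X / sigma2 + alpha * np.eye(d)      # full posterior precision matrix
w_map = np.linalg.solve(A, X.T @ y / sigma2)

# Subnetwork selection: keep the S weights with largest marginal posterior
# variance (diagonal of the full inverse precision), a cheap importance proxy
S = 5
marg_var = np.diag(np.linalg.inv(A))
subnet = np.argsort(marg_var)[-S:]

# Posterior covariance over the subnetwork only; the rest stays at the MAP
A_sub = A[np.ix_(subnet, subnet)]
cov_sub = np.linalg.inv(A_sub)

def predict_samples(X_new, n_samples=100):
    """Sample predictions: subnet weights drawn from the Gaussian posterior,
    all other weights held fixed at their MAP values."""
    w_samp = np.tile(w_map, (n_samples, 1))
    w_samp[:, subnet] = rng.multivariate_normal(w_map[subnet], cov_sub,
                                                size=n_samples)
    return X_new @ w_samp.T                   # shape (n_new, n_samples)

preds = predict_samples(X[:10])
mean, std = preds.mean(axis=1), preds.std(axis=1)
```

The sketch captures the core trade-off: full-covariance inference over S weights costs O(S^3) instead of O(d^3), while the predictive mean stays anchored at the MAP solution.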


Notes

  • Dr. José Miguel Hernández-Lobato is a University Lecturer in Machine Learning at the Department of Engineering at the University of Cambridge, UK. His website can be found here.


Related Seminars

Leveraging replication in active learning

Mickael Binois - INRIA Sophia Antipolis - Méditerranée

Jun 24, 2024

From data to confident decisions

Ilija Bogunovic - University College London

Jun 13, 2024

Preference learning with Gaussian processes

Dario Azzimonti - IDSIA

May 23, 2024

Optimal experiment design in Markov chains

Mojmír Mutný - ETH Zurich

Mar 28, 2024
