Model recycling with Gaussian processes

Date: June 23, 2022

Author: Hrvoje Stojic

Abstract

Gaussian processes are well known for their modelling flexibility and robustness to overfitting, but also for their prohibitive computational cost in training and prediction. In practice, modern sparse approximations circumvent this problem via inducing inputs and variational inference. However, the utility of inducing points goes beyond scalability. In this talk, I will present a new framework for transfer learning based on the idea of model recycling. Crucially, the use of inducing points and variational posteriors makes it possible to perform two intertwined tasks with pre-trained GP models: i) building meta-models from models and ii) updating models with new data, without revisiting any sample. The framework avoids undesired data centralisation, reduces computational cost and allows the transfer of uncertainty estimates after training. The method exploits the augmentation of high-dimensional integral operators based on the Kullback-Leibler divergence between stochastic processes. This yields efficient lower bounds valid for all pre-trained sparse variational GPs, even those with different complexities and likelihood models. In the talk, I will also show experimental results for two scenarios: one for building meta-models given a collection of already-fitted GPs, and another for continual inference given streaming data.
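The building block the abstract relies on is the sparse variational GP posterior: each pre-trained model is summarised by its inducing inputs Z and a variational Gaussian q(u) over the function values at Z, and predictions are made by integrating the GP conditional against q(u). The following is a minimal NumPy sketch of those standard predictive equations, not the authors' implementation; the RBF kernel, the inducing inputs and the variational parameters here are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel matrix between two input sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_predict(Xs, Z, q_mu, q_sqrt, jitter=1e-6):
    """Predictive mean and covariance of a sparse variational GP.

    The variational posterior over inducing outputs is
    q(u) = N(q_mu, S) with S = q_sqrt @ q_sqrt.T, and the test
    posterior q(f*) = \int p(f* | u) q(u) du is again Gaussian:
        mean = Ksz Kzz^{-1} q_mu
        cov  = Kss - Ksz Kzz^{-1} (Kzz - S) Kzz^{-1} Kzs
    """
    Kzz = rbf(Z, Z) + jitter * np.eye(len(Z))
    Ksz = rbf(Xs, Z)
    Kss = rbf(Xs, Xs)
    A = np.linalg.solve(Kzz, Ksz.T).T          # Ksz @ Kzz^{-1}
    S = q_sqrt @ q_sqrt.T
    mean = A @ q_mu
    cov = Kss - A @ (Kzz - S) @ A.T
    return mean, cov
```

Because only (Z, q_mu, q_sqrt) and the kernel hyperparameters are needed at prediction time, a trained model can be shipped and reused without its data, which is what makes the recycling and meta-model constructions in the talk possible.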


Notes


  • References:

    • P. Moreno-Muñoz, A. Artés-Rodríguez and M. A. Álvarez. Modular Gaussian Processes for Transfer Learning. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

    • P. Moreno-Muñoz, A. Artés-Rodríguez and M. A. Álvarez. Continual Multi-task Gaussian Processes. arXiv:1911.00002, 2019.


Related Seminars

Leveraging replication in active learning

Mickael Binois - INRIA Sophia Antipolis - Méditerranée

Jun 24, 2024

From data to confident decisions

Ilija Bogunovic - University College London

Jun 13, 2024

Preference learning with Gaussian processes

Dario Azzimonti - IDSIA

May 23, 2024

Optimal experiment design in Markov chains

Mojmír Mutný - ETH Zurich

Mar 28, 2024