Model recycling with Gaussian processes

Date: June 23, 2022

Author: Hrvoje Stojic



Abstract

Gaussian processes are well known for their modelling flexibility and robustness to overfitting, but also for their prohibitive computational cost in training and prediction. In practice, modern sparse approximations circumvent this problem via inducing inputs and variational inference. However, the utility of inducing points goes beyond scalability. In this talk, I will present a new framework for transfer learning based on the idea of model recycling. Crucially, the use of inducing points and variational posteriors makes it possible to perform two intertwined tasks with pre-trained GP models: i) building meta-models from models and ii) updating models with new data, without revisiting any sample. The framework avoids undesired data centralisation, reduces computational cost and allows the transfer of uncertainty estimates after training. The method exploits the augmentation of high-dimensional integral operators based on the Kullback-Leibler divergence between stochastic processes. This yields efficient lower bounds for all pre-trained sparse variational GPs, even when they differ in complexity and likelihood model. In the talk, I will also show experimental results for two scenarios: building meta-models from a collection of already-fitted GPs, and continual inference given streaming data.
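The sparse approximation the abstract starts from can be sketched in a few lines. The example below is a minimal, illustrative numpy implementation (not the speaker's code): it computes the DTC/variational predictive mean from M inducing inputs, assuming an RBF kernel; the names `rbf` and `sparse_gp_predict` are hypothetical.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row-stacked inputs a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_predict(X, y, Z, Xs, noise=0.1):
    """Predictive mean of a sparse GP regression with inducing inputs Z.

    Solves an M x M linear system (cost O(N M^2)) instead of the N x N
    system of an exact GP (cost O(N^3))."""
    Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # M x M, with jitter for stability
    Kzx = rbf(Z, X)                           # M x N cross-covariance
    Ksz = rbf(Xs, Z)                          # test-points x M cross-covariance
    A = Kzz + Kzx @ Kzx.T / noise**2          # small system replacing the N x N one
    return Ksz @ np.linalg.solve(A, Kzx @ y) / noise**2
```

For instance, with 100 samples of sin(x) on [-3, 3] and only 15 inducing inputs, the predictive mean closely tracks the exact GP posterior. The talk's contribution builds on exactly this structure: because each pre-trained model is summarised by its inducing inputs and variational posterior, models can later be merged or updated without revisiting the original samples.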


Notes


  • References:

    • P. Moreno-Muñoz, A. Artés-Rodríguez and M. A. Álvarez. Modular Gaussian Processes for Transfer Learning. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

    • P. Moreno-Muñoz, A. Artés-Rodríguez and M. A. Álvarez. Continual Multi-task Gaussian Processes. arXiv:1911.00002, 2019.

  • The speaker's personal website can be found here.


Related Seminars

Linear combinations of latents in generative models: subspaces and beyond

Erik Bodin - University of Cambridge

Mar 13, 2025

Return of the latent space cowboys: rethinking the use of VAEs in Bayesian optimisation over structured spaces

Henry Moss - University of Cambridge, Lancaster University

Jan 21, 2025

Advancing sequential decision-making: efficient querying in clustering and best of both worlds for contextual bandits

Yuko Kuroki - CENTAI Institute

Oct 10, 2024

AI in drug discovery - from model to process, from academic publication to decision-making

Andreas Bender - University of Cambridge

Sep 19, 2024
