Unbiased active learning and testing

Date: September 16, 2022

Author: Hrvoje Stojic



Abstract

Active learning is a powerful tool when labelling data is expensive, but it introduces a bias because the training data no longer follows the population distribution. We can, in fact, remove this bias using corrective weights based on importance sampling. This has two main consequences: first, we show that the bias can actually be helpful for active learning, especially with overparameterized models like neural networks; second, the same weighting technique enables active testing, a label-efficient approach to model evaluation when data is limited.
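The bias correction described in the abstract can be illustrated with a minimal importance-sampling sketch. This is a hypothetical toy example, not the speaker's actual method: we assume a pool of N points with known losses, actively sample points in proportion to their loss (mimicking a biased acquisition strategy), and then reweight each sampled loss by 1 / (N * q_i) to recover an unbiased estimate of the population risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of N points with per-point losses (for illustration only).
N = 10_000
losses = rng.gamma(shape=2.0, scale=1.0, size=N)
true_risk = losses.mean()

# An "acquisition" distribution q that prefers high-loss points,
# mimicking the biased selection of an active learner.
q = losses / losses.sum()

# Actively acquire M labels according to q.
M = 200
idx = rng.choice(N, size=M, replace=True, p=q)

# The naive average over actively chosen points is biased upwards,
# since high-loss points are over-represented.
naive = losses[idx].mean()

# Importance-sampling correction: weight each sampled loss by 1 / (N * q_i).
# In expectation, sum_i q_i * loss_i / (N * q_i) = (1/N) * sum_i loss_i,
# so the weighted average is an unbiased estimate of the population risk.
weights = 1.0 / (N * q[idx])
corrected = np.mean(weights * losses[idx])

print(f"true risk:      {true_risk:.3f}")
print(f"naive estimate: {naive:.3f}")
print(f"corrected:      {corrected:.3f}")
```

Note that sampling with q proportional to the loss is the variance-minimizing proposal for estimating the mean loss, so in this toy setup the corrected estimate matches the true risk almost exactly; with a less informed acquisition distribution the corrected estimate would remain unbiased but noisier.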



Related Seminars

Linear combinations of latents in generative models: subspaces and beyond

Erik Bodin - University of Cambridge

Mar 13, 2025

Return of the latent space cowboys: rethinking the use of VAEs in Bayesian optimisation over structured spaces

Henry Moss - University of Cambridge, Lancaster University

Jan 21, 2025

Advancing sequential decision-making: efficient querying in clustering and best of both worlds for contextual bandits

Yuko Kuroki - CENTAI Institute

Oct 10, 2024

AI in drug discovery - from model to process, from academic publication to decision-making

Andreas Bender - University of Cambridge

Sep 19, 2024
