Knowledge gradient methods for Bayesian optimization

Date:

8 October 2022

Author:

Hrvoje Stojic



Abstract

The knowledge gradient (KG) is a class of Bayesian optimization acquisition functions that unlocks significant value and flexibility. While expected improvement is effective for standard Bayesian optimization, its guidance becomes questionable when we can measure quantities that do not directly create improvement but are nonetheless highly informative. Informative quantities undervalued by expected improvement include gradients, fast but biased low-fidelity approximations of the objective function, and context variables that are unknown until measured. We provide an introduction to the KG acquisition function, focusing on how it can be used in these more exotic Bayesian optimization settings. We then show how KG can be implemented easily to leverage GPU resources within BoTorch using the one-shot KG approach.
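The talk itself uses BoTorch's one-shot KG implementation; as a library-free illustration of the underlying idea, the sketch below computes a simple discrete, Monte Carlo knowledge gradient: the expected increase in the maximum posterior mean of a GP after one fantasy observation. All helper names (`rbf_kernel`, `gp_posterior`, `discrete_kg`), the kernel hyperparameters, and the toy problem are illustrative assumptions, not material from the seminar.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between 1-d input arrays a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, grid, noise=1e-4):
    """GP posterior mean and covariance evaluated on a discrete grid."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, grid)
    sol = np.linalg.solve(K, Ks)  # K^{-1} K_*
    mu = sol.T @ y
    cov = rbf_kernel(grid, grid) - Ks.T @ sol
    return mu, cov

def discrete_kg(mu, cov, i, noise=1e-4, num_fantasies=128, seed=0):
    """Monte Carlo KG of measuring grid point i: the expected gain in the
    maximum posterior mean after one fantasy observation at that point."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(cov[i, i] + noise)          # predictive std of the observation
    sigma_tilde = cov[:, i] / s             # mean shift per standard-normal draw
    z = rng.standard_normal(num_fantasies)
    fantasy_means = mu[:, None] + sigma_tilde[:, None] * z[None, :]
    return fantasy_means.max(axis=0).mean() - mu.max()

# Toy 1-d problem: three observations, KG scored over a candidate grid.
X = np.array([0.1, 0.5, 0.9])
y = np.sin(3.0 * X)
grid = np.linspace(0.0, 1.0, 51)
mu, cov = gp_posterior(X, y, grid)
kg = np.array([discrete_kg(mu, cov, i) for i in range(len(grid))])
x_next = grid[kg.argmax()]  # the point a KG policy would measure next
```

Note how KG rewards points whose observation would shift the posterior mean elsewhere on the grid, which is what lets it value gradients, low-fidelity measurements, and other quantities that never directly "improve" the incumbent. BoTorch's one-shot variant avoids this nested Monte Carlo loop by jointly optimizing the candidate and the fantasy solutions.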


Notes



Related seminars

Leveraging replication in active learning

Mickael Binois - INRIA Sophia Antipolis - Méditerranée

2024/06/24

From data to confident decisions

Ilija Bogunovic - University College London

2024/06/13

Preference learning with Gaussian processes

Dario Azzimonti - IDSIA

2024/05/23

Optimal experiment design in Markov chains

Mojmír Mutný - ETH Zurich

2024/03/28
