Seminar: Peter Frazier - Cornell University & Uber

Date: October 8, 2020
Author: Hrvoje Stojic

Knowledge-Gradient Methods for Grey-Box Bayesian Optimization


The knowledge gradient (KG) is a class of Bayesian optimization acquisition functions that unlocks significant value and flexibility. While expected improvement is effective for standard Bayesian optimization, its guidance becomes questionable when we can measure quantities that do not directly create improvement but are nonetheless highly informative. Such quantities, undervalued by expected improvement, include gradients, fast but biased low-fidelity approximations of the objective function, and context variables that are unknown until measured. We provide an introduction to the KG acquisition function, focusing on how it can be used in these more exotic Bayesian optimization settings. We then show how KG can be implemented easily within BoTorch, leveraging GPU resources, via the one-shot KG approach.
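To make the idea concrete, the following is a minimal, hypothetical sketch of the KG value over a discrete candidate set, estimated with Monte Carlo "fantasy" observations: KG at a point is the expected gain in the best posterior mean after one more measurement there. All names, the toy posterior, and the simplified one-point update are illustrative assumptions; this is not the one-shot KG implementation in BoTorch discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy posterior: mean and standard deviation of the objective f
# at a discrete set of four candidate points (purely illustrative values).
mu = np.array([0.2, 0.5, 0.4, 0.1])
sigma = np.array([0.3, 0.1, 0.4, 0.2])

def knowledge_gradient(i, n_fantasies=10_000):
    """KG at candidate i: expected improvement in the maximum posterior
    mean after one fantasy (simulated) observation at i."""
    best_now = mu.max()
    # Draw fantasy observations y ~ N(mu[i], sigma[i]^2) (noise-free here).
    y = rng.normal(mu[i], sigma[i], size=n_fantasies)
    # Crude posterior update: only point i's mean moves to the observed y.
    # A real GP update would also shift correlated points.
    best_others = np.delete(mu, i).max()
    best_after = np.maximum(y, best_others)
    return best_after.mean() - best_now

kg = np.array([knowledge_gradient(i) for i in range(len(mu))])
print(kg.round(3))
```

By Jensen's inequality the KG value is nonnegative everywhere, and in this toy example it is largest at the point with high uncertainty near the incumbent: measuring there is most likely to change which point we believe is best. This is exactly the kind of value-of-information reasoning that expected improvement does not capture for indirect measurements.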



Related articles

Seminar: Andreas Krause - ETH Zurich

Seminar: Philipp Hennig - University of Tuebingen

Seminar: Alexandra Gessner - University of Tuebingen

Seminar: Ciara Pike-Burke - Imperial College London

Seminar: Carl Henrik Ek - University of Cambridge