On information gain and regret bounds in Gaussian process bandits

Date: April 14, 2021
Authors: Sattar Vakili, Kia Khezeli, Victor Picheny

Consider the sequential optimization of an expensive-to-evaluate and possibly non-convex objective function 𝑓 from noisy feedback; this can be cast as a continuum-armed bandit problem. Upper bounds on the regret of several learning algorithms (GP-UCB, GP-TS, and their variants) are known under both a Bayesian setting (when 𝑓 is a sample from a Gaussian process (GP)) and a frequentist setting (when 𝑓 lives in a reproducing kernel Hilbert space).
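To make the optimistic algorithms mentioned above concrete, the following is a minimal GP-UCB sketch in Python. The RBF kernel, the confidence schedule 𝛽_t, the toy objective, and the discretised domain are all illustrative assumptions for this sketch, not the paper's setup.

```python
# Minimal GP-UCB sketch on a discretised 1-D domain. All choices below
# (kernel, beta schedule, test function) are illustrative assumptions.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel k(x, x') = exp(-|x - x'|^2 / (2 l^2)).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.01):
    # Standard GP regression posterior mean and variance at x_test.
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    k_star = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = k_star.T @ alpha
    v = np.linalg.solve(L, k_star)
    var = rbf_kernel(x_test, x_test).diagonal() - np.sum(v ** 2, axis=0)
    return mu, np.maximum(var, 1e-12)

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)   # toy unknown objective
domain = np.linspace(0.0, 1.0, 200)                  # discretised arm set
noise_sd = 0.1

x_obs, y_obs = [], []
for t in range(1, 31):
    if not x_obs:
        x_next = rng.choice(domain)                  # initial random query
    else:
        mu, var = gp_posterior(np.array(x_obs), np.array(y_obs), domain,
                               noise_var=noise_sd ** 2)
        beta_t = 2.0 * np.log(len(domain) * t ** 2)  # illustrative schedule
        ucb = mu + np.sqrt(beta_t * var)             # optimistic acquisition
        x_next = domain[np.argmax(ucb)]
    x_obs.append(x_next)
    y_obs.append(f(x_next) + noise_sd * rng.standard_normal())

print(f"best queried point: x = {x_obs[np.argmax(y_obs)]:.3f}")
```

At each round the algorithm queries the point maximising an optimistic upper confidence bound 𝜇_t(x) + √𝛽_t 𝜎_t(x) built from the GP posterior; GP-TS instead samples a function from the posterior and maximises that sample.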

These regret bounds often rely on the maximal information gain 𝛾𝑇 between 𝑇 observations and the underlying GP (surrogate) model. We provide general bounds on 𝛾𝑇 based on the decay rate of the eigenvalues of the GP kernel; specialising them to commonly used kernels improves the existing bounds on 𝛾𝑇, and consequently the regret bounds that rely on 𝛾𝑇, in numerous settings. For the Matérn family of kernels, where lower bounds on 𝛾𝑇, and on regret under the frequentist setting, are known, our results close a gap between the upper and lower bounds that was polynomial in 𝑇 (up to factors logarithmic in 𝑇).
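For reference, the LaTeX below records the standard definition of the maximal information gain used in this literature, together with the order of the Matérn rate in the form established in the paper. Here A is a set of 𝑇 query points in the domain 𝒳, K_A its kernel matrix, and σ² the observation-noise variance.

```latex
% Maximal information gain between T noisy observations y_A and the
% function values f_A (standard definition in this literature):
\gamma_T \;=\; \max_{A \subset \mathcal{X},\, |A| = T} I(y_A; f_A)
         \;=\; \max_{A \subset \mathcal{X},\, |A| = T}
               \tfrac{1}{2}\,\log\det\!\left(I_T + \sigma^{-2} K_A\right)

% Order of the bound for the Matern-nu kernel on a d-dimensional
% domain, in the form established in the paper:
\gamma_T \;=\; \mathcal{O}\!\left(T^{\frac{d}{2\nu + d}}
               (\log T)^{\frac{2\nu}{2\nu + d}}\right)
```

Since frequentist regret bounds typically scale as 𝑂(√(𝑇𝛾𝑇)) up to logarithmic factors, tightening 𝛾𝑇 directly tightens the resulting regret guarantees.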

