Seminar: Laurence Aitchison - University of Bristol

Date: March 4, 2021
Author: Hrvoje Stojic

Deep Kernel Processes


Neural networks have taught us that effective performance on difficult tasks requires deep models with flexible top-layer representations. However, inference over intermediate-layer features in DGPs, or over weights in Bayesian NNs, is very difficult, with current approaches being highly approximate. Instead, we note that DGPs can be written entirely in terms of the positive semi-definite Gram matrices formed by taking the inner product of features with themselves: these Gram matrices are Wishart distributed, and the next-layer kernel can often be written directly in terms of the Gram matrix. Inference over Gram matrices is much more tractable than inference over weights or features, with the joint posterior even being unimodal. We define a tractable deep kernel process, the deep inverse Wishart process, and give a doubly-stochastic inducing-point variational inference scheme that operates on the Gram matrices, not on the features as in DGPs. We show that the deep inverse Wishart process gives superior performance to DGPs and infinite BNNs on standard fully-connected baselines. Finally, we give additional motivation for this approach by considering the differences between finite and infinite neural networks.
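The key identity behind the abstract is that if a layer's features are iid Gaussian draws from the previous layer's kernel, then their Gram matrix is Wishart distributed, so one can sample (and do inference over) the Gram matrix directly without ever representing the features. A minimal NumPy/SciPy sketch of this identity follows; the sizes and the kernel matrix `K` are illustrative choices, not taken from the talk:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)

N, nu = 4, 1000            # number of data points, layer width (illustrative)
K = np.eye(N) + 0.5        # an example positive-definite kernel matrix

# Features: each of the nu columns is an iid draw from N(0, K)
L = np.linalg.cholesky(K)
F = L @ rng.standard_normal((N, nu))

# Gram matrix of the features; its distribution is Wishart(df=nu, scale=K)
G = F @ F.T

# Sanity check: E[G] = nu * K, so G / nu approaches K as the width nu grows
print(np.allclose(G / nu, K, atol=0.2))

# Equivalently, the Gram matrix can be sampled directly, never forming features
G_direct = wishart(df=nu, scale=K).rvs(random_state=0)
```

This is what makes a "deep kernel process" possible: each layer only needs the previous layer's Gram matrix to define the distribution of the next one, so the feature matrices drop out of the model entirely.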


  • The talk is primarily based on Laurence Aitchison's recent papers on deep kernel processes.
  • Laurence Aitchison is a Senior Lecturer in Computer Science at the Computational Neuroscience Unit, University of Bristol. His website can be found here.


©2021 Secondmind Ltd.