Seminar: Laurence Aitchison - University of Bristol

Date: March 4, 2021
Author: Hrvoje Stojic

Deep Kernel Processes

Abstract

Neural networks have taught us that effective performance on difficult tasks requires deep models with flexible top-layer representations. However, inference over intermediate-layer features in DGPs, or over weights in Bayesian NNs, is very difficult, with current approaches being highly approximate. Instead, we note that DGPs can be written entirely in terms of positive semi-definite Gram matrices formed by taking the inner product of the features with themselves, because the Gram matrices are Wishart distributed and the next-layer kernel can often be written directly in terms of the Gram matrix. Inference over Gram matrices is much more tractable than inference over weights or features, with the joint posterior even being unimodal. We define a tractable deep kernel process, the deep inverse Wishart process, and give a doubly-stochastic inducing-point variational inference scheme that operates on the Gram matrices rather than on the features, as in DGPs. We show that the deep inverse Wishart process gives superior performance to DGPs and infinite BNNs on standard fully-connected baselines. Finally, we give additional motivation for this approach by considering the differences between finite (https://arxiv.org/abs/1910.08013) and infinite (https://arxiv.org/abs/1808.05587) neural networks.
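
As a rough illustration of the Gram-matrix view described in the abstract, the sketch below propagates a Gram matrix through a few layers by sampling: at each layer the kernel is computed directly from the current Gram matrix (not from the features), features are drawn from a Gaussian with that kernel, and the resulting Gram matrix is Wishart distributed. This is a minimal sketch, not code from the talk or the papers; the ReLU expectation kernel and the layer widths are assumptions chosen for concreteness.

```python
import numpy as np

def relu_kernel(G):
    """ReLU expectation kernel (arc-cosine kernel up to scaling), computed
    directly from a Gram matrix G rather than from the underlying features.
    This is the property that lets a DGP be expressed purely in terms of
    Gram matrices."""
    diag = np.sqrt(np.diag(G))
    norms = np.outer(diag, diag)
    cos_theta = np.clip(G / norms, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    return (norms / (2 * np.pi)) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def sample_gram_chain(X, widths, rng):
    """Propagate a Gram matrix through a deep kernel process by sampling.

    At each layer, features F with N_l i.i.d. columns ~ N(0, K(G)) are drawn,
    so G_new = F F^T / N_l is Wishart(K(G)/N_l, N_l) distributed."""
    G = X @ X.T / X.shape[1]           # input Gram matrix
    for N_l in widths:
        K = relu_kernel(G)             # next-layer kernel from the Gram matrix alone
        L = np.linalg.cholesky(K + 1e-6 * np.eye(K.shape[0]))
        F = L @ rng.standard_normal((K.shape[0], N_l))  # N_l i.i.d. feature columns
        G = F @ F.T / N_l              # Wishart-distributed Gram matrix
    return G

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))        # 5 data points, 3 input features
print(sample_gram_chain(X, widths=[50, 50], rng=rng))
```

In the deep inverse Wishart process presented in the talk, inference is carried out over these Gram matrices themselves (via a doubly-stochastic inducing-point variational scheme), rather than over the sampled features as in standard DGP inference.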

Notes

  • The talk is primarily based on arxiv.org/abs/1910.08013 and arxiv.org/abs/1808.05587.
  • Laurence Aitchison is a Senior Lecturer in Computer Science at the Computational Neuroscience Unit, University of Bristol. His website can be found here.