Seminar: Andrew G. Wilson - New York University

Date: January 21, 2021
Author: Hrvoje Stojic

How do we build models that learn and generalize?

Abstract

To answer scientific questions, and reason about data, we must build models and perform inference within those models. But how should we approach model construction and inference to make the most successful predictions? How do we represent uncertainty and prior knowledge? How flexible should our models be? Should we use a single model, or multiple different models? Should we follow a different procedure depending on how much data are available? How do we learn desirable constraints, such as rotation, translation, or reflection symmetries, when they don't improve standard training loss? In this talk I will present a philosophy for model construction, grounded in probability theory. I will exemplify this approach with methods that exploit loss surface geometry for scalable and practical Bayesian deep learning, and resolutions to seemingly mysterious generalization behaviour such as double descent. I will also consider prior specification, generalized Bayesian inference, and automatic symmetry learning.


Related articles

Seminar: Arno Solin - Aalto University
Seminar: Arthur Gretton - University College London
Seminar: Philipp Hennig - University of Tuebingen
©2024 Secondmind Ltd.