Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations

Memory Safe Computations with XLA Compiler

Relaxing Equivariance Constraints with Non-stationary Continuous Filters

SnAKe: Bayesian Optimization with Pathwise Exploration

Data augmentation in Bayesian neural networks and the cold posterior effect

Learning invariant weights in neural networks

Assumptions about invariances or symmetries in data can significantly increase the predictive power of statistical models. Many commonly used machine learning models are constrained to respect certain symmetries, such as translation equivariance in …
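
As a minimal illustration of the kind of symmetry constraint meant here (a sketch, not taken from the paper), the snippet below checks that a circular 2D convolution commutes with cyclic translations, i.e. is translation equivariant:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))
kernel = rng.normal(size=(8, 8))

def circ_conv(x, k):
    # Circular (periodic) 2D convolution via the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))

def shift(x, dy, dx):
    # Cyclic translation of a 2D array.
    return np.roll(x, (dy, dx), axis=(0, 1))

# Equivariance: convolving a shifted image gives the shifted output.
lhs = circ_conv(shift(image, 2, 3), kernel)
rhs = shift(circ_conv(image, kernel), 2, 3)
assert np.allclose(lhs, rhs)
```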

Bayesian Neural Network Priors Revisited

Last Layer Marginal Likelihood for Invariance Learning

Correlated weights in infinite limits of deep convolutional neural networks

Infinite width limits of deep neural networks often have tractable forms. They have been used to analyse the behaviour of finite networks, and are useful methods in their own right. When investigating infinitely wide convolutional neural …
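
For intuition only (a sketch of the standard fully connected case, not the convolutional, correlated-weight setting this paper studies), the recursion below computes the NNGP kernel of an infinitely wide ReLU network using the arc-cosine closed form; the depth and variance parameters are illustrative:

```python
import numpy as np

def nngp_relu(x1, x2, depth=3, sw2=2.0, sb2=0.0):
    # NNGP kernel of an infinitely wide fully connected ReLU network.
    # sw2 / sb2 are the weight / bias prior variances (illustrative values).
    d = len(x1)
    k12 = sw2 * np.dot(x1, x2) / d + sb2  # input-layer kernel
    k11 = sw2 * np.dot(x1, x1) / d + sb2
    k22 = sw2 * np.dot(x2, x2) / d + sb2
    for _ in range(depth):
        # Closed-form ReLU expectation (Cho & Saul arc-cosine kernel).
        theta = np.arccos(np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0))
        k12 = sw2 / (2 * np.pi) * np.sqrt(k11 * k22) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta)) + sb2
        # On the diagonal theta = 0, so E[relu(u)^2] = k / 2.
        k11 = sw2 * k11 / 2 + sb2
        k22 = sw2 * k22 / 2 + sb2
    return k12
```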

The promises and pitfalls of deep kernel learning

Deep kernel learning and related techniques promise to combine the representational power of neural networks with the reliable uncertainty estimates of Gaussian processes. One crucial aspect of these models is an expectation that, because they are …
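
As a rough sketch of the deep-kernel construction the abstract refers to (illustrative architecture and names, not the authors' code), a learned feature map composed with a base RBF kernel can be written as:

```python
import numpy as np

def feature_net(x, params):
    # A small two-layer network acting as the learned feature map g_theta.
    w1, b1, w2, b2 = params
    return np.tanh(x @ w1 + b1) @ w2 + b2

def deep_rbf_kernel(X1, X2, params, lengthscale=1.0):
    # Deep kernel: k(x, x') = exp(-||g(x) - g(x')||^2 / (2 * lengthscale^2)).
    Z1, Z2 = feature_net(X1, params), feature_net(X2, params)
    sq = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

# In deep kernel learning the network weights are fitted jointly with the
# GP hyperparameters by maximising the GP marginal likelihood.
rng = np.random.default_rng(0)
params = (rng.normal(size=(1, 16)), np.zeros(16),
          rng.normal(size=(16, 2)), np.zeros(2))
X = rng.normal(size=(5, 1))
K = deep_rbf_kernel(X, X, params)  # 5 x 5 kernel matrix
```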