In January 2020 I joined the Department of Computing at Imperial College London as a lecturer (assistant professor). My research aims to develop systems that learn to accomplish tasks through interaction with their environment, using as little experience as possible. I aim to let the underlying principles of inference and learning guide my work, both by developing practical methods from first principles, and by finding underlying principles in existing methods.

I am particularly interested in **reinforcement learning** methods which use explicit **predictive models** of the world to plan behaviour. This approach improves **data efficiency**, as knowledge about the world generalises strongly to new situations. Learning good models of the world, with a reliable estimate of their own **uncertainty**, is crucial to the success of these methods.

As such, a major component of my research is building better predictive models. In reinforcement learning and decision-making applications, we require **a)** uncertainty estimates, so that risks can be avoided or taken deliberately, and **b)** automatic adaptation as more experience is gained. **Bayesian inference** provides an elegant framework for representing uncertainty and for automating many aspects of the modelling process.
Currently, I am interested in bringing the benefits of Bayesian inference to deep learning models, using **Gaussian processes** as a building block.
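As a minimal illustration of the kind of uncertainty estimates Gaussian processes provide (an illustrative sketch, not code from any of the papers below), the following fits an exact GP regressor with a squared-exponential kernel and shows that the predictive variance grows far from the training data:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_test, noise_var=0.1):
    """Exact GP regression: posterior mean and marginal variance at test points."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)                      # stable solves via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)   # posterior marginal variance
    return mean, var

# Toy data: noisy observations of sin(x) on [0, 5]
rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 10)
y = np.sin(X) + 0.1 * rng.standard_normal(10)
X_new = np.array([2.5, 10.0])    # one point inside the data, one far outside
mean, var = gp_posterior(X, y, X_new)
# Far from the data (x = 10), the variance reverts towards the prior,
# signalling that the model "knows what it doesn't know".
```

This honest growth of uncertainty away from the data is exactly the property that makes such models attractive for decision making, where acting confidently in unfamiliar situations can be costly.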

My work has been presented at the leading machine learning conferences (NeurIPS and ICML), including an oral presentation and a best paper award. I am currently particularly enthusiastic about our paper on learning which invariances should be used as an inductive bias for a particular dataset.

I will have openings for a small number (~2) of PhD students each year over the next few years. I am looking for people with a strong academic background (particularly strong mathematical skills) who are keen to work on topics aligned with my interests (see below).

A strong mathematical background is usually demonstrated by a first-class (or equivalent) degree in information or electrical engineering, physics, maths, or computer science. A background in areas such as linear algebra, probability, statistics, and optimisation is particularly important. You can demonstrate alignment with my research interests with a short research statement that outlines **1)** what problem you are interested in, **2)** why this problem is interesting or important, and **3)** what techniques you think will be useful or necessary for reaching your goals. Topics I am particularly interested in supervising include:

- Bayesian inference and approximations to it (variational inference, EP, MCMC, …).
- Gaussian process models (deep GPs, GPSSM, GPLVM, …) or their theoretical properties.
- Bayesian deep learning (inference over weights, using GPs as building blocks, …).
- Neural networks / other models with invariance properties (e.g. rotation, scale, or more arbitrary) and learning invariances.
- Analysis of deep neural networks (infinite limits and GP relations).
- Model-based reinforcement learning.
- Differentially private machine learning.
- Connections between Bayesian inference and generalisation error bounds.
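A theme running through several of these topics (model selection, learning invariances, relations to generalisation) is using the **marginal likelihood** to choose between models automatically. As a minimal illustrative sketch (not code from any paper above), the following computes the exact GP log marginal likelihood for several candidate lengthscales and shows that it prefers one matched to the data, trading off data fit against model complexity:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale):
    """Squared-exponential kernel with unit signal variance."""
    sqdist = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * sqdist / lengthscale ** 2)

def log_marginal_likelihood(X, y, lengthscale, noise_var=0.1):
    """log p(y | X, theta) = -1/2 y^T K^-1 y - 1/2 log|K| - n/2 log(2 pi)."""
    n = len(X)
    K = rbf_kernel(X, X, lengthscale) + noise_var * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha                  # data-fit term
            - np.sum(np.log(np.diag(L)))      # complexity penalty: 1/2 log|K|
            - 0.5 * n * np.log(2 * np.pi))

# Data drawn from a function that varies on a moderate lengthscale
rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 30)
y = np.sin(2 * X) + 0.1 * rng.standard_normal(30)

# Too wiggly, well matched, and too smooth:
candidates = [0.01, 0.5, 5.0]
lmls = [log_marginal_likelihood(X, y, l) for l in candidates]
best = candidates[int(np.argmax(lmls))]
```

No validation set is needed: the complexity penalty in the marginal likelihood automatically discourages both the over-flexible and the over-smooth model, which is the same mechanism we exploit when learning invariances.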

Before starting at Imperial, I worked with James Hensman for two years as a machine learning researcher at PROWLER.io, a research-led startup aiming to solve a wide variety of decision making problems. I did my PhD in the Machine Learning Group at the University of Cambridge, working with Carl Rasmussen, and completing my thesis in 2017. I was funded by the EPSRC and awarded a Qualcomm Innovation Fellowship for my final year. During my PhD, I occasionally worked as a machine learning consultant, and I also spent a few months as a visiting researcher at Google in Mountain View, CA. I moved to the UK from the Netherlands for my undergraduate degree in Engineering at Jesus College, University of Cambridge.

- **Translation Insensitivity for Deep Convolutional Gaussian Processes**
  Vincent Dutordoir, Mark van der Wilk, Artem Artemev, Marcin Tomczak, James Hensman
- **Bayesian Layers: A Module for Neural Network Uncertainty**
  Dustin Tran, Mike Dusenberry, Mark van der Wilk, Danijar Hafner
- **Scalable Bayesian Dynamic Covariance Modeling with Variational Wishart and Inverse Wishart Processes**
  Creighton Heaukulani, Mark van der Wilk
- **Variational Gaussian Process Models without Matrix Inverses**
  Mark van der Wilk, ST John, Artem Artemev, James Hensman
- **Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models**
  Alessandro Davide Ialongo, Mark van der Wilk, James Hensman, Carl Edward Rasmussen
- **Rates of Convergence for Sparse Variational Inference in Gaussian Process Regression**
  David Burt, Carl Edward Rasmussen, Mark van der Wilk
- **Non-Factorised Variational Inference in Dynamical Systems**
  Alessandro Davide Ialongo, Mark van der Wilk, James Hensman, Carl Edward Rasmussen
- **Learning Invariances using the Marginal Likelihood**
  Mark van der Wilk, Matthias Bauer, ST John, James Hensman