Bayesian methods are often used to solve inverse problems and machine learning tasks. In a Bayesian method, one represents one's state of knowledge about an unknown object of interest by a probability measure, and iteratively updates this measure each time a new data point is obtained, using a likelihood function and Bayes' formula.
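In symbols (with notation chosen here purely for illustration): writing $u$ for the unknown object, $\mu_n$ for the probability measure after the first $n$ data points $y_1, \dots, y_n$, and $L(y \mid u)$ for the likelihood, the update upon receiving $y_{n+1}$ reads
\[
\mu_{n+1}(\mathrm{d}u) \;=\; \frac{L(y_{n+1} \mid u)\, \mu_n(\mathrm{d}u)}{\int L(y_{n+1} \mid u')\, \mu_n(\mathrm{d}u')},
\]
so that $\mu_0$ is the prior and each $\mu_{n+1}$ is the posterior given the data observed so far.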
One challenge common to many Bayesian methods is that evaluating the likelihood function at an arbitrary input can be computationally expensive. This motivates the use of cheaper approximations of the likelihood function. Random approximations of the likelihood --- for example, those based on randomised linear algebra --- have become popular in recent years because they are often parallelisable. However, since these approximations introduce errors into the resulting probability measure, one must analyse these errors to ensure that they do not 'break' the Bayesian method.
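As a minimal sketch of one such random approximation --- data subsampling, rather than the randomised linear algebra mentioned above, and with all function and variable names chosen here for illustration --- the full-data log-likelihood can be replaced by an unbiased estimate built from a random subset of the data:

```python
import numpy as np

def subsampled_log_likelihood(u, data, log_lik_term, m, rng):
    """Unbiased random approximation of the full-data log-likelihood
    sum_i log L(y_i | u), using m uniformly subsampled terms."""
    n = len(data)
    idx = rng.choice(n, size=m, replace=False)
    # Rescaling by n/m makes the subsampled sum an unbiased estimator
    # of the full sum over all n data points.
    return (n / m) * sum(log_lik_term(u, data[i]) for i in idx)

# Toy usage: Gaussian likelihood terms log N(y_i; u, 1).
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=10_000)
log_term = lambda u, y: -0.5 * (y - u) ** 2 - 0.5 * np.log(2.0 * np.pi)
exact = sum(log_term(0.5, y) for y in data)
approx = subsampled_log_likelihood(0.5, data, log_term, m=500, rng=rng)
print(f"exact: {exact:.1f}, randomised approximation: {approx:.1f}")
```

Each subsampled term can be evaluated independently, which is what makes such estimators parallelisable; the price is precisely the random error in the resulting probability measure that must then be analysed.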
In this lecture, we will present the basic ideas of Bayesian inference, motivate the use of random approximations of the likelihood function by drawing on some powerful ideas from mathematics, and analyse the approximation errors of the resulting randomised Bayesian method.