We make several novel contributions to aspects of the Bayesian approach to inverse problems: well-posedness, discretisation, and algorithms. The first main contribution of this work is a new concept of well-posedness of Bayesian inverse problems. In contrast to existing concepts, our slightly simplified concept allows us to make well-posedness statements with respect to general mathematical models. Under finite-dimensional, non-degenerate Gaussian noise assumptions, for instance, we only need to show measurability of the underlying model.
Next, we move on to the discretisation of Bayesian inverse problems. We consider hierarchical Bayesian inverse problems in which the prior random field is parameterised. Typically, random fields are discretised by truncated spectral expansions, such as the Karhunen–Loève expansion. Such discretisation strategies are not immediately suitable in hierarchical settings, since the parameterisation may require a large number of random field discretisations; this is computationally infeasible. The second main contribution of this thesis is a reduced basis method that allows for a computationally cheap, parameterised discretisation.
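To fix ideas, a minimal sketch of a truncated Karhunen–Loève expansion follows. It is not the reduced basis method of this thesis; it merely illustrates the standard spectral discretisation that the hierarchical setting renders expensive. The grid size, exponential covariance, correlation length, and truncation level are illustrative assumptions.

```python
import numpy as np

# Mean-zero Gaussian random field on [0, 1] with exponential covariance
# c(s, t) = exp(-|s - t| / ell), discretised on a uniform grid.
n, ell, n_terms = 200, 0.3, 10
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell) / n  # covariance operator

# Eigenpairs of the covariance; keep the n_terms largest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1][:n_terms]
lam, phi = eigvals[idx], eigvecs[:, idx]

# One sample of the truncated field: u(x) = sum_k sqrt(lam_k) * xi_k * phi_k(x).
rng = np.random.default_rng(0)
xi = rng.standard_normal(n_terms)
u = phi @ (np.sqrt(lam) * xi)
```

In a hierarchical prior, each new value of the covariance hyperparameter (here, ell) would require recomputing the eigendecomposition, which is what motivates a parameterised reduced basis.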
The solution of a Bayesian inverse problem is the posterior distribution. Sampling from the posterior, e.g. with Markov chain Monte Carlo (MCMC) or importance sampling, may be unsuitable if the Bayesian inverse problem is constrained by a computationally demanding mathematical model, e.g. a partial differential equation (PDE). More suitable are Sequential Monte Carlo strategies that use hierarchies of model discretisations and tempered likelihoods. An adaptive combination of these hierarchies leads to the third main contribution of this work: the highly efficient Multilevel Sequential² Monte Carlo algorithm. We derive this method and compare it numerically with standard Sequential Monte Carlo methods. Moreover, we interpret Sequential Monte Carlo in a framework where it forms a Markov chain of random measures.
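The tempering idea can be sketched in a few lines. The following is not the multilevel algorithm of this thesis, only the basic likelihood-tempering step of Sequential Monte Carlo on a toy one-dimensional Gaussian problem; the observation, noise level, particle count, and fixed temperature schedule are illustrative assumptions, and the MCMC move step that a full sampler would apply after resampling is omitted.

```python
import numpy as np

# Particles move from the prior N(0, 1) to the posterior of a toy Gaussian
# likelihood by reweighting through 0 = beta_0 < ... < beta_T = 1.
rng = np.random.default_rng(1)
n_particles = 5000
y, sigma = 1.0, 0.5                           # toy observation and noise level

def log_likelihood(theta):
    return -0.5 * ((y - theta) / sigma) ** 2

particles = rng.standard_normal(n_particles)  # draws from the prior
betas = np.linspace(0.0, 1.0, 11)             # fixed tempering schedule

for b_prev, b_next in zip(betas[:-1], betas[1:]):
    # Incremental weights bridge consecutive tempered posteriors.
    logw = (b_next - b_prev) * log_likelihood(particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Multinomial resampling; a rejuvenating MCMC move would follow in practice.
    particles = rng.choice(particles, size=n_particles, p=w)
```

For this conjugate toy problem the posterior mean is y / (1 + sigma²) = 0.8, so the particle mean should land close to that value. The adaptive multilevel method additionally chooses the temperatures and the model discretisation level on the fly.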
In this setting, we discuss the long-time behaviour of these Markov chains and, thus, the convergence of Sequential Monte Carlo. This is the fourth main contribution of this work. We conclude by pointing the reader to directions for future research.