In recent years, a variety of kernel two-sample tests for model selection based on the maximum mean discrepancy has been developed. This thesis gives an overview of the most relevant approaches. These include two tests based on uniform convergence bounds, the permutation bootstrap-MMD test, the linear-MMD test, the block-MMD test, the cross-MMD test, and the aggregated-MMD test. In order to introduce the theory of the tests, the necessary prerequisites from functional analysis, such as reproducing kernel Hilbert spaces and mean embeddings, are provided. Moreover, an explanation of U-statistics and their most important properties is given. Subsequently, the notion of maximum mean discrepancy is defined and the tests are presented. Furthermore, the tests are assessed by their performance in a simulation study. In particular, their performance is compared on samples from an absolutely continuous, a discrete, and a singular distribution. Additionally, we investigate the influence of inter-marginal correlation.