Learning is the ability to infer knowledge from given evidence with the aim of using that knowledge in the course of life. Given the amount of evidence we are confronted with, this task has to be performed highly efficiently. In mathematical terms, we are given a set of data and face the problem of estimating an unknown functional relation that captures the essentials of the data. To deal with data efficiently, we need to incorporate a bias that restricts the space of possible hypotheses. Positive definite kernels and their variations provide a powerful mathematical tool to model such a bias. Kernels describe similarity relations that are used to analyse data. A classical radial positive definite kernel combines three useful properties: positive definiteness, which gives rise to the function spaces serving as our hypothesis class; translation invariance, which makes the toolbox of harmonic analysis accessible; and radiality, which leads to a kind of dimension reduction. The thesis considers each of these three aspects in its own right, pointing out its theoretical background, exploring its generality, and showing how to use it in applications.
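As an illustrative sketch (not part of the thesis text itself), the classical Gaussian kernel is a standard example exhibiting all three properties at once; here $\sigma > 0$ denotes a bandwidth parameter:

\[
  k(x, y) \;=\; \exp\!\Bigl(-\tfrac{\lVert x - y\rVert^2}{2\sigma^2}\Bigr)
  \;=\; \Phi(x - y)
  \;=\; \varphi\bigl(\lVert x - y\rVert\bigr),
  \qquad x, y \in \mathbb{R}^d.
\]

It is positive definite, since every Gram matrix $\bigl(k(x_i, x_j)\bigr)_{i,j}$ is positive semidefinite, so $k$ induces a reproducing kernel Hilbert space of hypotheses; it is translation invariant, since it depends on $x$ and $y$ only through the difference $x - y$ (the function $\Phi$), which opens up Fourier-analytic tools; and it is radial, since it depends only on the distance $\lVert x - y\rVert$ (the univariate profile $\varphi$), reducing the $d$-dimensional structure to a one-dimensional one.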