Due to the increasing amount of data, machine learning algorithms have gained importance,
as they can automatically process large data sets. These algorithms are calibrated by
hyper-parameters that need to be chosen carefully. Hyper-parameter optimization methods
are used to automate this process. This thesis discusses hyper-parameter optimization in
the context of sparse grid density estimation. First, the concept of density estimation is
introduced, followed by classification and clustering, two common machine learning tasks
that can be based on it. Afterwards, hyper-parameters are defined, and various methods
to optimize them are presented. Grid search, random search, and Bayesian optimization
are discussed in theory and in the context of my implementation. These methods are
used to optimize the hyper-parameters for normal and adaptive classification. Finally, the
performance of the implemented methods is analyzed and compared to that of the open-source
software hyperopt.