Modern machine learning methods lack interpretability: it is often unclear
why a certain input leads to a certain output. Over the last years, SHAP has
become a popular method to address this issue. We introduce the models that we
want to explain, GLMs and LightGBMs. We explain the theoretical concepts behind
SHAP and how it relates to game theory, and we derive closed-form solutions for
the models that are currently used at Allianz. We also discuss the issue of
computational complexity and show that SHAP values can be obtained by
regression. We give an overview of different methods of removing features and
argue why the feature removal currently implemented in SHAP is the right one
for us. We present the SHAP package, introduce its specific plots, and provide
some hints about working with the package. We propose a new method to detect
interactions based on SHAP values, and we present a visualization tool to
interactively work with SHAP values. Finally, we summarize the results of our
case study, in which we explained three models with SHAP and compared the
results; based on the SHAP values of the LightGBM models, we attempted to
improve the existing Emblem model.