Deep learning models are widely used for time series classification. To understand the decision-making process of such models and to identify artifacts, explainability methods for these black-box classifiers are necessary. State-of-the-art saliency methods, originally developed for image data, assign importance scores to individual pixels, providing visual explanations by highlighting informative regions in images. These methods have also been applied to time series classification, where they analogously highlight informative temporal patterns, i.e., shapelets. Nevertheless, in time series data the class label may depend on latent information rather than on localized temporal regions, for example a difference in the dominant frequencies of the series. In this setting, common explainability methods fail to provide accurate results, which indicates a need for improved explainability methods for time series. To the best of our knowledge, no method in the current literature can visually explain how latent-patterned time series are classified. In this thesis, we shed light on this concern by empirically demonstrating the shortcomings of current explainability methods in this scenario. As a solution, we propose an extension to existing methods that derives latent saliency results from time-step-wise importance scores. To find the best candidate to augment with our extension, we examine various pairs of explainability methods and classifiers. To provide a sensible scope for the thesis, we restrict our study to Fourier series models and their latent frequency, amplitude, and phase-shift parameters. We argue, nevertheless, that the same approach can be applied to other latent models. Our main focus throughout this thesis is a local latent saliency framework; however, we also provide preliminary remarks on obtaining global latent saliency results.
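As a concrete illustration of this scope, a minimal latent model of the kind considered here might take the following form (a sketch with our own notation, not necessarily the parameterization used later in the thesis):
\[
x(t) \;=\; \sum_{k=1}^{K} A_k \sin\!\left(2\pi f_k t + \varphi_k\right),
\]
where the class label is determined by the latent amplitudes \(A_k\), frequencies \(f_k\), and phase shifts \(\varphi_k\) rather than by the values of \(x(t)\) in any particular time window, so a standard saliency map over the time axis has no single informative region to highlight.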