02/2021 – 08/2021
Despite the performance advances of Deep Learning over the last decade, most successful models are still treated as black boxes. The realization that model interpretability is valuable has led to a number of approaches that aim to interpret models. One subfield of interpretability is visual interpretability: given an image processing task, humans can easily comprehend the model's attention on parts of the input data and use this information, e.g., to debug the model or to process data differently. However, these visualization methods have not yet been adopted for time series problems. In this thesis we want to (1) transfer the visualization techniques to time series data and (2) develop evaluation metrics to compare how well they work.
Visual Interpretability. Many different methods for visual interpretation exist. Broadly, they fall into two groups: perturbation-based methods on the one side, and gradient- or saliency-based methods on the other. Perturbation-based methods [6,10,11] directly compute the attribution of an input feature (or set of features) by removing, masking or altering inputs, running a forward pass on the perturbed input, and measuring the difference to the original output. These methods can be applied to any network, because they treat it as a black box and require no architectural changes. Gradient- or saliency-based methods compute the attributions for all input features in a single forward and backward pass through the network [12,14,15,16]. These methods usually generate saliency maps, which are validated a posteriori. However, these visualization methods were designed with image processing in mind and are difficult to apply to time series data, e.g., inertial sensor data streams, medical data or positioning trajectories.
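The perturbation-based idea above can be sketched for 1-D time series with a sliding occlusion window. This is a minimal numpy sketch, not one of the cited implementations; `toy_model` is a hypothetical score function standing in for a trained classifier.

```python
import numpy as np

def toy_model(x):
    """Hypothetical classifier score: responds only to energy in the
    middle of the series (used solely to make the sketch runnable)."""
    return float(np.sum(x[40:60] ** 2))

def occlusion_attribution(model, x, window=10, baseline=0.0):
    """Perturbation-based attribution for a 1-D time series:
    slide a masking window over the input, replace the segment with a
    baseline value, run a forward pass, and record the drop in the
    model's output for every covered time step."""
    ref = model(x)
    attr = np.zeros_like(x)
    counts = np.zeros_like(x)
    for start in range(len(x) - window + 1):
        xp = x.copy()
        xp[start:start + window] = baseline
        drop = ref - model(xp)          # how much the score suffers
        attr[start:start + window] += drop
        counts[start:start + window] += 1
    return attr / np.maximum(counts, 1)  # average over covering windows

x = np.sin(np.linspace(0, 6 * np.pi, 100))
attr = occlusion_attribution(toy_model, x)
# The attribution concentrates on the region the toy model actually uses.
```

Because only the perturbed forward passes are needed, the same sketch applies to any architecture, at the cost of one model evaluation per window position.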
Evaluation Metrics. Additionally, visual interpretability is difficult to evaluate since it is highly subjective and hard to compare. While some approaches exist specifically for image data, e.g., the Pointing Game, Location Instability, Sanity Checks and Insertion/Deletion Scores, they have not yet been adapted to time series data. Conversely, existing concepts for time series data could be borrowed for this purpose, like the extended accuracy metrics. The aim here is to adapt visualization metrics to time series data and combine them with existing accuracy metrics into novel evaluation metrics for visual interpretability. For example, Location Instability will be adapted to Temporal Stability. Furthermore, Sanity Checks shall be used to measure whether our proposed visualization methods are able to distinguish learned concepts from random noise.
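To make the deletion half of the Insertion/Deletion Scores concrete for time series, here is a minimal numpy sketch under the same toy-model assumption as before (`toy_model` and both saliency maps are hypothetical, chosen only so the metric's behavior is visible): a faithful saliency map makes the score collapse quickly when its top-ranked time steps are removed first, yielding a small normalized area under the score curve.

```python
import numpy as np

def toy_model(x):
    """Hypothetical score function standing in for a trained classifier."""
    return float(np.sum(x[40:60] ** 2))

def deletion_score(model, x, saliency, baseline=0.0, steps=10):
    """Deletion metric: remove the most salient time steps first,
    re-evaluate the model after each chunk, and return the normalized
    area under the resulting score curve (lower = more faithful map)."""
    order = np.argsort(saliency)[::-1]     # most salient steps first
    xp = x.copy()
    scores = [model(xp)]
    chunk = max(1, len(x) // steps)
    for i in range(0, len(order), chunk):
        xp[order[i:i + chunk]] = baseline  # delete next chunk of steps
        scores.append(model(xp))
    scores = np.array(scores)
    return float(np.mean(scores / scores[0]))

x = np.sin(np.linspace(0, 6 * np.pi, 100))
good = np.zeros(100); good[40:60] = 1.0   # saliency matching the model
bad = 1.0 - good                          # saliency on irrelevant steps
```

The insertion score is the mirror image (start from the baseline and add salient steps back in, where higher is better); both transfer to time series once "pixel" is replaced by "time step".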
Work plan. In this thesis, both gradient-based and perturbation-based methods will be used to visualize the decision-making of neural networks in time series classification (Gradients, Integrated Gradients, Grad-CAM, LRP, LIME, SHAP). The evaluation methods will be separated for densely and sparsely labeled datasets, because the Pointing Game is only suitable for densely labeled data; it will be extended with precision and recall for time series. Furthermore, for both types of datasets, this thesis evaluates the randomization tests from Sanity Checks, Insertion/Deletion Scores and Temporal Stability (inspired by Location Instability). Where possible, methods are based on published code that is adapted to time series data. Experiments will be performed on a reasonable subset of the public UCR archive (e.g., GunPoint, Handwriting), which consists mainly of windowed time series data (sparse labels), as well as the Tool Tracking dataset from Fraunhofer IIS (dense labels).
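For the densely labeled case, the Pointing Game transfers to time series almost directly: a sample counts as a hit if the time step with maximal saliency lies inside the ground-truth region, and the metric is the hit rate over the dataset. A minimal numpy sketch (the synthetic saliency maps and label masks are illustrative, not real model output):

```python
import numpy as np

def pointing_game_hit(saliency, label_mask):
    """Pointing Game for densely labeled time series: a 'hit' if the
    most salient time step falls inside the ground-truth region."""
    return bool(label_mask[int(np.argmax(saliency))])

rng = np.random.default_rng(0)
hits = []
for _ in range(50):
    mask = np.zeros(100, dtype=bool); mask[30:50] = True  # dense label
    sal = rng.normal(0.0, 0.1, 100); sal[30:50] += 1.0    # peaks in region
    hits.append(pointing_game_hit(sal, mask))
hit_rate = float(np.mean(hits))
```

Because the hit decision yields a binary outcome per labeled region, it combines naturally with the precision and recall formulation for time series mentioned above.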
Zhang, Q., Zhu, S.-C.: "Visual Interpretability for Deep Learning: A Survey". In: Frontiers of Information Technology and Electronic Engineering, 2018.
Carvalho, D., Pereira, E., Cardoso, J.: "Machine Learning Interpretability: A Survey on Methods and Metrics". In: Electronics (Switzerland), 2019.
Tatbul, N. et al.: "Precision and Recall for Time Series". In: NeurIPS, 2018, Montréal, Canada.
Lundberg, S. M., Lee, S.-I.: "A Unified Approach to Interpreting Model Predictions". In: NeurIPS, 2017.
Petsiuk, V., Das, A., Saenko, K.: "RISE: Randomized Input Sampling for Explanation of Black-Box Models". 2018. https://arxiv.org/abs/1806.07421
Adebayo, J. et al.: "Sanity Checks for Saliency Maps". 2018. https://arxiv.org/abs/1810.03292
Ancona, M. et al.: "Towards Better Understanding of Gradient-Based Attribution Methods for Deep Neural Networks". 2017. https://arxiv.org/abs/1711.06104
Ribeiro, M. T. et al.: "'Why Should I Trust You?': Explaining the Predictions of Any Classifier". 2016. https://arxiv.org/abs/1602.04938
Fong, R. et al.: "Understanding Deep Networks via Extremal Perturbations and Smooth Masks". 2019. https://arxiv.org/abs/1910.08485
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W.: "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation". In: PLoS ONE 10(7): e0130140, 2015.
Zhang, J. et al.: "Top-Down Neural Attention by Excitation Backprop". In: ECCV, 2016.
Selvaraju, R. R. et al.: "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization". 2016. https://arxiv.org/abs/1610.02391
Sundararajan, M., Taly, A., Yan, Q.: "Axiomatic Attribution for Deep Networks". 2017. https://arxiv.org/abs/1703.01365
Simonyan, K., Vedaldi, A., Zisserman, A.: "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps". 2013.