Wei-Cheng Lai

Master's Thesis

Visual Interpretability and its Metrics for Time Series Classification

Advisors:

Christoffer Löffler (M.Sc.), Prof. Dr. Björn Eskofier 

Duration:

02/2021 – 08/2021


Abstract:

Despite the performance advances of Deep Learning over the last decade, most successful models are still treated as black boxes. The realization that model interpretability is highly valuable [2] has led to approaches that aim to interpret models [1]. One subfield of interpretability is visual interpretability: given an image processing task, humans can easily comprehend which parts of the input a model attends to and use this information, e.g., to debug the model or to process data differently. However, these visualization methods have not yet been adapted to time series problems. In this thesis we want to (1) transfer the visualization techniques to time series data and (2) develop evaluation metrics to compare how well they work.

Visual Interpretability. Many different methods for visual interpretation exist. They broadly fall into two groups: perturbation-based methods on the one side, and gradient- or saliency-based methods [9] on the other. Perturbation-based methods [6,10,11] directly compute the attribution of an input feature (or set of features) by removing, masking or altering inputs, running a forward pass on the perturbed input, and measuring the difference from the original output. These methods can be applied to any network, because they require no changes to its architecture. Gradient- or saliency-based methods compute the attributions for all input features in a single forward and backward pass through the network [12,14,15,16]. These methods usually generate saliency maps, which are validated a posteriori [11]. However, both groups of visualization methods were designed with image processing in mind and are difficult to apply to time series data, e.g., inertial sensor data streams, medical data or positioning trajectories.
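To make the perturbation-based idea concrete for time series, the following is a minimal sketch of an occlusion-style attribution on a univariate series. All names and the toy "model" are illustrative assumptions, not part of the thesis; it only assumes a model callable that returns a scalar class score:

```python
import numpy as np

def occlusion_attribution(model, x, window=5, baseline=0.0):
    """Attribute each time step by masking a sliding window with a
    baseline value and measuring the drop in the model's score."""
    base_score = model(x)
    attr = np.zeros_like(x, dtype=float)
    counts = np.zeros_like(x, dtype=float)
    for start in range(len(x) - window + 1):
        x_pert = x.copy()
        x_pert[start:start + window] = baseline   # mask one window
        drop = base_score - model(x_pert)         # larger drop -> more important
        attr[start:start + window] += drop
        counts[start:start + window] += 1
    return attr / np.maximum(counts, 1)           # average over covering windows

# toy "model": scores a series by the energy of its middle segment
model = lambda x: float(np.sum(x[40:60] ** 2))

x = np.zeros(100)
x[45:55] = 1.0                                    # the discriminative event
attr = occlusion_attribution(model, x)
print(attr.argmax())                              # peaks inside the event region
```

Because the method only calls the model's forward pass, it applies unchanged to any architecture, which is exactly the architecture-agnosticism noted above.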

Evaluation Metrics. Additionally, it is difficult to evaluate visual interpretability, since it is highly subjective and hard to compare. While some approaches exist specifically for image data, e.g., Pointing Game [13], Location Instability [1], Sanity Checks [8] and Insertion/Deletion Scores [7], they have not yet been adapted to time series data. On the other hand, existing concepts for time series data, such as the extended accuracy metrics [3], could be borrowed for this purpose. The aim here is to adapt the visualization metrics to time series data and combine them with the existing accuracy metrics [3] into novel evaluation metrics for visual interpretability. For example, Location Instability will be adapted to Temporal Stability. Furthermore, Sanity Checks [8] shall be used to measure whether our proposed visualization methods are able to distinguish learned concepts from random noise.
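As an illustration of how a Deletion score [7] could be carried over to time series, the sketch below deletes time steps in order of decreasing saliency and summarizes the resulting score curve; a faithful saliency map makes the score collapse early, giving a lower value. The helper names and the toy model are assumptions for this sketch only:

```python
import numpy as np

def deletion_score(model, x, saliency, baseline=0.0, steps=20):
    """Delete time steps in order of decreasing saliency and record the
    model score after each step; a sharp early drop (low mean over the
    deletion curve) indicates a faithful saliency map."""
    order = np.argsort(saliency)[::-1]            # most salient first
    x_cur = x.astype(float).copy()
    scores = [model(x_cur)]
    for idx in np.array_split(order, steps):
        x_cur[idx] = baseline                     # remove the next chunk
        scores.append(model(x_cur))
    # mean over the curve approximates the area under it
    return float(np.mean(scores))

# toy model and two candidate saliency maps
model = lambda x: float(np.sum(x[45:55]))
x = np.zeros(100)
x[45:55] = 1.0
good = x.copy()                                   # saliency aligned with the event
bad = np.random.default_rng(0).random(100)        # random saliency
print(deletion_score(model, x, good) < deletion_score(model, x, bad))  # True
```

An Insertion score is the mirror image: start from the baseline, insert the most salient steps first, and prefer curves that rise quickly.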

Work plan. In this thesis, both gradient-based and perturbation-based methods will be used to visualize the decision-making of neural networks in time series classification (Gradients, Integrated Gradients, Grad-CAM, LRP, LIME, SHAP). The evaluation methods will be separated for densely and sparsely labeled datasets, because Pointing Game [13] is only suitable for densely labeled data; it will be extended with precision and recall for time series [3]. Furthermore, for both types of datasets, this thesis evaluates the randomization tests from Sanity Checks [8], Insertion/Deletion Scores [7] and Temporal Stability (inspired by Location Instability [1]). Where possible, methods are based on published code that is adapted to time series data. Experiments will be performed on a reasonable subset of the public UCR archive [4] (e.g., GunPoint, Handwriting), which mainly consists of windowed time series data (sparse labels), as well as the Tool Tracking dataset from Fraunhofer IIS (dense labels) [5].
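For densely labeled data, the Pointing Game adaptation mentioned above could look like the following sketch: a series counts as a hit when the saliency maximum falls inside the annotated region (optionally widened by a tolerance, as in the image-domain original). Function and variable names are hypothetical:

```python
import numpy as np

def pointing_game(saliency_maps, label_masks, tolerance=0):
    """Fraction of series whose saliency maximum falls inside the
    labeled region, dilated by `tolerance` time steps on each side."""
    hits = 0
    for sal, mask in zip(saliency_maps, label_masks):
        peak = int(np.argmax(sal))
        lo = max(0, peak - tolerance)
        hi = peak + tolerance + 1
        hits += bool(mask[lo:hi].any())           # hit if peak touches the label
    return hits / len(saliency_maps)

# two toy examples: one saliency peak inside, one outside the labeled region
mask = np.zeros(100, dtype=bool)
mask[45:55] = True                                # dense ground-truth annotation
sal_hit = np.zeros(100);  sal_hit[50] = 1.0
sal_miss = np.zeros(100); sal_miss[10] = 1.0
print(pointing_game([sal_hit, sal_miss], [mask, mask]))  # 0.5
```

Replacing the single-peak hit criterion with per-time-step decisions against the dense labels is what connects this metric to precision and recall for time series [3].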


References:

[1] Zhang, Q., Zhu, S.-C.: "Visual Interpretability for Deep Learning: A Survey". In: Frontiers of Information Technology & Electronic Engineering, 2018

[2] Carvalho, D., Pereira, E., Cardoso, J.: "Machine Learning Interpretability: A Survey on Methods and Metrics". In: Electronics, 2019

[3] Tatbul, N., et al.: "Precision and Recall for Time Series". In: NeurIPS, 2018

[4] UCR Time Series Classification Archive: https://www.cs.ucr.edu/~eamonn/time_series_data_2018

[5] Fraunhofer IIS Tool Tracking dataset: https://github.com/mutschcr/tool-tracking

[6] Lundberg, S. M., Lee, S.-I.: "A Unified Approach to Interpreting Model Predictions". In: NeurIPS, 2017

[7] Petsiuk, V., Das, A., Saenko, K.: "RISE: Randomized Input Sampling for Explanation of Black-box Models". In: BMVC, 2018. https://arxiv.org/abs/1806.07421

[8] Adebayo, J., et al.: "Sanity Checks for Saliency Maps". In: NeurIPS, 2018. https://arxiv.org/abs/1810.03292

[9] Ancona, M., et al.: "Towards Better Understanding of Gradient-Based Attribution Methods for Deep Neural Networks". In: ICLR, 2018. https://arxiv.org/abs/1711.06104

[10] Ribeiro, M. T., Singh, S., Guestrin, C.: "'Why Should I Trust You?': Explaining the Predictions of Any Classifier". In: KDD, 2016. https://arxiv.org/abs/1602.04938

[11] Fong, R., Patrick, M., Vedaldi, A.: "Understanding Deep Networks via Extremal Perturbations and Smooth Masks". In: ICCV, 2019. https://arxiv.org/abs/1910.08485

[12] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W.: "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation". In: PLoS ONE 10(7): e0130140, 2015

[13] Zhang, J., et al.: "Top-Down Neural Attention by Excitation Backprop". In: ECCV, 2016

[14] Selvaraju, R. R., et al.: "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization". In: ICCV, 2017. https://arxiv.org/abs/1610.02391

[15] Sundararajan, M., Taly, A., Yan, Q.: "Axiomatic Attribution for Deep Networks". In: ICML, 2017. https://arxiv.org/abs/1703.01365

[16] Simonyan, K., Vedaldi, A., Zisserman, A.: "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps". 2013. https://arxiv.org/abs/1312.6034