Rosanna Dietrich-Sußner

Master's Thesis

Continual Learning in Changing Environments in 5G Positioning

Advisors
Christopher Löffler (M.Sc.), Prof. Dr. Björn Eskofier, Dr. Georgios Kontes (Fraunhofer IIS)

Duration
07/2021 – 06/2022

Abstract
It is commonly assumed that artificial intelligence enables computers to learn and adapt to changing environments, just as humans do. In practice, this is mostly out of reach: when trained sequentially, models catastrophically forget previously learned concepts. Continual Learning (CL) addresses this problem for gradient-based Deep Learning.

One of the many applications that would benefit from remembering past experiences is 5G positioning, i.e., Channel Impulse Response (CIR) fingerprinting using Convolutional Neural Networks (CNNs) [1]. There, environments change continually, at times even cyclically: in an exemplary industrial 5G setup (see Fig. 1 and Fig. 2), a scheduled delivery truck regularly blocks and refracts signals. This causes CNN models (e.g., ResNet [1]) to degrade and predict erroneous positions. Simply updating the training dataset overwrites valuable older knowledge, and fine-tuning is expensive. Continual Learning promises to remember old environments, so that new data may not be required at all, especially if changes repeat often.
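To make the setting concrete, the following is a minimal sketch of a CIR-fingerprinting regressor, not the architecture of [1]: it assumes CIRs arrive as fixed-length vectors whose real and imaginary parts form two input channels, and all layer sizes as well as the name CIRPositioningCNN are illustrative.

```python
import torch
import torch.nn as nn

class CIRPositioningCNN(nn.Module):
    """Toy 1D CNN that regresses a 2D position from a CIR fingerprint."""

    def __init__(self, n_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the delay axis
        )
        self.head = nn.Linear(64, 2)  # regression head: (x, y) position

    def forward(self, cir: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(cir).squeeze(-1))

model = CIRPositioningCNN()
positions = model(torch.randn(8, 2, 256))  # 8 fingerprints -> 8 (x, y) estimates
```

When the environment changes, e.g., the truck arrives, the input distribution shifts, and naively retraining on new fingerprints overwrites the weights that encoded the old environment.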

The choice of a suitable CL method depends on the application's requirements: Is the model's size bounded? Is the current task, i.e., the change in the 5G environment, known or unknown? Do tasks overlap or cycle? A diverse family of methods is tailored to such requirements, e.g., the regularization-based Elastic Weight Consolidation [2], the structural Progressive Neural Networks [3], and the replay-based Gradient Episodic Memory [4]. The best-performing methods, however, use a small replay buffer that essentially functions as a sample-wise memory from which gradient-based models are re-trained; recent studies show that this approach is both effective and broadly applicable [6][7][8][10]. Structural methods are closest to the established practice of fine-tuning, but require task detection to select the correct regression head. Regularization-based methods severely underperform under fair experimental conditions [9].
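As a minimal sketch of such a replay mechanism, in the spirit of tiny episodic memories [10]: a reservoir-sampled buffer stores past fingerprints, and each training step rehearses a random subset alongside the current batch. The buffer capacity, the replay batch size, and the names ReservoirBuffer and replay_step are illustrative assumptions.

```python
import random
import torch

class ReservoirBuffer:
    """Fixed-size memory filled by reservoir sampling, so every sample
    seen so far is stored with equal probability."""

    def __init__(self, capacity: int):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)  # keep with prob. capacity/seen
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k: int):
        xs, ys = zip(*random.sample(self.data, min(k, len(self.data))))
        return torch.stack(xs), torch.stack(ys)

def replay_step(model, opt, loss_fn, x, y, buffer, replay_k: int = 32):
    """One optimizer step on the current batch plus a rehearsal batch."""
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    if buffer.data:  # rehearse old environments to counter forgetting
        xr, yr = buffer.sample(replay_k)
        loss = loss + loss_fn(model(xr), yr)
    loss.backward()
    opt.step()
    for xi, yi in zip(x, y):  # store current samples for future rehearsal
        buffer.add(xi.detach(), yi.detach())
```

With cyclic changes such as the recurring delivery truck, the buffer retains fingerprints from both environment states, so the model keeps rehearsing the old state even while training on the new one.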

Hence, this thesis focuses on evaluating suitable replay-based or structural Continual Learning methods for changing 5G positioning environments. First, the baseline methods' forgetting is evaluated in changing environments. Then, replay-based methods are extended with uncertainty quantification, e.g., deep ensembles [16], or the replay buffer is made more useful, e.g., via generative models [14][17] or Mixup data augmentation [15]. Alternatively, structural methods are extended, e.g., with task detection, and evaluated for the 5G application. Optionally, the number of samples the methods require is minimized.
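As an example of the augmentation direction, here is a hedged sketch of Mixup [15] applied to (replayed) fingerprints. Since positioning is a regression task, targets are interpolated exactly like inputs; the value of alpha and the in-batch pairing are illustrative choices, not prescribed by [15] for this application.

```python
import torch

def mixup(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.4):
    """Convexly combine each sample (and its target) with a random
    partner from the same batch, yielding virtual training points."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]
```

Applied to samples drawn from the replay buffer, this stretches a small memory into a much larger set of virtual training points.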

References:
[1] Niitsoo, A., Edelhäußer, T., Eberlein, E., Hadaschik, N., & Mutschler, C. (2019). A deep learning approach to position estimation from channel impulse responses. Sensors, 19(5), 1064.
[2] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., … & Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13), 3521-3526.
[3] Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., … & Hadsell, R. (2016). Progressive neural networks. arXiv preprint arXiv:1606.04671.
[4] Lopez-Paz, D., & Ranzato, M. A. (2017). Gradient episodic memory for continual learning. arXiv preprint arXiv:1706.08840.
[5] Continual Learning Workshop at NeurIPS 2018. https://sites.google.com/view/continual2018/home
[6] Prabhu, A., Torr, P. H., & Dokania, P. K. (2020, August). GDumb: A simple approach that questions our progress in continual learning. In European Conference on Computer Vision (pp. 524-540). Springer, Cham.
[7] De Lange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., … & Tuytelaars, T. (2019). Continual learning: A comparative study on how to defy forgetting in classification tasks. arXiv preprint arXiv:1909.08383, 2(6).
[8] Mai, Z., Li, R., Jeong, J., Quispe, D., Kim, H., & Sanner, S. (2021). Online continual learning in image classification: An empirical survey. arXiv preprint arXiv:2101.10423.
[9] Farquhar, S., & Gal, Y. (2018). Towards robust evaluations of continual learning. arXiv preprint arXiv:1805.09733.
[10] Chaudhry, A., Rohrbach, M., Elhoseiny, M., Ajanthan, T., Dokania, P. K., Torr, P. H., & Ranzato, M. A. (2019). On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486.
[11] Aljundi, R., Caccia, L., Belilovsky, E., Caccia, M., Lin, M., Charlin, L., & Tuytelaars, T. (2019). Online continual learning with maximally interfered retrieval. arXiv preprint arXiv:1908.04742.
[12] Aljundi, R., Lin, M., Goujaud, B., & Bengio, Y. (2019). Gradient based sample selection for online continual learning. arXiv preprint arXiv:1903.08671.
[13] Wiewel, F., & Yang, B. (2021, January). Entropy-based Sample Selection for Online Continual Learning. In 2020 28th European Signal Processing Conference (EUSIPCO) (pp. 1477-1481). IEEE.
[14] Shin, H., Lee, J. K., Kim, J., & Kim, J. (2017). Continual learning with deep generative replay. arXiv preprint arXiv:1705.08690.
[15] Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2017). mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412.
[16] Lakshminarayanan, B., Pritzel, A., & Blundell, C. (2016). Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474.
[17] Goodfellow, I. (2017). NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv preprint arXiv:1701.00160.