New Paper: Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks


We are proud to announce that our latest paper, “Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks”, has been accepted for publication at the annual International Joint Conference on Neural Networks (IJCNN), the flagship conference of the IEEE Computational Intelligence Society and the International Neural Network Society.

In this paper, we propose a simple yet effective modification to the gradient calculation of state-of-the-art first-order adversarial attacks. Our approach computes the gradient direction of the adversarial attack as a weighted average over past gradients from the optimization history. This mitigates the problem of noisy optimization landscapes, which can make standard robustness evaluations unreliable.
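As an illustrative sketch of the core idea, the attack direction can be formed by averaging the gradients collected so far instead of using only the current one. The function below is a minimal, hypothetical implementation: the exponential-decay weighting and the `decay` parameter are assumptions for illustration, not the weighting scheme used in the paper.

```python
import numpy as np

def nonlocal_gradient(grad_history, weights=None):
    """Combine past attack gradients into one direction by weighted averaging.

    grad_history: list of gradient arrays from previous attack iterations,
    oldest first. weights: optional per-iteration weights; if omitted, a
    hypothetical exponential decay favoring recent gradients is used.
    """
    grads = np.stack(grad_history)
    if weights is None:
        # Assumed weighting for illustration: exponentially decaying
        # weights; the paper's actual sampling/weighting may differ.
        decay = 0.9
        n = len(grads)
        weights = np.array([decay ** (n - 1 - i) for i in range(n)])
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    # Contract the weight vector against the stacked gradients.
    return np.tensordot(weights, grads, axes=1)

# In a PGD-style loop, the update would use the sign of the averaged
# gradient rather than the sign of the latest gradient alone.
history = [np.array([1.0, -2.0]), np.array([3.0, 0.0])]
direction = np.sign(nonlocal_gradient(history))
```

Averaging over the history smooths out noise in the loss surface, so a single misleading gradient at one iterate has less influence on the update direction.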

Authors: Leo Schwinn, An Nguyen, René Raab, Dario Zanca, Bjoern Eskofier, Daniel Tenbrinck, Martin Burger