Vahid Behzadan, Arslan Munir
Proc. of the First International Workshop on Artificial Intelligence Safety Engineering (WAISE) at the International Conference on Computer Safety, Reliability, and Security (SAFECOMP), Västerås, Sweden, September 2018 (accepted for publication).
Publication year: 2018

Recent developments have established the vulnerability of deep reinforcement learning to policy manipulation attacks via intentionally perturbed inputs, known as adversarial examples. In this work, we propose a technique for mitigating such attacks based on the addition of noise to the parameter space of deep policy learners during training. We experimentally verify that parameter-space noise reduces the transferability of adversarial examples, and demonstrate the promising performance of this technique in mitigating the impact of both white-box and black-box attacks at test time and training time.
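To make the core idea concrete, the sketch below illustrates parameter-space noise injection during training: zero-mean Gaussian noise is added directly to the policy network's weights for a rollout and then removed before the gradient update. This is a minimal illustration of the general technique, not the authors' exact implementation; the PyTorch framework, network architecture, and noise scale sigma are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Minimal Q-network; the architecture here is an illustrative assumption.
class QNetwork(nn.Module):
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def perturb_parameters(model, sigma=0.1):
    """Add zero-mean Gaussian noise to every parameter in place.

    Returns the noise tensors so the perturbation can be undone after
    the noisy rollout. The scale sigma is an illustrative value; in
    practice it would be tuned or adapted during training.
    """
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.randn_like(p) * sigma
            p.add_(noise)
            noises.append(noise)
    return noises

def remove_perturbation(model, noises):
    """Undo a previous perturbation, restoring the clean parameters."""
    with torch.no_grad():
        for p, noise in zip(model.parameters(), noises):
            p.sub_(noise)

# Usage: act with the perturbed parameters when collecting experience,
# then restore the clean parameters before computing the learning update.
q = QNetwork(obs_dim=4, n_actions=2)
obs = torch.randn(1, 4)
noises = perturb_parameters(q, sigma=0.1)
noisy_action = q(obs).argmax(dim=1)  # action from the noise-perturbed policy
remove_perturbation(q, noises)
```

Because the noise perturbs weights rather than observations, the resulting randomness is consistent across an episode, which is the property that plausibly disrupts the transferability of input-space adversarial examples studied in the paper.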