Enhancing Adversarial Robustness of Deep Neural Networks
Author: Jeffrey Zhang (M. Eng.)
Publisher:
Total Pages: 58
Release: 2019
ISBN-10: OCLC:1127291827
ISBN-13:
Rating: 4/5 (27 Downloads)
Download or read book Enhancing Adversarial Robustness of Deep Neural Networks, written by Jeffrey Zhang (M. Eng.). This book was released in 2019 with a total of 58 pages. Available in PDF, EPUB and Kindle.

Book excerpt: Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance the adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) proposed TRADES, a logit-based regularization objective that improves upon the robust optimization framework developed by Madry et al. (2018) [14, 9], achieving state-of-the-art adversarial accuracy on CIFAR10. In the realm of pretrain-then-tune models, Hendrycks et al. (2019) demonstrated that adversarially pretraining a model on ImageNet and then adversarially fine-tuning it on CIFAR10 greatly improves adversarial robustness. In this work, we propose Adversarial Regularization, another logit-based regularization framework that surpasses TRADES in adversarial generalization. Furthermore, we explore the impact of applying different types of adversarial training within the pretrain-then-tune paradigm.
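To make the "logit-based regularization" idea concrete, the sketch below shows a TRADES-style training objective in plain NumPy: a standard cross-entropy loss on the clean example plus a KL-divergence term that penalizes disagreement between the model's clean and adversarial predictions. This is an illustrative reconstruction of the general TRADES form, not the thesis's proposed Adversarial Regularization; the function names and the `beta` default are assumptions for the example.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    # negative log-probability of the true class
    p = softmax(logits)
    return -np.log(p[label])

def kl_divergence(p, q):
    # KL(p || q) for two discrete distributions
    return float(np.sum(p * (np.log(p) - np.log(q))))

def trades_style_loss(logits_clean, logits_adv, label, beta=6.0):
    # natural classification loss on the clean example
    natural = cross_entropy(logits_clean, label)
    # logit-based regularizer: KL between clean and adversarial predictions;
    # beta trades off natural accuracy against robustness (assumed default)
    robust = kl_divergence(softmax(logits_clean), softmax(logits_adv))
    return natural + beta * robust
```

When the adversarial logits equal the clean logits the KL term vanishes and the objective reduces to ordinary cross-entropy; as the adversarial example pushes the prediction away from the clean one, the penalty grows, which is the mechanism by which logit-based regularization encourages locally stable decisions.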