
A Quantitative Comparison of Image Classification models under Adversarial attacks and defenses

EasyChair Preprint no. 5946

5 pages
Date: June 28, 2021

Abstract

In this paper, we compare the performance of two state-of-the-art model architectures under adversarial attacks: inputs crafted to trick trained machine learning models. Both models perform commendably on the popular image classification dataset CIFAR-10. To generate adversarial examples, we use two strategies: a widely used attack based on the L∞ metric, and a more recent technique that produces a fundamentally different class of adversarial examples using the Wasserstein distance. We also apply two adversarial defenses: input preprocessing and adversarial training. The comparative results show that even these state-of-the-art architectures remain susceptible to adversarial attacks. We conclude that further study of adversarial defenses is needed and that current defense techniques should be adopted in real-world applications.
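The abstract does not name the specific L∞-metric attack used; the canonical one-step version is the Fast Gradient Sign Method (FGSM), which perturbs every pixel by at most ε in the direction that increases the loss. As a minimal illustrative sketch (the linear "classifier" and its gradient here are toy stand-ins, not the paper's models):

```python
import numpy as np

def fgsm_linf(x, grad, epsilon):
    """One-step L-infinity attack in the spirit of FGSM: move each pixel
    by epsilon in the sign direction of the loss gradient, then clip
    back to the valid pixel range [0, 1]."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy example: a linear score w . x with loss -score, so the loss
# gradient w.r.t. x is simply -w (purely illustrative).
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=(8, 8))   # stand-in for an image
w = rng.normal(size=(8, 8))
grad = -w
x_adv = fgsm_linf(x, grad, epsilon=0.03)

# The perturbation never exceeds epsilon in the L-infinity norm.
print(np.max(np.abs(x_adv - x)))  # <= 0.03 (up to float rounding)
```

The defining property of an L∞ attack is visible here: no single pixel moves by more than ε, which is what makes the perturbation imperceptible while still shifting the model's decision.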

Keyphrases: adversarial, adversarial attacks, adversarial defences, computer vision, feature squeezing, Lp-norm, median filter, Vision Transformer, Wasserstein, Wide ResNet
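The input-preprocessing defense referenced above (feature squeezing via a median filter, per the keyphrases) can be sketched as follows; this is a plain NumPy illustration, not the paper's implementation:

```python
import numpy as np

def median_filter(img, k=3):
    """Feature-squeezing defense: replace each pixel with the median of
    its k x k neighborhood. High-frequency adversarial noise is smoothed
    away while the coarse image structure survives."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single-pixel adversarial "spike" on a flat image is removed entirely.
img = np.full((5, 5), 0.5)
img[2, 2] = 1.0
squeezed = median_filter(img)
print(squeezed[2, 2])  # 0.5
```

Filtering is applied to the input before classification, so the defense needs no changes to the trained model, in contrast to adversarial training, which retrains the model on attacked examples.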

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:5946,
  author = {Sarthak Kathuria and Kartikeya Khullar and Nishant Chahar and Prince Gupta and Preeti Kaur},
  title = {A Quantitative Comparison of Image Classification models under Adversarial attacks and defenses},
  howpublished = {EasyChair Preprint no. 5946},
  year = {EasyChair, 2021}}