Adversarial Machine Learning: Are We Playing the Wrong Game?
10 June 2017

I gave a talk at Berkeley’s International Computer Science Institute on Adversarial Machine Learning: Are We Playing the Wrong Game? (8 June 2017), focusing on the work Weilin Xu has been doing (in collaboration with Yanjun Qi and me) on adversarial machine learning.
Abstract
Machine learning classifiers are increasingly popular for security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can be thwarted by motivated adversaries who adaptively construct adversarial examples that exploit flaws in the classifier’s model. Much work on adversarial examples, including Carlini and Wagner’s attacks, which are the strongest results to date, has focused on finding small distortions to inputs that fool a classifier. Previous defenses have proven both ineffective and expensive in practice. In this talk, I’ll describe a new, very simple strategy, feature squeezing, that can be used to harden classifiers by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing many different inputs in the original space into a single squeezed sample. Adversarial examples can then be detected by comparing the model’s predictions on the original and squeezed inputs. In practice, of course, adversaries are not limited to small distortions in a particular metric space. Indeed, it may be possible to make large changes to an input without losing its intended malicious behavior. We have developed an evolutionary framework to search for such adversarial examples, and demonstrated that it can automatically find evasive variants against state-of-the-art classifiers. This suggests that work on adversarial machine learning needs a better definition of adversarial examples, and needs to make progress towards understanding how classifiers and oracles perceive samples differently.
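
To make the detection idea concrete, here is a minimal sketch of squeezing-based detection, assuming a Keras-style model with a predict method and image inputs scaled to [0, 1]; the bit-depth squeezer and the threshold value are illustrative placeholders, not the exact configuration used in our experiments.

    import numpy as np

    def squeeze_bit_depth(x, bits=4):
        # Reduce color bit depth, coalescing many nearby inputs
        # in the original space onto a single squeezed sample.
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    def is_adversarial(model, x, threshold=1.0):
        # Compare the model's predictions on the original and squeezed
        # inputs; a large disagreement (L1 distance between the softmax
        # output vectors) flags the input as a likely adversarial example.
        p_original = model.predict(x[np.newaxis, ...])[0]
        p_squeezed = model.predict(squeeze_bit_depth(x)[np.newaxis, ...])[0]
        score = float(np.abs(p_original - p_squeezed).sum())
        return score > threshold

The threshold here would be chosen by validation on known benign and adversarial inputs; legitimate inputs change little under squeezing, while adversarial perturbations tend to be destroyed by it.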