O’Reilly Security 2016: Classifiers Under Attack
4 November 2016

I gave a talk on Weilin Xu’s work (in collaboration with Yanjun Qi) on evading machine learning classifiers at the O’Reilly Security Conference in New York: Classifiers Under Attack, 2 November 2016.
Machine-learning models are popular in security tasks such as malware detection, network intrusion detection, and spam detection. These models can achieve extremely high accuracy on test datasets and are widely used in practice.
However, these results hold only for the particular test datasets on which the models are evaluated. Unlike most other applications of machine learning, security tasks involve adversaries who respond to the classifier. For example, attackers may generate new malware deliberately designed to evade existing classifiers. This breaks the core assumption of machine-learning models that the training data and the operational data come from the same distribution. As a result, it is important to consider how attackers will attempt to disrupt or evade the resulting models.
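To make the evasion setting concrete, here is a minimal sketch of an evasion attack against a toy linear spam classifier (illustrative only, not an example from the talk): the attacker greedily flips binary bag-of-words features under its control until the message is classified as benign. The dataset, the features, and the greedy strategy are all hypothetical.

```python
# Illustrative sketch (not from the talk): evading a toy linear spam
# classifier by greedily flipping attacker-controlled binary features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical bag-of-words features; features 0-2 act as "spammy
# words", and a message containing at least two of them is spam.
X = rng.integers(0, 2, size=(200, 10)).astype(float)
y = (X[:, :3].sum(axis=1) >= 2).astype(int)   # 1 = spam, 0 = ham

clf = LogisticRegression().fit(X, y)

# Start from a message the classifier flags as spam.
x = X[y == 1][0].copy()

# Greedy evasion: flip the single feature that most reduces the spam
# score, and repeat until the classifier says "ham". A real attacker
# is also constrained to preserve the message's malicious
# functionality; this sketch ignores that constraint.
for _ in range(10):
    if clf.predict([x])[0] == 0:
        break
    flips = []
    for j in range(len(x)):
        x_try = x.copy()
        x_try[j] = 1 - x_try[j]
        flips.append((clf.predict_proba([x_try])[0, 1], j))
    _, j = min(flips)
    x[j] = 1 - x[j]

print("evaded:", clf.predict([x])[0] == 0)
```

A loop like this can be aimed at any classifier the attacker can query for scores, which is what makes the fixed-distribution assumption so fragile in adversarial settings.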
David Evans provides an introduction to the techniques adversaries use to circumvent machine-learning classifiers and presents case studies of classifiers under attack. David then outlines methods for automatically predicting the robustness of a classifier used in an adversarial context, as well as techniques for hardening a classifier to decrease its vulnerability to attackers.
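One widely studied hardening approach (named here as general background, not as the specific method from the talk) is adversarial retraining: generate evading variants of known-bad samples, add them back to the training set with their true labels, and refit. A minimal self-contained sketch, reusing the toy setup above; all data and parameters are hypothetical.

```python
# Illustrative sketch of adversarial retraining (general background,
# not the specific hardening method from the talk).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(400, 10)).astype(float)
y = (X[:, :3].sum(axis=1) >= 2).astype(int)   # toy "spam" rule

def evade(clf, x, budget=3):
    """Greedily flip up to `budget` features to lower the spam score."""
    x = x.copy()
    for _ in range(budget):
        if clf.predict([x])[0] == 0:
            break
        flips = []
        for j in range(len(x)):
            x_try = x.copy()
            x_try[j] = 1 - x_try[j]
            flips.append((clf.predict_proba([x_try])[0, 1], j))
        _, j = min(flips)
        x[j] = 1 - x[j]
    return x

clf = LogisticRegression().fit(X, y)
for _ in range(3):
    # Generate evading variants of the spam samples, keep their true
    # label, and retrain so the classifier learns to catch them too.
    X_adv = np.array([evade(clf, x) for x in X[y == 1]])
    X = np.vstack([X, X_adv])
    y = np.concatenate([y, np.ones(len(X_adv), dtype=int)])
    clf = LogisticRegression().fit(X, y)
```

Retraining typically trades some accuracy on clean data for robustness, and the attacker can adapt again to the retrained model, so in practice this becomes an arms race; that is why measuring a classifier's robustness under adaptive attack matters.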