Archive for the 'Machine Learning' Category

Dependable and Secure Machine Learning

Saturday, July 7th, 2018

I co-organized, with Homa Alemzadeh and Karthik Pattabiraman, a workshop on trustworthy machine learning attached to DSN 2018, in Luxembourg: DSML: Dependable and Secure Machine Learning.

Feature Squeezing at NDSS

Sunday, February 25th, 2018

Weilin Xu presented Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks at the Network and Distributed System Security Symposium 2018. San Diego, CA. 21 February 2018.



Paper: Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. NDSS 2018. [PDF]

Project Site

In the Red Corner…

Monday, August 7th, 2017

The Register has a story on the work Anant Kharkar and collaborators at Endgame, Inc. are doing on using reinforcement learning to find evasive malware: In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it, by Katyanna Quach, The Register, 2 August 2017.



Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either it’s marketing hype and not really AI – or it’s true, in which case don’t forget that such systems can still be hoodwinked.

It’s relatively easy to trick machine-learning models – especially in image recognition. Change a few pixels here and there, and an image of a bus can be warped so that the machine thinks it’s an ostrich. Now take that thought and extend it to so-called next-gen antivirus.

The researchers from Endgame and the University of Virginia are hoping that by integrating the malware-generating system into OpenAI’s Gym platform, more developers will help sniff out more adversarial examples to improve machine-learning virus classifiers.

Although Evans believes that Endgame’s research is important, using such a method to beef up security “reflects the immaturity” of AI and infosec. “It’s mostly experimental and the effectiveness of defenses is mostly judged against particular known attacks, but doesn’t say much about whether it can work against newly discovered attacks,” he said.

“Moving forward, we need more work on testing machine learning systems, reasoning about their robustness, and developing general methods for hardening classifiers that are not limited to defending against particular attacks. More broadly, we need ways to measure and build trustworthiness in AI systems.”

The research has been summarized as a paper, here if you want to check it out in more detail, or see the upstart’s code on Github.

Feature Squeezing: Detecting Adversarial Examples

Monday, April 10th, 2017

Although deep neural networks (DNNs) have achieved great success in many computer vision tasks, recent studies have shown they are vulnerable to adversarial examples. Such examples, typically generated by adding small but purposeful distortions, can frequently fool DNN models. Previous work on defending against adversarial examples has mostly focused on refining the DNN models themselves, and has either shown limited success or suffered from expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample.

By comparing a DNN model’s prediction on the original input with that on the squeezed input, feature squeezing detects adversarial examples with high accuracy and few false positives. If the original and squeezed examples produce substantially different outputs from the model, the input is likely to be adversarial. By measuring the disagreement among predictions and selecting a threshold value, our system outputs the correct prediction for legitimate examples and rejects adversarial inputs.
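
To make the detection step concrete, here is a rough sketch (illustrative only, not the project's code): it assumes a model object whose predict method returns a softmax probability vector and a list of squeezing functions, and it rejects an input whose squeezed predictions disagree with the original prediction by more than a chosen L1 threshold.

import numpy as np

def detect_adversarial(model, x, squeezers, threshold):
    # model.predict is assumed to return a softmax probability vector;
    # squeezers is a list of functions mapping an input to a squeezed
    # version of it; threshold is chosen on held-out legitimate data.
    p_orig = model.predict(x)
    score = max(np.abs(p_orig - model.predict(squeeze(x))).sum()
                for squeeze in squeezers)
    return score > threshold  # True means reject the input as adversarial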

So far, we have explored two instances of feature squeezing: reducing the color bit depth of each pixel and smoothing using a spatial filter. These strategies are straightforward, inexpensive, and complementary to defensive methods that operate on the underlying model, such as adversarial training.
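
As a sketch of what those two squeezers might look like (assuming pixel values scaled to [0, 1]; this is an illustration, not the project's released code), bit depth reduction quantizes each pixel to a small number of levels, and spatial smoothing can be done with a small median filter:

import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits):
    # Quantize pixel values (assumed scaled to [0, 1]) to 2**bits levels,
    # then map them back into [0, 1].
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, size=2):
    # Spatial smoothing: replace each pixel by the median of its
    # size-by-size neighborhood (fine for a grayscale image like MNIST).
    return median_filter(x, size=size)

A squeezer list such as [lambda x: reduce_bit_depth(x, 1), median_smooth] could then be passed to a detector like the one sketched above.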

The figure shows the histogram of the L1 scores on the MNIST dataset between the original and squeezed samples, for 1000 non-adversarial examples as well as 1000 adversarial examples generated using both the Fast Gradient Sign Method and the Jacobian-based Saliency Map Approach. Over the full MNIST testing set, the detection accuracy is 99.74% (only 22 false positives out of 5000 examples).
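
One simple way to pick the threshold (an illustration of one possible heuristic, not necessarily how the paper selects it) is to compute the disagreement scores on a held-out set of legitimate examples and take a high quantile, which directly bounds the expected false-positive rate:

import numpy as np

def l1_score(model, x, squeezers):
    # Largest L1 distance between the prediction on x and the prediction
    # on any squeezed version of x.
    p = model.predict(x)
    return max(np.abs(p - model.predict(squeeze(x))).sum()
               for squeeze in squeezers)

def choose_threshold(model, legit_xs, squeezers, target_fpr=0.005):
    # Score a held-out set of legitimate examples and take the
    # (1 - target_fpr) quantile, so roughly a target_fpr fraction of
    # legitimate inputs end up rejected.
    scores = np.array([l1_score(model, x, squeezers) for x in legit_xs])
    return float(np.quantile(scores, 1.0 - target_fpr))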

For more information, see the paper:

Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv preprint, 4 April 2017. [PDF]

Project Site: EvadeML

Enigma 2017 Talk: Classifiers under Attack

Monday, March 6th, 2017

The video for my Enigma 2017 talk, “Classifiers under Attack” is now posted:



The talk focuses on work with Weilin Xu and Yanjun Qi on automatically evading malware classifiers using techniques from genetic programming. (See EvadeML.org for more details and links to code and papers, although some of the work I talked about at Enigma has not yet been published.)

Enigma was an amazing conference – one of the most worthwhile, and definitely the most diverse security/privacy conference I’ve been to in my career, in terms of where people were coming from (nearly exactly 50% from industry and 50% from academia/government/non-profits), intellectual variety (talks ranging from systems and crypto to neuroscience, law, and journalism), and the demographics of the attendees and speakers (not to mention a way-cool stage setup).

The model of having speakers do on-line practice talks with their session was also very valuable (Enigma requires speakers to agree to do three on-line practice talk sessions before the conference, and from what I hear most speakers and sessions did cooperate with this, and it showed in the quality of the sessions) and is something I hope other conferences will be able to adopt. You end up with talks that fit together, build on things others present, and avoid unnecessary duplication, as well as improving the individual talks themselves.

Private Multi‑Party Machine Learning

Thursday, August 18th, 2016

I’m co-organizing a workshop to be held in conjunction with NIPS on Private Multi‑Party Machine Learning, along with Borja Balle, Aurélien Bellet, and Adrià Gascón. The one-day workshop will be held Dec 9 or Dec 10 in Barcelona.

NIPS workshops are different from typical workshops attached to computer security conferences: they feature lots of invited talks (and we have some great speakers lined up for PMPML16), but there is also an opportunity for researchers to submit short papers to be presented at the workshop as either short talks or posters.