Adversarial Machine Learning Study Group

Meetings on Mondays, 11am (Summer 2020)

Leaders: Hannah Chen, Anshuman Suri

25 May 2020

Nicholas Carlini, Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples (USENIX Security 2019)
1 June 2020

Aleksander Madry, A New Perspective on Adversarial Perturbations (Simons Institute, July 2019)
8 June 2020

Sadia Afroz, How to Build Realistic Machine Learning Systems for Security? (USENIX Enigma 2020)

Ji Gao, Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers (Deep Learning and Security Workshop 2018)

15 June 2020 (Joint with Privacy Study Group)

Mathias Lécuyer, Certified Robustness to Adversarial Examples with Differential Privacy (IEEE Security and Privacy 2019)

Liwei Song, Membership Inference Attacks against Adversarially Robust Deep Learning Models (IEEE Security and Privacy 2019)

22 June 2020

Yujie Ji, Model-Reuse Attacks on Deep Learning Systems (ACM CCS 2018)

Roei Schuster, Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning (IEEE Security and Privacy 2020)

29 June 2020

Ambra Demontis, Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks (USENIX Security 2019)

Jianbo Chen, HopSkipJumpAttack: A Query-Efficient Decision-Based Attack (IEEE Security and Privacy 2020)

6 July 2020

Nicholas Carlini, Audio Adversarial Examples: Targeted Attacks on Speech-to-Text (IEEE Security and Privacy 2018)

Xuejing Yuan, CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition (USENIX Security 2018)

Takeshi Sugawara, Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems (USENIX Security 2020)

13 July 2020

Alhussein Fawzi, Universal Adversarial Perturbations (CVPR 2017)

Eric Wallace, Universal Adversarial Triggers for Attacking and Analyzing NLP (EMNLP-IJCNLP 2019)

20 July 2020

Michael Xie, Understanding and Mitigating the Tradeoff between Robustness and Accuracy (ICML 2020)

Eric Wong, Overfitting in Adversarially Robust Deep Learning (ICML 2020)

27 July 2020

Francesco Croce, Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks (ICML 2020)

Pratyush Maini, Adversarial Robustness Against the Union of Multiple Perturbation Models (ICML 2020)

Join our Slack group for announcements about future meetings.