Adjectives Can Reveal Gender Biases Within NLP Models

Post by Jason Briegel and Hannah Chen. Because NLP models are trained on human corpora (and now, increasingly, on text generated by other NLP models that were themselves trained on human language), they are prone to inheriting common human stereotypes and biases. This is problematic because, given their growing prominence, they may further propagate these stereotypes (Sun et al., 2019). For example, there is growing interest in mitigating bias in machine translation, where systems such as Google Translate have been observed to default to translating gender-neutral pronouns as male pronouns, even in the presence of feminine cues (Savoldi et al.

Read More…

MICO Challenge in Membership Inference

Anshuman Suri wrote up an interesting post on his experience with the MICO Challenge, a membership inference competition that was part of SaTML. Anshuman placed second in the competition (on the CIFAR data set), where submissions are ranked by true positive rate at a 0.1 false positive rate over a set of models (some trained using differential privacy and some without). Anshuman’s post describes the methods he used and his experience in the competition: My submission to the MICO Challenge.
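
The ranking metric is straightforward to compute from per-example membership scores. Below is a minimal sketch of true positive rate at a 10% false positive rate for a single set of scores; it is an illustrative toy under assumed inputs, not the official MICO scoring code:

```python
# Minimal sketch: true positive rate at a fixed 10% false positive rate,
# computed from per-example membership scores (higher = more likely a member).
# Illustrative only -- not the official MICO scoring code.
import numpy as np

def tpr_at_fpr(scores, is_member, target_fpr=0.1):
    scores = np.asarray(scores, dtype=float)
    is_member = np.asarray(is_member, dtype=int)
    # Pick the threshold so roughly target_fpr of non-members score above it.
    threshold = np.quantile(scores[is_member == 0], 1.0 - target_fpr)
    predictions = scores > threshold
    return predictions[is_member == 1].mean()

# Toy usage with synthetic scores for 100 members and 100 non-members.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])
labels = np.concatenate([np.ones(100), np.zeros(100)])
print(f"TPR @ 10% FPR: {tpr_at_fpr(scores, labels):.3f}")
```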

Read More…

Uh-oh, there's a new way to poison code models

Jack Clark’s Import AI, 16 Jan 2023 includes a nice description of our work on TrojanPuzzle: Uh-oh, there's a new way to poison code models - and it's really hard to detect: …TROJANPUZZLE is a clever way to trick your code model into betraying you - if you can poison the underlying dataset… Researchers with the University of California, Santa Barbara, Microsoft Corporation, and the University of Virginia have come up with some clever, subtle ways to poison the datasets used to train code models.

Read More…

Trojan Puzzle attack trains AI assistants into suggesting malicious code

Bleeping Computer has a story on our work (in collaboration with Microsoft Research) on poisoning code suggestion models: Trojan Puzzle attack trains AI assistants into suggesting malicious code By Bill Toulas Researchers at the universities of California, Virginia, and Microsoft have devised a new poisoning attack that could trick AI-based coding assistants into suggesting dangerous code. Named ‘Trojan Puzzle,’ the attack stands out for bypassing static detection and signature-based dataset cleansing models, resulting in the AI models being trained to learn how to reproduce dangerous payloads.

Read More…

Dissecting Distribution Inference

(Cross-post by Anshuman Suri) Distribution inference attacks aim to infer statistical properties of the data used to train machine learning models. These attacks are sometimes surprisingly potent, as we demonstrated in previous work.

KL Divergence Attack

Most distribution inference attacks involve training a meta-classifier, either using model parameters in white-box settings (Ganju et al., Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations, CCS 2018), or using model predictions in black-box scenarios (Zhang et al.
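
As a rough illustration of the black-box flavor of this idea, the sketch below compares a victim model’s prediction vectors with those of shadow models trained on two candidate distributions and guesses whichever is closer in average KL divergence. The helper names and setup are illustrative assumptions, not the attack implementation from the paper:

```python
# Hedged sketch of a KL-divergence-style black-box distribution inference test.
# Assumes the adversary has shadow models trained on each candidate
# distribution and prediction probabilities from all models on shared probe
# inputs. Illustrative only -- not the paper's attack implementation.
import numpy as np

def mean_kl(p, q, eps=1e-12):
    """Average KL(p || q) over rows of two prediction-probability matrices."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return np.mean(np.sum(p * np.log(p / q), axis=1))

def guess_distribution(victim_preds, shadow_preds_d0, shadow_preds_d1):
    """Guess which candidate distribution the victim was trained on:
    the one whose shadow models' predictions are closer in KL divergence."""
    kl0 = np.mean([mean_kl(victim_preds, s) for s in shadow_preds_d0])
    kl1 = np.mean([mean_kl(victim_preds, s) for s in shadow_preds_d1])
    return 0 if kl0 < kl1 else 1
```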

Read More…

Attribute Inference attacks are really Imputation

Post by Bargav Jayaraman. Attribute inference attacks have been shown by prior work to pose a privacy threat to ML models. However, these works assume knowledge of the training distribution, and we show that in such cases these attacks do no better than a data imputation attack that does not have access to the model. We explore the attribute inference risk in cases where the adversary has limited or no prior knowledge of the training distribution, and show that our white-box attribute inference attack (which uses neuron activations to infer the unknown sensitive attribute) surpasses imputation in these data-constrained cases.
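
To make the comparison concrete, here is a hedged toy sketch that contrasts an imputation baseline with a white-box attack that additionally uses the target model’s first-layer activations (with the unknown attribute filled in as a default value). The synthetic data, target model, and attack classifier are illustrative stand-ins, not the paper’s implementation, and the toy numbers are not meant to reproduce its results:

```python
# Toy contrast between imputation and a white-box attribute inference attack.
# All data and models are synthetic, illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 4000
sensitive = rng.integers(0, 2, n)                  # attribute the adversary wants
known = rng.normal(size=(n, 5)) + 0.4 * sensitive[:, None]
labels = ((known.sum(axis=1) + sensitive) > 2.5).astype(int)

# Target model trained with the sensitive attribute as one of its inputs.
target = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800, random_state=0)
target.fit(np.hstack([known, sensitive[:, None]]), labels)

def first_layer_activations(features):
    # ReLU activations of the target's first hidden layer (white-box access),
    # with the unknown sensitive attribute filled in as 0.
    padded = np.hstack([features, np.zeros((len(features), 1))])
    return np.maximum(0.0, padded @ target.coefs_[0] + target.intercepts_[0])

kn_tr, kn_te, s_tr, s_te = train_test_split(known, sensitive, random_state=0)

# Imputation baseline: no model access, only the known features.
imputer = LogisticRegression(max_iter=1000).fit(kn_tr, s_tr)

# White-box attack: known features plus the target's internal activations.
attacker = LogisticRegression(max_iter=1000).fit(
    np.hstack([kn_tr, first_layer_activations(kn_tr)]), s_tr)

print("imputation accuracy:      ", imputer.score(kn_te, s_te))
print("white-box attack accuracy:", attacker.score(
    np.hstack([kn_te, first_layer_activations(kn_te)]), s_te))
```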

Read More…

Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models

Post by Hannah Chen. Our work on balanced adversarial training looks at how to train models that are robust to two different types of adversarial examples: Hannah Chen, Yangfeng Ji, David Evans. Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models. In The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), Abu Dhabi, 7-11 December 2022. [ArXiv]

Adversarial Examples

At the broadest level, an adversarial example is an input crafted intentionally to confuse a model.
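
Roughly, a fickle adversarial example is a small, meaning-preserving edit that flips the model’s prediction, while an obstinate one is a meaning-changing edit that fails to flip it. The toy bag-of-words sentiment classifier below just checks for each error type; the tiny dataset and word substitutions are purely illustrative assumptions, not the models or attacks studied in the paper:

```python
# Toy illustration of fickle vs. obstinate errors with a bag-of-words
# sentiment classifier. Dataset and substitutions are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great movie", "wonderful film", "terrible movie", "awful film"]
train_labels = [1, 1, 0, 0]          # 1 = positive sentiment, 0 = negative
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

original = "great movie"
fickle_edit = "wonderful movie"      # synonym swap: label should stay the same
obstinate_edit = "terrible movie"    # antonym swap: label should flip

pred = lambda text: clf.predict([text])[0]
# Fickle error: the prediction flips on the meaning-preserving edit.
print("fickle error:   ", pred(fickle_edit) != pred(original))
# Obstinate error: the prediction stays put on the meaning-changing edit.
print("obstinate error:", pred(obstinate_edit) == pred(original))
```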

Read More…

Best Submission Award at VISxAI 2022

Poisoning Attacks and Subpopulation Susceptibility by Evan Rose, Fnu Suya, and David Evans won the Best Submission Award at the 5th Workshop on Visualization for AI Explainability. Undergraduate student Evan Rose led the work and presented it at VISxAI in Oklahoma City, 17 October 2022.

Congratulations to #VISxAI's Best Submission Awards:
🏆 K-Means Clustering: An Explorable Explainer by @yizhe_ang https://t.co/BULW33WPzo
🏆 Poisoning Attacks and Subpopulation Susceptibility by Evan Rose, @suyafnu, and @UdacityDave https://t.

Read More…

Visualizing Poisoning

How does a poisoning attack work, and why are some groups more susceptible to it than others? We’ve posted work that helps explain how poisoning attacks work, with some engaging visualizations:

Poisoning Attacks and Subpopulation Susceptibility: An Experimental Exploration on the Effectiveness of Poisoning Attacks
Evan Rose, Fnu Suya, and David Evans

Follow the link to try the interactive version! Machine learning is susceptible to poisoning attacks, in which adversaries inject maliciously crafted training data into the training set to induce specific model behavior.
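
For a rough sense of the mechanics, the sketch below runs a simple label-flipping attack against a linear model, concentrating the flipped-label points on one subpopulation. The data, poison budget, and model are illustrative assumptions and far simpler than the attacks explored in the interactive article:

```python
# Toy label-flipping poisoning attack aimed at one subpopulation.
# Data, poison budget, and model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two subpopulations of the positive class and one negative cluster.
sub_a = rng.normal([2, 2], 0.4, size=(100, 2))   # targeted subpopulation
sub_b = rng.normal([4, 0], 0.4, size=(100, 2))
neg = rng.normal([0, 0], 0.4, size=(200, 2))
X = np.vstack([sub_a, sub_b, neg])
y = np.array([1] * 200 + [0] * 200)

clean = LogisticRegression().fit(X, y)
print("clean accuracy on targeted subpopulation:   ",
      clean.score(sub_a, np.ones(100)))

# Poison: inject flipped-label points concentrated on the targeted subpopulation.
poison_X = rng.normal([2, 2], 0.4, size=(120, 2))
poison_y = np.zeros(120)
poisoned = LogisticRegression().fit(np.vstack([X, poison_X]),
                                    np.concatenate([y, poison_y]))
print("poisoned accuracy on targeted subpopulation:",
      poisoned.score(sub_a, np.ones(100)))
print("poisoned accuracy on other subpopulation:   ",
      poisoned.score(sub_b, np.ones(100)))
```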

Read More…
