Reassessing EMNLP 2024’s Best Paper: Does Divergence-Based Calibration for Membership Inference Attacks Hold Up?

Anshuman Suri and Pratyush Maini wrote a blog about the EMNLP 2024 best paper award winner: Reassessing EMNLP 2024’s Best Paper: Does Divergence-Based Calibration for Membership Inference Attacks Hold Up?. As we explored in Do Membership Inference Attacks Work on Large Language Models?, to test a membership inference attack it is essentail to have a candidate set where the members and non-members are from the same distribution. If the distributions are different, the ability of an attack to distinguish members and non-members is indicative of distribution inference, not necessarily membership inference.

Read More…

Common Way To Test for Leaks in Large Language Models May Be Flawed

UVA News has an article on our LLM membership inference work: Common Way To Test for Leaks in Large Language Models May Be Flawed: UVA Researchers Collaborated To Study the Effectiveness of Membership Inference Attacks, Eric Williamson, 13 November 2024.

Poisoning LLMs

I’m quoted in this story by Rob Lemos about poisoning code models (the CodeBreaker paper in USENIX Security 2024 by Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, and Yuan Hong), that considers a similar threat to our TrojanPuzzle work: Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code Dark Reading, 20 August 2024 CodeBreaker uses code transformations to create vulnerable code that continues to function as expected, but that will not be detected by major static analysis security testing.

Read More…

Google's Trail of Crumbs

Matt Stoller published my essay on Google’s decision to abandon its Privacy Sandbox Initiative in his Big newsletter:

Google's Trail of Crumbs by Matt Stoller

Google is too big to get rid of cookies. Even when it wants to protect users, it can't.

Read on Substack

For more technical background on this, see Minjun’s paper: Evaluating Google’s Protected Audience Protocol in PETS 2024.

Technology: US authorities survey AI ecosystem through antitrust lens

I’m quoted in this article for the International Bar Association: Technology: US authorities survey AI ecosystem through antitrust lens William Roberts, IBA US Correspondent Friday 2 August 2024 Antitrust authorities in the US are targeting the new frontier of artificial intelligence (AI) for potential enforcement action. … Jonathan Kanter, Assistant Attorney General for the Antitrust Division of the DoJ, warns that the government sees ‘structures and trends in AI that should give us pause’.

Read More…

Voice of America interview on ChatGPT

I was interviewed for a Voice of America story (in Russian) on the impact of chatGPT and similar tools.

Full story: https://youtu.be/dFuunAFX9y4

Uh-oh, there's a new way to poison code models

Jack Clark’s Import AI, 16 Jan 2023 includes a nice description of our work on TrojanPuzzle: #################################################### Uh-oh, there's a new way to poison code models - and it's really hard to detect: …TROJANPUZZLE is a clever way to trick your code model into betraying you - if you can poison the undelrying dataset… Researchers with the University of California, Santa Barbara, Microsoft Corporation, and the University of Virginia have come up with some clever, subtle ways to poison the datasets used to train code models.

Read More…

Trojan Puzzle attack trains AI assistants into suggesting malicious code

Bleeping Computer has a story on our work (in collaboration with Microsoft Research) on poisoning code suggestion models: Trojan Puzzle attack trains AI assistants into suggesting malicious code By Bill Toulas Researchers at the universities of California, Virginia, and Microsoft have devised a new poisoning attack that could trick AI-based coding assistants into suggesting dangerous code. Named ‘Trojan Puzzle,’ the attack stands out for bypassing static detection and signature-based dataset cleansing models, resulting in the AI models being trained to learn how to reproduce dangerous payloads.

Read More…

How to Hide a Backdoor

The Register has an article on our recent work on Stealthy Backdoors as Compression Artifacts: Thomas Claburn, How to hide a backdoor in AI software — Neural networks can be aimed to misbehave when squeezed, The Register, 5 May 2021.

Fact-checking Donald Trump’s tweet firing Christopher Krebs

I was a source for thie “Pants on Fire!” fact check by PolitiFact on Donald Trump’s tweet that fired Christopher Krebs claiming that “The recent statement by Chris Krebs on the security of the 2020 Election was highly inaccurate, in that there were massive improprieties and fraud - including dead people voting, Poll Watchers not allowed into polling locations, “glitches” in the voting machines which changed…” PolitiFact: Fact-checking Donald Trump’s tweet firing Christopher Krebs, 18 November 2020

Read More…

All Posts by Category or Tags.