Poisoning LLMs
I’m quoted in this story by Rob Lemos about poisoning code models (the CodeBreaker paper in USENIX Security 2024 by Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, and Yuan Hong), that considers a similar threat to our TrojanPuzzle work: Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code Dark Reading, 20 August 2024 CodeBreaker uses code transformations to create vulnerable code that continues to function as expected, but that will not be detected by major static analysis security testing.Google's Trail of Crumbs
Matt Stoller published my essay on Google’s decision to abandon its Privacy Sandbox Initiative in his Big newsletter:
For more technical background on this, see Minjun’s paper: Evaluating Google’s Protected Audience Protocol in PETS 2024.
Technology: US authorities survey AI ecosystem through antitrust lens
I’m quoted in this article for the International Bar Association: Technology: US authorities survey AI ecosystem through antitrust lens William Roberts, IBA US Correspondent Friday 2 August 2024 Antitrust authorities in the US are targeting the new frontier of artificial intelligence (AI) for potential enforcement action. … Jonathan Kanter, Assistant Attorney General for the Antitrust Division of the DoJ, warns that the government sees ‘structures and trends in AI that should give us pause’.Voice of America interview on ChatGPT
I was interviewed for a Voice of America story (in Russian) on the impact of chatGPT and similar tools.
Full story: https://youtu.be/dFuunAFX9y4
Uh-oh, there's a new way to poison code models
Jack Clark’s Import AI, 16 Jan 2023 includes a nice description of our work on TrojanPuzzle: #################################################### Uh-oh, there's a new way to poison code models - and it's really hard to detect: …TROJANPUZZLE is a clever way to trick your code model into betraying you - if you can poison the undelrying dataset… Researchers with the University of California, Santa Barbara, Microsoft Corporation, and the University of Virginia have come up with some clever, subtle ways to poison the datasets used to train code models.Trojan Puzzle attack trains AI assistants into suggesting malicious code
Bleeping Computer has a story on our work (in collaboration with Microsoft Research) on poisoning code suggestion models: Trojan Puzzle attack trains AI assistants into suggesting malicious code By Bill Toulas Researchers at the universities of California, Virginia, and Microsoft have devised a new poisoning attack that could trick AI-based coding assistants into suggesting dangerous code. Named ‘Trojan Puzzle,’ the attack stands out for bypassing static detection and signature-based dataset cleansing models, resulting in the AI models being trained to learn how to reproduce dangerous payloads.How to Hide a Backdoor
The Register has an article on our recent work on Stealthy Backdoors as Compression Artifacts: Thomas Claburn, How to hide a backdoor in AI software — Neural networks can be aimed to misbehave when squeezed, The Register, 5 May 2021.
Fact-checking Donald Trump’s tweet firing Christopher Krebs
I was a source for thie “Pants on Fire!” fact check by PolitiFact on Donald Trump’s tweet that fired Christopher Krebs claiming that “The recent statement by Chris Krebs on the security of the 2020 Election was highly inaccurate, in that there were massive improprieties and fraud - including dead people voting, Poll Watchers not allowed into polling locations, “glitches” in the voting machines which changed…” PolitiFact: Fact-checking Donald Trump’s tweet firing Christopher Krebs, 18 November 2020Voting Security
I was interviewed for a local news story by Daniel Grimes on election security: UVA cybersecurity expert: Virginia is one of the safer states to cast a ballot, NBC 29 News, 21 October 2020.