Meet Professor Suya!
Meet Assistant Professor Fnu Suya. His research interests include the application of machine learning techniques to security-critical domains and the vulnerabilities of machine learning models in the presence of adversaries, generally known as trustworthy machine learning.
— EECS (@EECS_UTK) October 7, 2024
Poisoning LLMs
I’m quoted in this story by Rob Lemos about poisoning code models (the CodeBreaker paper in USENIX Security 2024 by Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, and Yuan Hong), which considers a threat similar to the one in our TrojanPuzzle work:

Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code
Dark Reading, 20 August 2024

CodeBreaker uses code transformations to create vulnerable code that continues to function as expected, but that will not be detected by major static analysis security testing tools.
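The evasion idea is easy to illustrate. Below is a toy Python sketch (my own illustration, not code from the CodeBreaker paper): an insecure call containing a pattern a scanner would flag, such as `verify=False`, is re-encoded so the behavior survives while the literal pattern disappears from the source text.

```python
import base64

# Toy illustration (not from the CodeBreaker paper): hide an insecure
# call behind a string transformation so that pattern-based static
# analyzers never see the flagged keyword in the source.
vulnerable = "requests.get(url, verify=False)"  # disables TLS verification
encoded = base64.b64encode(vulnerable.encode()).decode()

# A poisoned completion could emit this equivalent, obfuscated form:
# it behaves the same at runtime, but no longer matches a scanner's
# signature for `verify=False`.
obfuscated = f"eval(compile(base64.b64decode('{encoded}'), '<payload>', 'eval'))"
print(obfuscated)
```

The paper itself develops much more sophisticated transformations; this sketch only shows why a signature-based scanner misses the transformed payload.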
The Mismeasure of Man and Models

Evaluating Allocational Harms in Large Language Models
Blog post written by Hannah Chen

Our work considers allocational harms that arise when model predictions are used to distribute scarce resources or opportunities.

Current Bias Metrics Do Not Reliably Reflect Allocation Disparities

Several methods have been proposed to audit large language models (LLMs) for bias when they are used in critical decision-making, such as resume screening for hiring. Yet these methods focus on predictions, without considering how the predictions are used to make decisions.
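To see how a small prediction-level gap can become a large allocation gap, here is a hedged toy sketch in Python (my illustration, with made-up numbers, not an experiment from the paper): two groups receive nearly identical score distributions, but a top-k selection rule amplifies the small difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores for two groups of 1,000 applicants each; the
# group means differ by only 0.02.
scores_a = rng.normal(0.52, 0.10, 1000)
scores_b = rng.normal(0.50, 0.10, 1000)

# A prediction-level metric sees a tiny average gap...
print(f"mean score gap: {scores_a.mean() - scores_b.mean():.3f}")

# ...but if the scores drive a decision, say interviewing the top 100
# of all 2,000 applicants, the outcome is far more skewed.
scores = np.concatenate([scores_a, scores_b])
is_b = np.array([False] * 1000 + [True] * 1000)
top_k = np.argsort(scores)[-100:]
print(f"share of interviews going to group B: {is_b[top_k].mean():.2f}")
```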
Google's Trail of Crumbs

Matt Stoller published my essay on Google’s decision to abandon its Privacy Sandbox Initiative in his Big newsletter:
For more technical background on this, see Minjun’s paper: Evaluating Google’s Protected Audience Protocol in PETS 2024.
Technology: US authorities survey AI ecosystem through antitrust lens
I’m quoted in this article for the International Bar Association:

Technology: US authorities survey AI ecosystem through antitrust lens
William Roberts, IBA US Correspondent
Friday 2 August 2024

Antitrust authorities in the US are targeting the new frontier of artificial intelligence (AI) for potential enforcement action. … Jonathan Kanter, Assistant Attorney General for the Antitrust Division of the DoJ, warns that the government sees ‘structures and trends in AI that should give us pause’.
John Guttag Birthday Celebration

Maggie Makar organized a celebration for the 75th birthday of my PhD advisor, John Guttag.
I wasn’t able to attend in person, unfortunately, but the occasion provided an opportunity to create a poster that looks back on what I’ve done since I started working with John over 30 years ago.
Congratulations, Dr. Suri!
Congratulations to Anshuman Suri for successfully defending his PhD thesis!

Tianhao Wang, Dr. Anshuman Suri, Nando Fioretto, and Cong Shen; on screen: David Evans and Giuseppe Ateniese

Inference Privacy in Machine Learning

Using machine learning models comes with the risk of leaking information about the data used in their training and deployment. This leakage can expose sensitive information about properties of the underlying data distribution, data from participating users, or even individual records in the training data.
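For a flavor of the simplest such attack, here is a minimal Python sketch of the classic loss-threshold membership-inference baseline (a standard baseline from the literature, not Anshuman's specific techniques): models typically achieve lower loss on their training data, so low-loss examples are guessed to be members.

```python
import numpy as np

def guess_members(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Guess that examples with loss below the threshold were in the
    training set. A standard baseline, not a method from the thesis."""
    return losses < threshold

# Hypothetical per-example losses; in practice these come from
# querying the target model on candidate records.
member_losses = np.array([0.04, 0.11, 0.07, 0.19])
nonmember_losses = np.array([0.82, 1.35, 0.66, 1.10])

losses = np.concatenate([member_losses, nonmember_losses])
print(guess_members(losses, threshold=0.5))
# [ True  True  True  True False False False False]
```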
Graduation 2024

Congratulations to our two PhD graduates!
Suya will be joining the University of Tennessee, Knoxville as an Assistant Professor.
Josie will be building a medical analytics research group at Dexcom.
SaTML Talk: SoK: Pitfalls in Evaluating Black-Box Attacks
Anshuman Suri’s talk at the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) is now available:
See the earlier blog post for more on the work, and the paper at https://arxiv.org/abs/2310.17534.