Security and Privacy Research at the University of Virginia

Our research seeks to empower individuals and organizations to control how their data is used. We use techniques from cryptography, programming languages, machine learning, operating systems, and other areas to both understand and improve the privacy and security of computing as practiced today, and as envisioned in the future. A major current focus is on adversarial machine learning.

SRG Leap Day Lunch (29 February 2024)

We are part of the NSF AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION), which seeks to change the way mission-critical systems are protected against sophisticated security threats. The institute is a collaboration with UC Santa Barbara (lead), Purdue, UC Berkeley, U Chicago, Georgia Tech, NSU, Rutgers, UIUC, UIC, UW, and WWU.
We are members of the NSF SaTC Frontier Center for Trustworthy Machine Learning (CTML), focused on developing a rigorous understanding of machine learning vulnerabilities and producing tools, metrics, and methods to mitigate them. The center is a collaboration with the University of Wisconsin (lead), UC Berkeley, UC San Diego, and Stanford.
Recent Posts

Meet Professor Suya!


Poisoning LLMs

I’m quoted in this story by Rob Lemos about poisoning code models (the CodeBreaker paper in USENIX Security 2024 by Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, and Yuan Hong), which considers a threat similar to the one we studied in our TrojanPuzzle work:

Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code
Dark Reading, 20 August 2024

CodeBreaker uses code transformations to create vulnerable code that continues to function as expected, but that will not be detected by major static analysis security testing. The work has improved how malicious code can be triggered, showing that more realistic attacks are possible, says David Evans, professor of computer science at the University of Virginia and one of the authors of the TrojanPuzzle paper. ... Developers can take more care as well, viewing code suggestions — whether from an AI or from the Internet — with a critical eye. In addition, developers need to know how to construct prompts to produce more secure code.

Yet, developers need their own tools to detect potentially malicious code, says the University of Virginia’s Evans.

“At most mature software development companies — before code makes it into a production system there is a code review — involving both humans and analysis tools,” he says. “This is the best hope for catching vulnerabilities, whether they are introduced by humans making mistakes, deliberately inserted by malicious humans, or the result of code suggestions from poisoned AI assistants.”

Full Article


The Mismeasure of Man and Models

Evaluating Allocational Harms in Large Language Models

Blog post written by Hannah Chen

Our work considers allocational harms that arise when model predictions are used to distribute scarce resources or opportunities.

Current Bias Metrics Do Not Reliably Reflect Allocation Disparities

Several methods have been proposed to audit large language models (LLMs) for bias when they are used in critical decision-making, such as resume screening for hiring. Yet, these methods focus on predictions, without considering how the predictions are used to make decisions. In many settings, making decisions involves prioritizing candidates under limited resources. We find that prediction-based evaluation methods, which measure bias as the average performance gap (δ) in prediction outcomes, do not reliably reflect disparities in allocation decision outcomes.

Bias scores per group, computed with respect to White Male. δ: average performance gap, measured as the average score difference. ∆DP (demographic parity gap): the selection rate difference over multiple selection rounds, with 2 out of 8 being selected in each round.
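To make the distinction concrete, here is a minimal sketch (illustrative only; the scores, group labels, and pool sizes are invented and this is not the paper's experimental setup) of how a small average score gap δ can coexist with a large demographic parity gap ∆DP once only the top 2 of 8 pooled candidates are selected in each round:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_score_gap(scores_a, scores_b):
    """delta: difference in mean prediction scores between the two groups."""
    return float(np.mean(scores_a) - np.mean(scores_b))

def demographic_parity_gap(scores_a, scores_b, k=2, rounds=1000):
    """Delta_DP: difference in selection rates when only the top-k of a
    pooled set of 8 candidates (4 sampled from each group) is selected per round."""
    picks_a = picks_b = 0
    for _ in range(rounds):
        a = rng.choice(scores_a, 4, replace=False)
        b = rng.choice(scores_b, 4, replace=False)
        pool = np.concatenate([a, b])
        top = np.argsort(pool)[-k:]       # indices of the k highest-scoring candidates
        picks_a += int(np.sum(top < 4))   # first 4 pool slots belong to group A
        picks_b += int(np.sum(top >= 4))
    per_group = rounds * 4
    return picks_a / per_group - picks_b / per_group

# Hypothetical scores: the average gap is small, but group A's scores cluster
# just above most of group B's, so A captures nearly all of the top-2 slots.
scores_a = np.array([0.62, 0.61, 0.60, 0.59, 0.58, 0.57, 0.56, 0.55])
scores_b = np.array([0.80, 0.54, 0.53, 0.52, 0.51, 0.50, 0.49, 0.48])

print(f"delta    = {avg_score_gap(scores_a, scores_b):.3f}")            # small gap
print(f"Delta_DP = {demographic_parity_gap(scores_a, scores_b):.3f}")   # much larger gap
```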

Measuring Allocational Harms

We introduce the Rank-Allocational-Based Bias Index (RABBI), a model-agnostic bias metric that measures allocational bias using scores derived from model outputs. We implement it with scoring methods for both pointwise and pairwise ranking with LLMs. Given pairs of candidates from group A and group B, RABBI is computed as the difference between the proportion of pairs where A is preferred over B and the proportion where B is preferred over A. Our approach is inspired by the rank-biserial correlation (Cureton, 1956), which measures whether group membership is correlated with being ranked higher or lower.

Rank-Allocational-Based Bias Index (RABBI)
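A minimal sketch of this pairwise computation (the function and variable names here are mine for illustration; the released code linked below is the authoritative implementation):

```python
from itertools import product
from typing import Sequence

def rabbi(scores_a: Sequence[float], scores_b: Sequence[float]) -> float:
    """Pairwise RABBI over all (A, B) candidate pairs.

    Returns a value in [-1, 1]: positive means candidates from group A are
    preferred over group B more often than the reverse; 0 means no net preference.
    """
    pairs = list(product(scores_a, scores_b))
    a_preferred = sum(1 for a, b in pairs if a > b)
    b_preferred = sum(1 for a, b in pairs if b > a)
    return (a_preferred - b_preferred) / len(pairs)

# Hypothetical relevance scores produced by an LLM-based ranker:
print(rabbi([0.9, 0.7, 0.6], [0.8, 0.5, 0.4]))  # positive: group A tends to rank higher
```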

Predictive Validity

We compare bias scores measured with RABBI and with traditional bias metrics to allocation gaps (∆DP and ∆EO) measured in simulated candidate selection outcomes. RABBI shows a strong positive correlation with the allocation gaps, whereas the other metrics correlate inconsistently. In some cases, the average performance gap δ and the distribution-based metrics (JSD and EMD) have close to zero or even negative correlations with the allocation gaps. This shows that current bias metrics do not reliably predict potential allocational harms.

Correlation between bias metrics and allocation gaps on the resume screening task. Each point represents a score measured between White Male and each of the other groups for a job position. ∆DP: demographic parity gap, ∆EO: equal opportunity gap.

Metric Utility for Model Selection

We evaluate the utility of a metric for model selection by comparing the model fairness ranking derived from each bias metric to an ideal ranking. RABBI demonstrates the closest resemblance to the ideal rankings based on the allocation gaps, as measured by the average normalized discounted cumulative gain (NDCG) at rank cutoff N.

Average NDCG@N in ranking model fairness compared to ideal rankings based on ∆DP.
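For readers unfamiliar with NDCG@N in this context, the sketch below shows one way to score how well a metric's fairness ranking of models matches the ideal ranking implied by ∆DP (the relevance assignment and example numbers are mine, not necessarily the paper's exact setup):

```python
import numpy as np

def dcg_at_n(relevances, n):
    """Discounted cumulative gain over the top-n positions of a ranking."""
    rel = np.asarray(relevances[:n], dtype=float)
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum(rel / discounts))

def ndcg_at_n(predicted_order, true_gaps, n):
    """NDCG@N for a fairness ranking of models.

    predicted_order: model indices ranked by a bias metric (fairest first).
    true_gaps: allocation gap (e.g., Delta_DP) per model; smaller is fairer.
    A model's relevance is its position from the bottom of the true ranking,
    so the truly fairest model carries the highest relevance.
    """
    true_order = np.argsort(true_gaps)                  # fairest model first
    relevance = {m: len(true_gaps) - rank
                 for rank, m in enumerate(true_order)}
    predicted_rel = [relevance[m] for m in predicted_order]
    ideal_rel = sorted(relevance.values(), reverse=True)
    return dcg_at_n(predicted_rel, n) / dcg_at_n(ideal_rel, n)

# Hypothetical example: 4 models, ranked by a bias metric vs. their true Delta_DP.
true_gaps = [0.05, 0.30, 0.10, 0.20]   # model 0 is actually the fairest
metric_ranking = [2, 0, 3, 1]          # the metric puts model 2 first
print(ndcg_at_n(metric_ranking, true_gaps, n=3))
```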

We further compare the fairness rankings of models produced by different metrics on the resume screening task. RABBI’s ranking aligns more closely with the ranking based on the allocation gap, whereas the other metrics tend to rank more biased models higher. This demonstrates RABBI’s effectiveness for selecting models that reduce potential harm.

Fairness ranking of models on the resume screening task. The true rank is based on ∆DP. (Left: overall ranking, Right: ranking per job position)

Conclusion

Our analysis reveals that commonly used bias metrics based on average performance gaps and distribution distances are insufficient for assessing allocational harms. We propose an allocational bias measure, RABBI, which consistently demonstrates stronger correlation with group disparities in allocation outcomes. Our results underscore the importance of considering how models will be used in deployment when developing reliable auditing methods.

Paper: Hannah Chen, Yangfeng Ji, David Evans. The Mismeasure of Man and Models: Evaluating Allocational Harms in Large Language Models. arXiv preprint, 2 August 2024.

Code: https://github.com/hannahxchen/llm-allocational-harm-eval


Google's Trail of Crumbs

Matt Stoller published my essay on Google’s decision to abandon its Privacy Sandbox Initiative in his Big newsletter:

Google's Trail of Crumbs by Matt Stoller

Google is too big to get rid of cookies. Even when it wants to protect users, it can't.


For more technical background on this, see Minjun’s paper: Evaluating Google’s Protected Audience Protocol in PETS 2024.


Technology: US authorities survey AI ecosystem through antitrust lens

I’m quoted in this article for the International Bar Association:

Technology: US authorities survey AI ecosystem through antitrust lens
William Roberts, IBA US Correspondent
Friday 2 August 2024

Antitrust authorities in the US are targeting the new frontier of artificial intelligence (AI) for potential enforcement action.

Jonathan Kanter, Assistant Attorney General for the Antitrust Division of the DoJ, warns that the government sees ‘structures and trends in AI that should give us pause’. He says that AI relies on massive amounts of data and computing power, which can give already dominant companies a substantial advantage. ‘Powerful network and feedback effects’ may enable dominant companies to control these new markets, Kanter adds.

Part of the struggle for policymakers is that even defining AI is something of an unfathomable moving target, according to computer scientists. ‘What’s different with AI is, this is making computers do things that no human understands how to do’, says David Evans, a professor of computer science at the University of Virginia who conducts research into machine learning. ‘Instead of humans working either individually or as a group to write a program, AI works by training an algorithm’, he explains.

New foundational AI systems such as ChatGPT’s large language model are based on hundreds of billions of parameters and trained with everything on the internet and terabytes of material from other sources. Newer models are being trained in ways that are ten or 100 times more efficient. ‘Once you scale it up to the size of models that you’re training, and the amount of data that you’re using to train them, it is very hard to predict, or understand, or have any constraints on what they might do’, Evans says.

If anything, that opacity is encouraging the DoJ’s Antitrust Division to take a close look at the AI ecosystem. ‘Over and over again, we see that antitrust enforcement in moments of industrial evolution has the opportunity to spur innovation in its wake, opening the door to new competitors, allowing for the development of different business models and new economies’, Kanter says.

Full Article