Security and Privacy Research at the University of Virginia

Our research seeks to empower individuals and organizations to control how their data is used. We use techniques from cryptography, programming languages, machine learning, operating systems, and other areas to both understand and improve the privacy and security of computing as practiced today, and as envisioned in the future. A major current focus is on adversarial machine learning.

SRG Leap Day Lunch (29 February 2024)

We are part of the NSF AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION), which seeks to change the way mission-critical systems are protected against sophisticated security threats. This is a collaboration with UC Santa Barbara (lead), Purdue, UC Berkeley, U Chicago, Georgia Tech, NSU, Rutgers, UIUC, UIC, UW, and WWU.
We are members of the NSF SaTC Frontier Center for Trustworthy Machine Learning (CTML), focused on developing a rigorous understanding of machine learning vulnerabilities and producing tools, metrics, and methods to mitigate them. This is a collaboration with the University of Wisconsin (lead), UC Berkeley, UC San Diego, and Stanford.
Recent Posts

Can we explain AI model outputs?

I gave a short talk on explainability at the Virginia Journal of Social Policy and the Law Symposium on Artificial Intelligence at UVA Law School, 21 February 2025.

Can we explain AI model outputs? (PDF)


Une expérience immersive et enrichissante ("An immersive and enriching experience")

I had a chance to talk (over Zoom) about visual cryptography to students in an English class at a French high school in Spain!


School Website Post


Reassessing EMNLP 2024’s Best Paper: Does Divergence-Based Calibration for Membership Inference Attacks Hold Up?

Anshuman Suri and Pratyush Maini wrote a blog post about the EMNLP 2024 best paper award winner: Reassessing EMNLP 2024’s Best Paper: Does Divergence-Based Calibration for Membership Inference Attacks Hold Up?.

As we explored in Do Membership Inference Attacks Work on Large Language Models?, to test a membership inference attack it is essential to have a candidate set where the members and non-members are drawn from the same distribution. If the distributions differ, an attack's ability to distinguish members from non-members is indicative of distribution inference, not necessarily membership inference.

The post describes experiments showing that the PatentMIA used in the EMNLP paper provides a false measure of membership inference.
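To see why a distribution mismatch confounds the evaluation, consider the following sketch (a hypothetical illustration with synthetic data, not the experiments from the post): when "member" and "non-member" candidates come from different distributions, even a blind baseline that never queries the target model can beat chance.

```python
import random

random.seed(0)

# Hypothetical setup: members and non-members drawn from *different*
# distributions (here, a simple shift in a surface feature such as
# document length). No target model is involved at all.
members = [random.gauss(120, 20) for _ in range(1000)]
non_members = [random.gauss(100, 20) for _ in range(1000)]

def auc(pos, neg):
    """Probability that a random 'member' scores above a random 'non-member'."""
    wins = ties = 0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# A "blind" attack that scores each candidate by this feature alone
# achieves AUC well above the 0.5 chance level, so any model-based
# attack evaluated on such a split gets credit for detecting the
# distribution shift, not membership.
print(round(auc(members, non_members), 2))
```

A sound evaluation would drive this blind baseline's AUC to 0.5 by sampling members and non-members from the same distribution, so that any remaining signal must come from the model itself.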


Common Way To Test for Leaks in Large Language Models May Be Flawed

UVA News has an article on our LLM membership inference work: Common Way To Test for Leaks in Large Language Models May Be Flawed: UVA Researchers Collaborated To Study the Effectiveness of Membership Inference Attacks, Eric Williamson, 13 November 2024.


Meet Professor Suya!