Security and Privacy Research at the University of Virginia

Our research seeks to empower individuals and organizations to control how their data is used. We use techniques from cryptography, programming languages, machine learning, operating systems, and other areas to both understand and improve the privacy and security of computing as practiced today, and as envisioned in the future. A major current focus is on adversarial machine learning.

SRG Leap Day Lunch (29 February 2024)

We are part of the NSF AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION), which seeks to change the way mission-critical systems are protected against sophisticated security threats. We collaborate with UC Santa Barbara (lead), Purdue, UC Berkeley, U Chicago, Georgia Tech, NSU, Rutgers, UIUC, UIC, UW, and WWU.
We are members of the NSF SaTC Frontier Center for Trustworthy Machine Learning (CTML), which is focused on developing a rigorous understanding of machine learning vulnerabilities and producing tools, metrics, and methods to mitigate them. We collaborate with the University of Wisconsin (lead), UC Berkeley, UC San Diego, and Stanford.
Recent Posts

New Classes Explore Promise and Predicaments of Artificial Intelligence

The Docket (UVA Law News) has an article about the AI Law class I’m helping Tom Nachbar teach:

New Classes Explore Promise and Predicaments of Artificial Intelligence
Attorneys-in-Training Learn About Prompts, Policies and Governance
The Docket, 17 March 2025

Nachbar teamed up with David Evans, a professor of computer science at UVA, to teach the course, which, he said, is “a big part of what makes this class work.”

“This course takes a much more technical approach than typical law school courses do. We have the students actually going in, creating their own chatbots — they’re looking at the technology underlying generative AI,” Nachbar said. Better understanding how AI actually works, Nachbar said, is key in training lawyers to handle AI-related litigation in the future.

“I want my students to have a solid understanding about what’s actually happening under the hood, as it were, so that when they confront a case, they know what kinds of questions to start asking,” he said.

Full Article

Tom and I will co-teach a jointly-listed Law and Computer Science AI Law class in the fall.


Is Taiwan a Country?

I gave a short talk at an NSF workshop to spark research collaborations between researchers in Taiwan and the United States. My talk was about work Hannah Cyberey is leading on steering the internal representations of LLMs:

Steering around Censorship
Taiwan-US Cybersecurity Workshop
Arlington, Virginia
3 March 2025


Can we explain AI model outputs?

I gave a short talk on explainability at the Virginia Journal of Social Policy and the Law Symposium on Artificial Intelligence at UVA Law School, 21 February 2025.

Can we explain AI model outputs? (PDF)

There’s an article about the event in the Virginia Law Weekly: Law School Hosts LawTech Events, 26 February 2025.


Une expérience immersive et enrichissante (An immersive and enriching experience)

I had a chance to talk (over Zoom) about visual cryptography with students in an English class at a French high school in Spain!


School Website Post


Reassessing EMNLP 2024’s Best Paper: Does Divergence-Based Calibration for Membership Inference Attacks Hold Up?

Anshuman Suri and Pratyush Maini wrote a blog post about the EMNLP 2024 best paper award winner: Reassessing EMNLP 2024’s Best Paper: Does Divergence-Based Calibration for Membership Inference Attacks Hold Up?.

As we explored in Do Membership Inference Attacks Work on Large Language Models?, to test a membership inference attack it is essential to have a candidate set where the members and non-members are drawn from the same distribution. If the distributions differ, an attack’s ability to distinguish members from non-members is indicative of distribution inference, not necessarily membership inference.
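The methodological point is easiest to see as a small sketch. The following hypothetical Python snippet (not code from either paper) shows one way to build an evaluation where members and non-members come from the same distribution: split a single candidate pool, train only on the member half, and check whether the attack's scores separate the two groups. `train_model` and `attack_score` are placeholders standing in for any training procedure and membership scoring method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def evaluate_mia(candidate_pool, train_model, attack_score, n_members=1000):
    """Split one pool into members and non-members (same distribution),
    train on the members only, and measure how well the attack's scores
    separate the two groups."""
    # Assumes len(candidate_pool) >= 2 * n_members.
    idx = rng.permutation(len(candidate_pool))
    members = [candidate_pool[i] for i in idx[:n_members]]
    non_members = [candidate_pool[i] for i in idx[n_members:2 * n_members]]

    # Non-members are held out of training but drawn from the same pool,
    # so any separation reflects membership, not a distribution shift.
    model = train_model(members)

    scores = [attack_score(model, x) for x in members + non_members]
    labels = [1] * len(members) + [0] * len(non_members)

    # AUC near 0.5 means the attack cannot tell members from non-members.
    return roc_auc_score(labels, scores)
```

If instead the non-members were drawn from a different source (for example, documents published after the training cutoff), a high AUC could simply reflect that distribution difference rather than true membership leakage.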

The post describes experiments showing that the PatentMIA benchmark used in the EMNLP paper provides a false measure of membership inference.