Our research seeks to empower individuals and organizations to control
how their data is used. We use techniques from cryptography,
programming languages, machine learning, operating systems, and other
areas to both understand and improve the privacy and security of
computing as practiced today, and as envisioned in the future. A major
current focus is on adversarial machine learning.
The Docket (UVA Law News) has an article about the AI Law class I’m helping Tom Nachbar teach:
New Classes Explore Promise and Predicaments of Artificial Intelligence
Attorneys-in-Training Learn About Prompts, Policies and Governance
The Docket, 17 March 2025
Nachbar teamed up with David Evans, a professor of computer science at UVA, to teach the course, which, he said, is “a big part of what makes this class work.”
“This course takes a much more technical approach than typical law school courses do. We have the students actually going in, creating their own chatbots — they’re looking at the technology underlying generative AI,” Nachbar said. Better understanding how AI actually works, Nachbar said, is key in training lawyers to handle AI-related litigation in the future.
“I want my students to have a solid understanding about what’s actually happening under the hood, as it were, so that when they confront a case, they know what kinds of questions to start asking,” he said.
Full Article
Tom and I will co-teach a jointly listed Law and Computer Science AI Law class in the fall.
I gave a short talk at an NSF workshop to spark research collaborations between researchers in Taiwan and the United States. My talk was about work Hannah Cyberey is leading on steering the internal representations of LLMs:
Steering around Censorship
Taiwan-US Cybersecurity Workshop
Arlington, Virginia
3 March 2025
I gave a short talk on explainability at the Virginia Journal of Social Policy and the Law Symposium on Artificial Intelligence at UVA Law School, 21 February 2025.
Can we explain AI model outputs? (PDF)
There’s an article about the event in the Virginia Law Weekly:
Law School Hosts LawTech Events, 26 February 2025.
I had a chance to talk (over Zoom) about visual cryptography to students in an English class in a French high school in Spain!

School Website Post
Anshuman Suri and Pratyush Maini wrote a blog about the EMNLP 2024 best paper award winner: Reassessing EMNLP 2024’s Best Paper: Does Divergence-Based Calibration for Membership Inference Attacks Hold Up?.
As we explored in Do Membership Inference Attacks Work on Large Language Models?, testing a membership inference attack requires a candidate set in which the members and non-members are drawn from the same distribution. If the distributions differ, an attack's ability to distinguish members from non-members is evidence of distribution inference, not necessarily membership inference.
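To see why this matters, here is a hypothetical toy illustration (not from the paper or the blog post): a "membership" score that ignores the model entirely and thresholds on a single distribution-level feature (say, document length). When non-members come from a shifted distribution, this trivial attack looks like a strong membership inference attack; when members and non-members come from the same distribution, it performs no better than chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature values (e.g., normalized document length).
members = rng.normal(0.0, 1.0, 5000)            # members
nonmembers_same = rng.normal(0.0, 1.0, 5000)    # non-members, same distribution
nonmembers_shift = rng.normal(1.0, 1.0, 5000)   # non-members, shifted distribution

def auc(pos, neg):
    """AUC of an 'attack' that scores lower feature values as more member-like,
    computed via the Mann-Whitney U rank statistic."""
    scores = np.concatenate([-pos, -neg])  # higher score => more "member-like"
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(pos), len(neg)
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Same distribution: the feature carries no membership signal (AUC ~ 0.5).
# Shifted distribution: the same model-free "attack" appears to succeed.
print(f"AUC, same distribution:    {auc(members, nonmembers_same):.2f}")
print(f"AUC, shifted distribution: {auc(members, nonmembers_shift):.2f}")
```

The shifted-distribution AUC is high even though the score never touches the target model, which is exactly the confound: an evaluation with mismatched member/non-member distributions can credit an attack with membership inference it is not actually performing.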
The post describes experiments showing that the PatentMIA benchmark used in the EMNLP paper provides a misleading measure of membership inference.