I gave a short talk on explainability at the Virginia Journal of Social Policy and the Law Symposium on Artificial Intelligence at UVA Law School, 21 February 2025.
Can we explain AI model outputs? (PDF)
There’s an article about the event in the Virginia Law Weekly:
Law School Hosts LawTech Events, 26 February 2025.
I had a chance to talk (over Zoom) about visual cryptography with students in an English class at a French high school in Spain!

School Website Post
Anshuman Suri and Pratyush Maini wrote a blog post about the EMNLP 2024 best paper award winner: Reassessing EMNLP 2024’s Best Paper: Does Divergence-Based Calibration for Membership Inference Attacks Hold Up?.
As we explored in Do Membership Inference Attacks Work on Large Language Models?, to test a membership inference attack it is essential to have a candidate set where the members and non-members are drawn from the same distribution. If the distributions differ, an attack’s ability to distinguish members from non-members is indicative of distribution inference, not necessarily membership inference.
The post describes experiments showing that the PatentMIA used in the EMNLP paper provides a false measure of membership inference.
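To see why a distribution gap undermines such evaluations, here is a minimal, hypothetical sketch (not the post’s actual experiment): when members and non-members come from shifted distributions, even a "blind" attack that never queries the target model can score well above chance, so a high AUC on such a benchmark does not demonstrate membership inference.

```python
import random

random.seed(0)

# Assumed setup for illustration: members and non-members differ by a
# distribution shift (e.g., documents written before vs. after a
# training cutoff), modeled here as shifted Gaussians over some feature.
members = [random.gauss(0.0, 1.0) for _ in range(1000)]
non_members = [random.gauss(1.0, 1.0) for _ in range(1000)]

def auc(pos, neg):
    """AUC of the rule 'smaller feature value => member', computed as
    the fraction of (member, non-member) pairs ranked correctly."""
    wins = sum((p < n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# The "attack" uses only the raw feature -- no model access at all --
# yet it separates members from non-members well above chance (0.5).
blind_auc = auc(members, non_members)
print(f"Blind (model-free) attack AUC: {blind_auc:.2f}")
```

A sound benchmark would make this blind baseline perform at chance; any attack evaluated on a candidate set where it does not is being credited, at least in part, for distribution inference.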