Security and Privacy Research at the University of Virginia

Our research seeks to empower individuals and organizations to control how their data is used. We use techniques from cryptography, programming languages, machine learning, operating systems, and other areas to both understand and improve the privacy and security of computing as practiced today, and as envisioned in the future. A major current focus is on adversarial machine learning.

Everyone is welcome at our research group meetings. To get announcements, join our Slack Group (anyone with a @virginia.edu email address can join on their own, or email me to request an invitation).

Recent Posts

Algorithmic Accountability and the Law

Brink News (a publication of The Atlantic) published an essay I co-authored with Tom Nachbar (UVA Law School) on how the law views algorithmic accountability and the limits of what measures are permitted under the law to adjust algorithms to counter inequity:

Algorithms Are Running Foul of Anti-Discrimination Law
Tom Nachbar and David Evans
Brink, 7 December 2020



Computing systems that are found to discriminate on prohibited bases, such as race or sex, are no longer surprising. We’ve seen hiring systems that discriminate against women, image systems that are prone to cropping out dark-colored faces, and credit scoring systems that discriminate against minorities.

Anyone considering deploying an algorithm that impacts humans needs to understand the potential for such algorithms to discriminate. But what to do about it is much less clear.

The Difficulty of Mandating Fairness

There are no simple ways to ensure that an algorithm doesn’t discriminate, and many of the proposed fixes run the risk of violating anti-discrimination law. In particular, approaches that seek to optimize computing systems for various notions of fairness, especially those concerned with the distribution of outcomes along legally protected criteria such as race or sex, are in considerable tension with U.S. anti-discrimination law.

Although many arguments about discriminatory algorithms are premised on unfair outcomes, such notions have limited relevance under U.S. law.

For the most part, U.S. law lacks a notion of fairness.

Legal rules generally call upon more specific notions than fairness, even if they are connected to fairness. Thus, in the context of illegal employment discrimination (which we will use as our motivating example), instead of mandating fairness, U.S. law generally prohibits conduct that discriminates on the basis of protected characteristics, like race and sex.

Process and Intent Matter

Moreover, the law does not generally regulate behavior based on outcomes; what matters is the intent and process that led to the outcome. In the case of U.S. employment discrimination law, those rules of intent and process are contained in two types of protections against discrimination: disparate treatment and disparate impact.

An employer is liable for disparate treatment when there is either explicit or intentional discrimination. Disparate treatment protections prohibit the use of overt racial classifications, but also impose liability for hidden but intentional discrimination, such as cases where a victim can show they are a member of a racial minority and were qualified but were rejected, and the employer cannot provide any nondiscriminatory justification for the decision, as in McDonnell Douglas Corp. v. Green.

Under disparate impact, an employer following a process that has a statistically observable negative impact on a protected group is not necessarily liable. Instead, the disparate outcomes transfer the burden to the employer to show that the decision-making process is justified based on valid criteria, as in Griggs v. Duke Power Co.

If you think those two approaches sound confusingly similar, you’re not alone. Disparate impact liability frequently mirrors disparate treatment liability in that the disparate outcome itself is not enough to establish a violation.

The role of disparate outcomes is to shift the burden to the employer to provide a non-discriminatory reason for its decision-making process. What matters legally is not so much the outcome, as the intention and process behind it.

Correcting for past racial disparities will require a more sophisticated and deep-seated approach than simply altering algorithms.

The Law Doesn’t Care Whether Decisions Are Made by Algorithms or Humans

When outcomes are based on the output of some algorithm, the employer still needs to justify that the decisions it makes are based on valid criteria. The law doesn’t care whether decisions are made by algorithms or by human decision-makers; what matters is the reason for the decision. It is up to the humans responsible to explain that reasoning.

Although many have argued for increased algorithmic transparency, even the most transparent algorithms cannot really explain why they made the decisions they did. This presents a major challenge for discrimination law because, in discrimination law, the “why” matters.

Algorithmically generated explanations can help but, by themselves, cannot answer the legal “why” question. Even an interpretable model that appears to have no discriminatory intent is not necessarily non-discriminatory. The rules it has learned could have been influenced by selecting training data that disadvantages a particular group, and the features it uses could be determined in ways that are inherently discriminatory.

To satisfy discrimination law, it is the process and the intent that matter, and explanations an algorithm itself produces are insufficient to establish that intent. Indeed, explanations of the intent of algorithms should be viewed with the same skepticism that we have when humans attempt to describe their own decision-making processes. Just because an explanation is provided does not mean it should be believed.

Optimizing an Algorithm for Fairness May Be Discriminatory

This is a particular problem for the methods used by systems designers, who frequently seek to optimize for specific outcomes. An optimization approach does not fit well with legal requirements. When algorithm designers focus on fairness as a property to be optimized, they ignore the legal requirements of anti-discrimination.

Discrimination law does not operate through optimization, because discrimination (or anti-discrimination) is not something to be optimized. Anti-discrimination is a side constraint on a decision-making process, not its principal goal (e.g., to find good employees).

Systems should be designed to optimize for their principal goal, with the constraint of avoiding discrimination (in intent or process) while doing so. Attempts to produce outcomes that seem less discriminatory might themselves constitute illegal discrimination. The 2009 U.S. Supreme Court case of Ricci v. DeStefano provides a prime example of that tension. In the case, the New Haven Fire Department used an examination to determine which firefighters should be promoted to lieutenant. When that test produced a result that was racially skewed compared to the population of firefighters, the city (in part because it was concerned about disparate impact liability) invalidated the results of the test. White firefighters sued, claiming the city’s response in rejecting the test was itself disparate treatment, since the motivation for rejecting the test was to produce a different racial outcome, and the Supreme Court agreed.

Algorithms Alone Cannot Save Us

Although reducing racial disparity is a laudable goal, the law substantially limits the discretion of both employers and system designers in engineering for equitable outcomes. Racially disparate outcomes may seem unfair, and they might even be evidence of underlying illegal discrimination, but the law neither deputizes systems designers to operationalize their notions of what are racially fair outcomes, nor immunizes them for acts of discrimination undertaken in order to correct racial disparities.

Correcting for past racial disparities will require a more sophisticated and deep-seated approach than simply altering algorithms to produce outcomes optimized toward some fairness criterion.

Algorithms alone are neither the source nor the solution to our problems. Solving them will require fundamental change, and the real question is whether we as a society — not just our algorithms — are prepared to do that work.


Microsoft Security Data Science Colloquium: Inference Privacy in Theory and Practice

Here are the slides for my talk at the Microsoft Security Data Science Colloquium:
When Models Learn Too Much: Inference Privacy in Theory and Practice [PDF]

The talk is mostly about Bargav Jayaraman’s work (with Katherine Knipmeyer, Lingxiao Wang, and Quanquan Gu) on evaluating privacy.


Fact-checking Donald Trump’s tweet firing Christopher Krebs

I was a source for the “Pants on Fire!” fact check by PolitiFact on Donald Trump’s tweet that fired Christopher Krebs claiming that “The recent statement by Chris Krebs on the security of the 2020 Election was highly inaccurate, in that there were massive improprieties and fraud - including dead people voting, Poll Watchers not allowed into polling locations, “glitches” in the voting machines which changed…”

PolitiFact: Fact-checking Donald Trump’s tweet firing Christopher Krebs, 18 November 2020

David Evans, a professor of computer science at the University of Virginia, told PolitiFact that he signed the joint statement because “there is no substance … no credible specifics, and no evidence to support any of the claims” of a rigged election.

“It is always difficult to prove something did not occur, which is why people who work in security are so careful to avoid strong statements,” Evans said. “But in this case, because of the size of the margin, all of the security measures that were in place and worked as intended, and the lack of any evidence of anything fraudulent happening, one can be highly confident that there is no credible possibility that the results of the election are invalid.”

The experts’ statement (for which I am one of 59 signers) is here: Scientists say no credible evidence of computer fraud in the 2020 election outcome, but policymakers must work with experts to improve confidence.

It was covered in this article: New York Times, Election Security Experts Contradict Trump’s Voting Claims, 16 November 2020.


Voting Security

I was interviewed for a local news story by Daniel Grimes on election security: UVA cybersecurity expert: Virginia is one of the safer states to cast a ballot, NBC 29 News, 21 October 2020.


Merlin, Morgan, and the Importance of Thresholds and Priors

Post by Katherine Knipmeyer

Machine learning poses a substantial risk that adversaries will be able to discover information that the model does not intend to reveal. One set of methods by which adversaries can learn this sensitive information, known broadly as membership inference attacks, predicts whether or not a query record belongs to the training set. A basic membership inference attack involves an attacker with a given record and black-box access to a model who tries to determine whether that record was a member of the model’s training set.

Unlike much of the existing research on membership inference, though, these particular results focus on what are considered “realistic assumptions,” including conditions with skewed priors (wherein members only make up a small fraction of the candidate pool) and conditions with adversaries that select accuracy-improving inference thresholds based on specific attack goals. These new assumptions help to answer the question of how differential privacy can be implemented to provide meaningful privacy guarantees in practice.

Threshold Selection

In order to classify a record as either a member or a non-member, there must be a threshold that converts a real number output from a test into a Boolean. We develop a procedure to select a threshold, φ, that allows the adversary to achieve as much privacy leakage as possible while staying beneath a maximum false positive rate, α.

This selection procedure can be applied to any membership inference attack, including Yeom’s attack. The original version of this attack classifies a record as a member if its per-instance-loss is less than the expected training loss, whereas this new approach selects members based on a threshold φ, which can be set to target a particular false positive rate.
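
As a concrete illustration, here is a minimal sketch of this thresholding idea, assuming the adversary can compute per-instance losses on a sample of records drawn from the same distribution that are known not to be training members (the function names and parameters are illustrative, not the code from our repository):

```python
import numpy as np

def select_threshold(nonmember_losses, alpha):
    """Choose a loss threshold phi so that at most an alpha fraction of
    known non-members fall below it (i.e., false positive rate <= alpha)."""
    # Records with loss below phi will be classified as members, so phi is
    # (approximately) the alpha-quantile of the non-member loss distribution.
    return np.quantile(nonmember_losses, alpha)

def yeom_with_threshold(per_instance_loss, phi):
    """Thresholded variant of Yeom's attack: predict 'member' when the
    record's loss is below phi (instead of the expected training loss)."""
    return per_instance_loss < phi
```

Lowering α makes the attack more conservative: it identifies fewer members, but the members it does identify are more likely to be correct.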

The Merlin Attack

In addition to this new selection procedure, we introduce a new attack known as Merlin, which stands for MEasuring Relative Loss In Neighborhood. Instead of per-instance-loss, this attack uses the direction of change of per-instance-loss when the record is slightly perturbed with noise. Merlin operates based on the intuition that, as a result of overfitting, member records are more likely to be near local minima than non-member records. This suggests that for members, loss is more likely to increase at perturbed points near the original, whereas it is equally likely to increase or decrease for non-members. For each record, a small amount of random Gaussian noise is added and the change of loss direction is recorded. This process is repeated multiple times and Merlin infers membership based on the fraction of times the loss increases.
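
Below is a minimal sketch of the Merlin ratio computation, assuming the adversary can query a black-box per-instance loss function; the noise scale and number of trials are illustrative parameters, not the exact settings used in our experiments:

```python
import numpy as np

def merlin_ratio(loss_fn, record, label, sigma=0.01, trials=100, rng=None):
    """Fraction of Gaussian perturbations that increase the record's loss
    relative to the unperturbed record (the Merlin ratio)."""
    rng = np.random.default_rng() if rng is None else rng
    base_loss = loss_fn(record, label)
    increases = 0
    for _ in range(trials):
        noisy = record + rng.normal(0.0, sigma, size=record.shape)
        if loss_fn(noisy, label) > base_loss:
            increases += 1
    return increases / trials

# Merlin predicts membership when the ratio exceeds a threshold phi_M,
# chosen with the same false-positive-rate procedure described above.
def merlin_is_member(ratio, phi_m):
    return ratio >= phi_m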

The Morgan Attack

Since Yeom and Merlin use different information to make their membership inferences, they do not always identify the same records as members; some members are more vulnerable to one attack than the other. Visualizing the attacks’ results together suggests that by eliminating the records with very low per-instance-loss, a combination of the two attacks may produce an improved PPV (positive predictive value). The intuition here is that an extremely low per-instance-loss may lead Merlin to identify a local minimum where there is in fact a near-global minimum (which is much less strongly correlated with membership).

The Morgan (Measuring lOss, Relatively Greater Around Neighborhood) attack uses three different thresholds: a lower threshold on per-instance loss (φL), an upper threshold on per-instance loss (φU), and a threshold on the ratio as used by Merlin (φM). If a record has a per-instance-loss that falls between φL and φU, and has a Merlin ratio of at least φM, Morgan identifies it as a member.
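
Combining the two signals is then a simple conjunction of thresholds; a sketch with illustrative names, reusing the Merlin ratio from above:

```python
def morgan_is_member(loss, merlin_ratio, phi_low, phi_high, phi_merlin):
    """Morgan flags a record as a member only when its per-instance loss
    falls between the two loss thresholds and its Merlin ratio is high."""
    return (phi_low <= loss <= phi_high) and (merlin_ratio >= phi_merlin)
```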


The figure shows the per-instance loss and Merlin ratio for Purchase-100X (an expanded version of the Purchase-100 dataset that we created for our experiments). Members and non-members are denoted by orange and purple points respectively. The boxes show the thresholds found by the threshold selection process (without access to the training data, but with the same data distribution), and illustrate the regions where members are identified by Morgan with very high confidence (PPV ∼ 1). (See the paper for details and more results.)

Imbalanced Priors

Previous work on membership inference attacks assumes a candidate pool where half of the candidates are members. For most settings, especially ones where there is a serious privacy risk for an individual of being identified as a dataset member, this assumption is unrealistic. It is important to understand how well inference attacks work when the adversary’s candidate pool has a different prior probability of being a member.

Here, the candidate pool from which the attacker attempts to select members has γ times more non-member records than member records. As our experiments show, even in situations that other papers do not consider, wherein there are many times more non-members than members, attacks are able to attain a high rate of positively identified members.
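
For intuition, the positive predictive value under a γ:1 non-member-to-member candidate pool follows directly from Bayes’ rule, assuming the attack’s true and false positive rates on the underlying distribution are known; a small illustrative computation:

```python
def ppv(tpr, fpr, gamma):
    """Positive predictive value when the candidate pool contains gamma
    times more non-members than members: TPR / (TPR + gamma * FPR)."""
    return tpr / (tpr + gamma * fpr)

# Example: an attack that identifies 2% of members with a 0.1% false
# positive rate is still correct about two-thirds of the time, even when
# non-members outnumber members 10 to 1.
print(ppv(0.02, 0.001, 10))  # ~0.67
```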

Conclusion

The Merlin and Morgan attacks can reliably identify members even in situations with imbalanced priors where other attacks fail to show meaningful inference risk.

There remains a large gap between what can be guaranteed using differential privacy methods and what can be inferred using known inference attacks. This means better inference attacks may exist, and our results show that there are concrete ways to improve attacks (e.g., our threshold-selection procedure) and to incorporate more information into them. We are especially interested in attacks that produce extremely high PPVs, even if only for a small fraction of candidates, since for most scenarios this is where the most serious privacy risks lie.

Full paper: Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David Evans. Revisiting Membership Inference Under Realistic Assumptions (arXiv).

Code: https://github.com/bargavj/EvaluatingDPML