Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models

Post by Hannah Chen. Our work on balanced adversarial training looks at how to train models that are robust to two different types of adversarial examples:

Hannah Chen, Yangfeng Ji, David Evans. Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models. In The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), Abu Dhabi, 7–11 December 2022. [ArXiv]

Adversarial Examples

At the broadest level, an adversarial example is an input crafted intentionally to confuse a model.

Read More…

Pointwise Paraphrase Appraisal is Potentially Problematic

Hannah Chen presented her paper, Pointwise Paraphrase Appraisal is Potentially Problematic, at the ACL 2020 Student Research Workshop:

The prevailing approach for training and evaluating paraphrase identification models frames the task as binary classification: the model is given a pair of sentences and is judged by how accurately it classifies pairs as either paraphrases or non-paraphrases. This pointwise evaluation method does not match the objective of most real-world applications well, so the goal of our work is to understand how models that perform well under pointwise evaluation may fail in practice, and to find better methods for evaluating paraphrase identification models.
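To make the pointwise setup concrete, here is a minimal sketch of how such an evaluation is scored. The classifier here is a hypothetical stand-in (a crude word-overlap heuristic, not the models studied in the paper); the point is only the evaluation shape: each sentence pair gets an independent binary prediction, and the model is scored by accuracy over the pairs.

```python
# Minimal sketch of pointwise paraphrase evaluation.
# is_paraphrase is a hypothetical stand-in model, not the paper's method.

def is_paraphrase(s1: str, s2: str) -> bool:
    # Hypothetical model: call two sentences paraphrases if their
    # bag-of-words Jaccard overlap is at least 0.5.
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    return len(w1 & w2) / max(len(w1 | w2), 1) >= 0.5

def pointwise_accuracy(pairs):
    # pairs: list of (sentence1, sentence2, gold_label), gold_label in {0, 1}.
    # Each pair is judged independently -- this is the "pointwise" framing.
    correct = sum(int(is_paraphrase(s1, s2)) == gold for s1, s2, gold in pairs)
    return correct / len(pairs)

pairs = [
    ("the cat sat on the mat", "the cat sat on a mat", 1),
    ("he bought a new car", "she sold her old bike", 0),
]
print(pointwise_accuracy(pairs))  # → 1.0
```

Under this framing, a model can score well on accuracy while still ranking a non-paraphrase above a true paraphrase for the same query sentence, which is the kind of mismatch with real-world use that the paper examines.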

Read More…
