EMNLP 2021 Tutorial on Robust NLP
Ask/Vote on Questions
Ask your questions, and vote on others' questions, on sli.do.
Overview
Recent studies show that many NLP systems are sensitive to small perturbations of their inputs and do not generalize well across datasets. This lack of robustness hinders the deployment of NLP systems in real-world applications. This tutorial aims to raise awareness of practical concerns about NLP robustness. It targets NLP researchers and practitioners who are interested in building reliable NLP systems. In particular, we will review recent studies on analyzing the weaknesses of NLP systems when they face adversarial inputs and data with distribution shifts. We will provide the audience with a holistic view of
- how to use adversarial examples to examine the weaknesses of NLP models and facilitate debugging;
- how to enhance the robustness of existing NLP models and defend against adversarial inputs;
- how robustness considerations affect real-world NLP applications used in our daily lives.
We will conclude the tutorial by outlining future research directions in this area.
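To make the notion of a "small perturbation" concrete, here is a minimal, self-contained Python sketch. It is our own toy illustration, not code from the tutorial: the keyword-based classifier and all names in it are hypothetical, and real attacks of this kind target neural models. A deliberately brittle sentiment classifier flips its prediction under a meaning-preserving one-word synonym swap, the failure mode that word-substitution adversarial attacks exploit.

```python
# Toy illustration (hypothetical classifier, not from the tutorial):
# a brittle keyword-based sentiment model flips its prediction when
# one word is replaced with a synonym that preserves the meaning.

def classify(text: str) -> str:
    """Label a sentence 'positive' if it contains a known positive cue."""
    positive_cues = {"great", "wonderful", "love"}
    words = set(text.lower().replace(".", "").split())
    return "positive" if words & positive_cues else "negative"

original = "The acting was great."
perturbed = "The acting was terrific."  # one-word synonym swap

print(classify(original))   # -> positive
print(classify(perturbed))  # -> negative: the prediction flips even
                            #    though the meaning is unchanged
```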
Speakers
Outline
- Motivation and Overview
- Finding Lack of Robustness (Attacks)
  - Writing Challenging Examples
  - Generating Adversarial Examples
  - Adversarial Trigger and Text Generation
  - Training-Time Attacks
- Making Models Robust (Defenses)
  - Robustness to Spurious Correlations
  - Adversarial Training for Defense
  - Certified Robustness in NLP
  - Test-Time Defense: Detecting Adversarial Examples
- Conclusion, Future Directions, and Discussion