Ask/Vote on Questions

Ask your questions, and vote on others, on sli.do.

Overview

Recent studies show that many NLP systems are sensitive to small perturbations of their inputs and do not generalize well across datasets. This lack of robustness hinders the deployment of NLP systems in real-world applications. This tutorial aims to raise awareness of practical concerns about NLP robustness. It targets NLP researchers and practitioners who are interested in building reliable NLP systems. In particular, we will review recent studies that analyze the weaknesses of NLP systems when facing adversarial inputs and data with distribution shifts. We will provide the audience with a holistic view of

  1. how to use adversarial examples to examine the weaknesses of NLP models and facilitate debugging;
  2. how to enhance the robustness of existing NLP models and defend against adversarial inputs;
  3. how robustness considerations affect real-world NLP applications used in our daily lives.

We will conclude the tutorial by outlining future research directions in this area.
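To make the notion of a small input perturbation concrete, the sketch below (not part of the tutorial materials) shows how a single synonym substitution can flip the prediction of a toy sentiment classifier. The lexicon, synonym table, and classifier are hypothetical stand-ins for a real NLP model and attack method.

```python
# Minimal sketch of an adversarial word substitution (hypothetical toy model).
# A real attack would query an actual NLP model; a tiny keyword-based
# sentiment scorer stands in here so the example stays self-contained.

# Hypothetical lexicon: positive words score +1, negative words score -1.
LEXICON = {"great": 1.0, "good": 1.0, "fine": 0.1, "bad": -1.0, "terrible": -1.0}

def toy_sentiment(text: str) -> str:
    """Label text as 'positive' or 'negative' by summing word scores."""
    score = sum(LEXICON.get(w, 0.0) for w in text.lower().split())
    return "positive" if score > 0.5 else "negative"

# Hypothetical synonym table used to generate "small" perturbations.
SYNONYMS = {"great": ["fine"], "good": ["fine"]}

def word_substitution_attack(text: str):
    """Try single-word synonym swaps; return the first one that flips the label."""
    original_label = toy_sentiment(text)
    words = text.split()
    for i, w in enumerate(words):
        for candidate in SYNONYMS.get(w.lower(), []):
            perturbed = " ".join(words[:i] + [candidate] + words[i + 1:])
            if toy_sentiment(perturbed) != original_label:
                return perturbed, original_label, toy_sentiment(perturbed)
    return None

if __name__ == "__main__":
    result = word_substitution_attack("the movie was great")
    if result:
        perturbed, before, after = result
        print(f"'{perturbed}': prediction flipped from {before} to {after}")
```

A meaning-preserving swap ("great" to "fine") changes the prediction even though a human reader would still judge the review positive; attacks covered in the tutorial search for such perturbations against much larger models.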

Speakers

  • Kai-Wei Chang, UCLA
  • He He, NYU
  • Robin Jia, USC
  • Sameer Singh, UC Irvine

Outline

  • Motivation and Overview
  • Finding Lack of Robustness (Attacks)
    1. Writing Challenging Examples
    2. Generating Adversarial Examples
    3. Adversarial Trigger and Text Generation
    4. Training-Time Attacks
  • Making Models Robust (Defenses)
    1. Robustness to Spurious Correlations
    2. Adversarial Training for Defense
    3. Certified Robustness in NLP
    4. Test-Time Defense: Detecting Adversarial Examples
  • Conclusion, Future Directions, and Discussion

Slides