Revisit Input Perturbation Problems for LLMs: A Unified Robustness Evaluation Framework for Noisy Slot Filling Task

Abstract

We utilize a multi-level data augmentation method (character, word, and sentence levels) to construct a candidate data pool, and carefully design two automatic task demonstration construction strategies (instance-level and entity-level) with various prompt templates. Our aim is to assess how well various robustness methods of LLMs perform in real-world noisy scenarios. The experiments demonstrate that current open-source LLMs generally achieve limited perturbation robustness. Based on these experimental observations, we make some forward-looking suggestions to fuel the research in this direction.
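To make the multi-level augmentation idea concrete, below is a minimal illustrative sketch of character-, word-, and sentence-level perturbations used to build a candidate data pool. It is not the paper's implementation; all function names, noise types, and rates are assumptions for demonstration only.

```python
# Illustrative sketch only: multi-level perturbation in the spirit of the
# character/word/sentence-level augmentation described in the abstract.
# Function names and noise choices are assumptions, not the authors' code.
import random

random.seed(0)


def char_level_noise(text: str, rate: float = 0.1) -> str:
    """Randomly drop or swap adjacent characters to simulate typos."""
    chars, out, i = list(text), [], 0
    while i < len(chars):
        r = random.random()
        if r < rate / 2:                      # drop a character
            i += 1
            continue
        if r < rate and i + 1 < len(chars):   # swap adjacent characters
            out.extend([chars[i + 1], chars[i]])
            i += 2
            continue
        out.append(chars[i])
        i += 1
    return "".join(out)


def word_level_noise(text: str, rate: float = 0.15) -> str:
    """Randomly delete or duplicate words (e.g., speech-style repetition)."""
    out = []
    for w in text.split():
        r = random.random()
        if r < rate / 2:          # delete the word
            continue
        out.append(w)
        if r > 1 - rate / 2:      # duplicate the word
            out.append(w)
    return " ".join(out) if out else text


def sentence_level_noise(text: str) -> str:
    """Prepend a distracting carrier phrase as a crude stand-in for
    sentence-level paraphrasing."""
    carriers = ["could you please", "i was wondering if you can", "um well"]
    return f"{random.choice(carriers)} {text}"


def build_candidate_pool(utterance: str, n_per_level: int = 3) -> list[str]:
    """Construct a small candidate pool of perturbed utterances."""
    pool = []
    for _ in range(n_per_level):
        pool.append(char_level_noise(utterance))
        pool.append(word_level_noise(utterance))
        pool.append(sentence_level_noise(utterance))
    return pool


if __name__ == "__main__":
    for cand in build_candidate_pool("book a flight from boston to denver tomorrow"):
        print(cand)
```

In practice, perturbed utterances drawn from such a pool would be paired with prompt templates and instance-level or entity-level demonstrations before being fed to the evaluated LLMs.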

Publication
NLPCC 2023
Guanting Dong
Postgraduate Student

Spoken Language Understanding and related applications

Jinxu Zhao
Postgraduate Student
Tingfeng Hui
Postgraduate Student
Daichi Guo
Postgraduate Student

Debiasing, Class Imbalance

Zhuoma GongQue
Postgraduate Student
Keqing He
Postgraduate Student

Dialogue Systems, Summarization, Pre-trained Language Models

Zechen Wang
Postgraduate Student
Weiran Xu
Associate Professor, Master Supervisor, Ph.D. Supervisor

Information Retrieval, Pattern Recognition, Machine Learning