Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization

Abstract

The most advanced abstractive dialogue summarizers lack generalization ability to new domains, and existing research on domain adaptation for summarization generally relies on large-scale pre-training. To explore lightweight fine-tuning methods for domain adaptation of dialogue summarization, in this paper we propose an efficient and generalizable Domain-Oriented Prefix-tuning model, which uses a domain-word-initialized prefix module to alleviate domain entanglement and adopts discrete prompts to guide the model to focus on the key contents of dialogues and to enhance generalization. We conduct zero-shot experiments and build domain adaptation benchmarks on two multi-domain dialogue summarization datasets, TODSum and QMSum. Extensive experiments and qualitative analysis demonstrate the effectiveness of our method.
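To make the core idea concrete, the following is a minimal sketch (not the authors' released code) of prefix-tuning with domain-word initialization: a small set of trainable prefix vectors, seeded from the frozen backbone's embeddings of in-domain words, is prepended to each input sequence while the backbone itself stays frozen. The class name DomainOrientedPrefix and inputs such as domain_word_ids are hypothetical placeholders, and the toy embedding stands in for a real summarization backbone.

import torch
import torch.nn as nn


class DomainOrientedPrefix(nn.Module):
    """Trainable prefix whose vectors start from domain-word embeddings."""

    def __init__(self, embedding: nn.Embedding, domain_word_ids: list,
                 prefix_len: int = 10):
        super().__init__()
        dim = embedding.embedding_dim
        init = torch.randn(prefix_len, dim) * 0.02
        # Copy embeddings of in-domain words into the first prefix slots,
        # biasing the prefix toward the target domain from the start;
        # remaining slots keep their small random initialization.
        with torch.no_grad():
            for i, wid in enumerate(domain_word_ids[:prefix_len]):
                init[i] = embedding.weight[wid]
        self.prefix = nn.Parameter(init)  # the only trainable tensor

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the same prefix to every sequence in the batch.
        batch = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)


# Usage: freeze the backbone embeddings, train only the prefix.
vocab, dim = 1000, 64
embedding = nn.Embedding(vocab, dim)
embedding.weight.requires_grad_(False)       # backbone stays frozen
prefix_module = DomainOrientedPrefix(embedding, domain_word_ids=[5, 42, 7])

ids = torch.randint(0, vocab, (2, 16))       # a toy batch of token ids
augmented = prefix_module(embedding(ids))    # shape: (2, 10 + 16, 64)

Because only the prefix parameters receive gradients, adapting to a new domain touches a tiny fraction of the model's weights, which is what makes this style of fine-tuning lightweight.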

Publication
NAACL 2022
Lulu Zhao
Ph.D. Student

Abstractive Dialogue Summarization, Relation Extraction

Fujia Zheng
Postgraduate Student

Dialogue Summarization

Weihao Zeng
Postgraduate Student
Keqing He
Postgraduate Student

Dialogue Systems, Summarization, Pre-trained Language Models

Weiran Xu
Associate Professor, Master Supervisor, Ph.D. Supervisor

Information Retrieval, Pattern Recognition, Machine Learning

Yanan Wu
Postgraduate Student

Spoken Language Understanding