自然语言处理徐蔚然老师研究组 (Prof. Weiran Xu's Natural Language Processing Research Group)
All Publications
CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery
Computer Science (CS) stands as a testament to the intricacies of human intelligence, profoundly advancing the development of …
宋晓帅, 刁沐熙, 董冠霆, 王正阳, 付雨佳, Runqi Qiao, 王哲旭, 傅大源, 吴黄璇, 梁斌, 曾伟豪, 王业捷, 公却卓玛, 于嘉宁, Qiuna Tan, 徐蔚然
PDF · Cite · Code · DOI
BootTOD: Bootstrap Task-oriented Dialogue Representations by Aligning Diverse Responses
Pre-trained language models have been successful in many scenarios. However, their usefulness in task-oriented dialogues is limited due …
曾伟豪, 何可清, 王业捷, 傅大源, 徐蔚然
PDF · Cite · DOI
COLING 2024
DivTOD: Unleashing the Power of LLMs for Diversifying Task-Oriented Dialogue Representations
Language models pre-trained on general text have achieved impressive results in diverse fields. Yet, the distinct linguistic …
曾伟豪, 傅大源, 何可清, 王业捷, 徐钰凯, 徐蔚然
PDF · Cite · DOI
NAACL 2024
Faceptor: A Generalist Model for Face Perception
With the comprehensive research conducted on various face analysis tasks, there is a growing interest among researchers to develop a …
秦立雄, Mei Wang, Xuannan Liu, Yuhang Zhang, Wei Deng, 宋晓帅, 徐蔚然, Weihong Deng
PDF · Cite · Code · DOI
ECCV 2024
PreAct: Predicting Future in ReAct Enhances Agent's Planning Ability
Addressing the discrepancies between predictions and actual outcomes often aids individuals in expanding their thought processes and …
傅大源, Jianzhao Huang, Siyuan Lu, 董冠霆, 王业捷, 何可清, 徐蔚然
PDF · Cite · Code · DOI
Multi-Perspective Consistency Enhances Confidence Estimation in Large Language Models
In the deployment of large language models (LLMs), accurate confidence estimation is critical for assessing the credibility of model …
王霈, 王业捷, 刁沐熙, 何可清, 董冠霆, 徐蔚然
PDF · Cite · DOI
DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning
Code Large Language Models (Code LLMs) have demonstrated outstanding performance in code-related tasks. Several instruction tuning …
王业捷, 何可清, 董冠霆, 王霈, 曾伟豪, 刁沐熙, 牟宇滔, Mengdi Zhang, Jingang Wang, Xunliang Cai, 徐蔚然
PDF · Cite · DOI
ACL 2024
Knowledge Editing on Black-box Large Language Models
Knowledge editing (KE) aims to efficiently and precisely modify the behavior of large language models (LLMs) to update specific …
宋晓帅, 王正阳, 何可清, 董冠霆, 牟宇滔, 赵金旭, 徐蔚然
PDF · Cite · Code · DOI
Semantic Parsing by Large Language Models for Intricate Updating Strategies of Zero-Shot Dialogue State Tracking
Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring and annotating task-oriented dialogues, which can be time …
Yuxiang Wu, 董冠霆, 徐蔚然
PDF · Code · DOI
EMNLP 2023
Revisit Input Perturbation Problems for LLMs: A Unified Robustness Evaluation Framework for Noisy Slot Filling Task
We utilize a multi-level data augmentation method (character, word, and sentence levels) to construct a candidate data pool, and …
董冠霆, 赵金旭, 回亭风, 郭岱驰, Wenlong Wan, Boqi Feng, Yueyan Qiu, 公却卓玛, 何可清, 王泽晨, 徐蔚然
PDF · Code · DOI
NLPCC 2023
Large Language Models Meet Open-World Intent Discovery and Recognition: An Evaluation of ChatGPT
The tasks of out-of-domain (OOD) intent discovery and generalized intent discovery (GID) aim to extend a closed intent classifier to …
宋晓帅, 何可清, 王霈, 董冠霆, 牟宇滔, Jingang Wang, Yunsen Xian, Xunliang Cai, 徐蔚然
PDF · Code · DOI
EMNLP 2023
DemoNSF: A Multi-task Demonstration-based Generative Framework for Noisy Slot Filling Task
Recently, prompt-based generative frameworks have shown impressive capabilities in sequence labeling tasks. However, in practical …
董冠霆, 回亭风, 公却卓玛, 赵金旭, 郭岱驰, Gang Zhao, 何可清, 徐蔚然
PDF · Code · DOI
EMNLP 2023
Continual Generalized Intent Discovery: Marching Towards Dynamic and Open-world Intent Recognition
In a practical dialogue system, users may input out-of-domain (OOD) queries. The Generalized Intent Discovery (GID) task aims to …
宋晓帅, 牟宇滔, 何可清, Yueyan Qiu, 王霈, 徐蔚然
PDF · Code · DOI
EMNLP 2023
Bridging the KB-Text Gap: Leveraging Structured Knowledge-aware Pre-training for KBQA
Knowledge Base Question Answering (KBQA) aims to answer natural language questions with factual information such as entities and …
董冠霆, Rumei Li, Sirui Wang, Yupeng Zhang, Yunsen Xian, 徐蔚然
PDF · Code · DOI
CIKM 2023
APP: Adaptive Prototypical Pseudo-Labeling for Few-shot OOD Detection
Detecting out-of-domain (OOD) intents from user queries is essential for a task-oriented dialogue system. Previous OOD detection …
王霈, 何可清, 牟宇滔, 宋晓帅, 吴亚楠, Jingang Wang, Yunsen Xian, Xunliang Cai, 徐蔚然
PDF · DOI
EMNLP 2023
A Multi-Task Semantic Decomposition Framework with Task-Specific Pre-training for Few-Shot NER
The objective of few-shot named entity recognition is to identify named entities with limited labeled instances. Previous works have …
董冠霆, 王泽晨, 赵金旭, Gang Zhao, 郭岱驰, 傅大源, 回亭风, 曾晨, 何可清, 李雪峰, 王礼文, Xinyue Cui, 徐蔚然
PDF · Code · DOI
CIKM 2023
Value Type: The Bridge to a Better DST Model
The value type of a slot can provide a lot of useful information for DST tasks. However, it has been ignored in most previous works. In …
高琪翔, Mingyang Sun, 牟宇滔, 曾晨, 徐蔚然
PDF · DOI
ACL 2023
Seen to Unseen: Exploring Compositional Generalization of Multi-Attribute Controllable Dialogue Generation
Existing controllable dialogue generation work focuses on the single-attribute control and lacks generalization capability to …
曾伟豪, 赵璐璐, 何可清, 耿若彤, Jingang Wang, Wei Wu, 徐蔚然
PDF · DOI
ACL 2023
Revisit Out-Of-Vocabulary Problem For Slot Filling: A Unified Contrastive Framework With Multi-Level Data Augmentations
In real dialogue scenarios, the existing slot filling model, which tends to memorize entity patterns, has a significantly reduced …
郭岱驰, 董冠霆, 傅大源, Yuxiang Wu, 曾晨, 回亭风, 王礼文, 李雪峰, 王泽晨, 何可清, Xinyue Cui, 徐蔚然
PDF · DOI
ICASSP 2023
Generative zero-shot prompt learning for cross-domain slot filling with inverse prompting
Zero-shot cross-domain slot filling aims to transfer knowledge from the labeled source domain to the unlabeled target domain. Existing …
李雪峰, 王礼文, 董冠霆, 何可清, Jinzheng Zhao, 雷浩, Jiachi Liu, 徐蔚然
PDF · Code · DOI
ACL 2023
FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue
Pre-trained language models based on general text have enabled huge success in NLP scenarios. But the intrinsic difference of linguistic …
曾伟豪, 何可清, 王业捷, 曾晨, Jingang Wang, Yunsen Xian, 徐蔚然
PDF · Code · DOI
ACL 2023
Decoupling Pseudo Label Disambiguation and Representation Learning for Generalized Intent Discovery
Generalized intent discovery aims to extend a closed-set in-domain intent classifier to an open-world intent set including in-domain …
牟宇滔, 宋晓帅, 何可清, 曾晨, 王霈, Jingang Wang, Yunsen Xian, 徐蔚然
PDF · Code · DOI
ACL 2023
A Prototypical Semantic Decoupling Method via Joint Contrastive Learning for Few-Shot Named Entity Recognition
Few-shot named entity recognition (NER) aims at identifying named entities based on only few labeled instances. Most existing …
董冠霆, 王泽晨, 王礼文, 郭岱驰, 傅大源, Yuxiang Wu, 曾晨, 李雪峰, 回亭风, 何可清, Xinyue Cui, 高琪翔, 徐蔚然
PDF · DOI
ICASSP 2023
Watch the Neighbors: A Unified K-Nearest Neighbor Contrastive Learning Framework for OOD Intent Discovery
Discovering out-of-domain (OOD) intent is important for developing new skills in task-oriented dialogue systems. The key challenges lie …
牟宇滔, 何可清, 王霈, 吴亚楠, Jingang Wang, Wei Wu, 徐蔚然
EMNLP 2022
UniNL: Aligning Representation Learning with Scoring Function for OOD Detection via Unified Neighborhood Learning
Detecting out-of-domain (OOD) intents from user queries is essential for avoiding wrong operations in task-oriented dialogue systems. …
牟宇滔, 王霈, 何可清, 吴亚楠, Jingang Wang, Wei Wu, 徐蔚然
EMNLP 2022
Exploiting Domain-Slot Related Keywords Description for Few-Shot Cross-Domain Dialogue State Tracking
Collecting dialogue data with domain-slot-value labels for dialogue state tracking (DST) could be a costly process. In this paper, we …
高琪翔, 董冠霆, 牟宇滔, 王礼文, 曾晨, 郭岱驰, Mingyang Sun, 徐蔚然
EMNLP 2022
Entity-level Interaction via Heterogeneous Graph for Multimodal Named Entity Recognition
Multimodal Named Entity Recognition (MNER) faces two specific challenges: 1) How to capture useful entity-related visual information; …
Gang Zhao, 董冠霆, Yidong Shi, Haolong Yan, 徐蔚然, Si Li
EMNLP 2022 Findings
Disentangling Confidence Score Distribution for Out-of-Domain Intent Detection with Energy-Based Learning
Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system. Traditional …
吴亚楠, 曾致远, 何可清, 牟宇滔, 王霈, 严渊蒙, 徐蔚然
EMNLP 2022 workshop
Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems
Recent advances in neural approaches greatly improve task-oriented dialogue (TOD) systems which assist users to accomplish their goals. …
曾伟豪, 何可清, 王泽晨, 傅大源, 董冠霆, 耿若彤, 王霈, Jingang Wang, Chaobo Sun, Wei Wu, 徐蔚然
PDF · Cite · Code
EMNLP 2022 workshop (SereTOD)
PSSAT: A Perturbed Semantic Structure Awareness Transferring Method for Perturbation-Robust Slot Filling
Most existing slot filling models tend to memorize inherent patterns of entities and corresponding contexts from training data. …
董冠霆, 郭岱驰, 王礼文, 李雪峰, 王泽晨, 曾晨, 何可清, Jinzheng Zhao, 雷浩, Xinyue Cui, Yi Huang, Junlan Feng, 徐蔚然
PDF · Cite
COLING 2022
Generalized Intent Discovery: Learning from Open World Dialogue System
Traditional intent classification models are based on a pre-defined intent set and only recognize limited in-domain (IND) intent …
牟宇滔, 何可清, 吴亚楠, 王霈, Jingang Wang, Wei Wu, Yi Huang, Junlan Feng, 徐蔚然
PDF · Cite · Code
COLING 2022
Distribution Calibration for Out-of-Domain Detection with Bayesian Approximation
Out-of-Domain (OOD) detection is a key component in a task-oriented dialog system, which aims to identify whether a query falls outside …
吴亚楠, 曾致远, 何可清, 牟宇滔, 王霈, 徐蔚然
PDF · Cite · Code
COLING 2022
ADPL: Adversarial Prompt-based Domain Adaptation for Dialogue Summarization with Knowledge Disentanglement
Traditional dialogue summarization models rely on a large-scale manually-labeled corpus, lacking generalization ability to new domains, …
赵璐璐, 郑馥嘉, 曾伟豪, 何可清, 耿若彤, Huixing Jiang, Wei Wu, 徐蔚然
SIGIR 2022
Revisit Overconfidence for OOD Detection: Reassigned Contrastive Learning with Adaptive Class-dependent Threshold
Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system. A key challenge of …
吴亚楠, 何可清, 严渊蒙, 高琪翔, 曾致远, 郑馥嘉, 赵璐璐, Huixing Jiang, Wei Wu, 徐蔚然
NAACL 2022
Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization
The most advanced abstractive dialogue summarizers lack generalization ability on new domains, and the existing research on domain …
赵璐璐, 郑馥嘉, 曾伟豪, 何可清, 徐蔚然, Huixing Jiang, Wei Wu, 吴亚楠
NAACL 2022
Disentangled Knowledge Transfer for OOD Intent Discovery with Unified Contrastive Learning
Discovering Out-of-Domain (OOD) intents is essential for developing new skills in a task-oriented dialogue system. The key challenge is …
牟宇滔, 何可清, 吴亚楠, 曾致远, 徐红, Huixing Jiang, Wei Wu, 徐蔚然
ACL 2022
A Robust Contrastive Alignment Method For Multi-Domain Text Classification
Multi-domain text classification can automatically classify texts in various scenarios. Due to the diversity of human languages, texts …
李雪峰, 雷浩, 王礼文, 董冠霆, Jinzheng Zhao, Jiachi Liu, 徐蔚然, Chunyun Zhang
ICASSP 2022
Large-Scale Relation Learning for Question Answering over Knowledge Bases with Pre-trained Language Models
The key challenge of question answering over knowledge bases (KBQA) is the inconsistency between the natural language questions and the …
严渊蒙, Rumei Li, Sirui Wang, Hongzhi Zhang, Daoguang Zan, Fuzheng Zhang, Wei Wu, 徐蔚然
PDF · Cite · Code
EMNLP 2021
Gradient-Based Adversarial Factual Consistency Evaluation for Abstractive Summarization
Neural abstractive summarization systems have made significant progress in recent years. However, excessive abstractiveness …
曾致远, Jiaze Chen, 徐蔚然, Lei Li
PDF · Cite · Code
EMNLP 2021
Give the Truth: Incorporate Semantic Slot into Abstractive Dialogue Summarization
Abstractive dialogue summarization suffers from a lot of factual errors, which are due to scattered salient elements in the …
赵璐璐, 曾伟豪, 徐蔚然, 郭军
PDF · Cite
EMNLP 2021
Bridge to Target Domain by Prototypical Contrastive Learning and Label Confusion: Re-explore Zero-Shot Learning for Slot Filling
Zero-shot cross-domain slot filling alleviates the data dependence in the case of data scarcity in the target domain, which has aroused …
王礼文, 李雪峰, Jiachi Liu, 何可清, 严渊蒙, 徐蔚然
PDF · Cite · Code
EMNLP 2021
A Finer-grain Universal Dialogue Semantic Structures based Model For Abstractive Dialogue Summarization
Although abstractive summarization models have achieved impressive results on document summarization tasks, their performance on …
雷粤杰, 郑馥嘉, 严渊蒙, 何可清, 徐蔚然
PDF · Cite · Code
EMNLP 2021
Scheduled Dialog Policy Learning: An Automatic Curriculum Learning Framework for Task-oriented Dialog System
刘思宏, Jinchao Zhang, Keqing He, 徐蔚然, Jie Zhou
PDF · Cite · DOI
ACL 2021
Novel Slot Detection: A Benchmark for Discovering Unknown Slot Types in the Task-Oriented Dialogue System
Existing slot filling models can only recognize pre-defined in-domain slot types from a limited slot set. In the practical application, …
吴亚楠, 曾致远, 何可清, 徐红, 严渊蒙, Huixing Jiang, 徐蔚然
PDF · Cite · Code
ACL 2021
Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning
Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system. A key challenge of …
曾致远, 何可清, 严渊蒙, 刘子君, 吴亚楠, 徐红, Huixing Jiang, 徐蔚然
PDF · Cite · Code
ACL 2021
ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer
Learning high-quality sentence representations benefits a wide range of natural language processing tasks. Though BERT-based …
严渊蒙, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, 徐蔚然
PDF · Cite · Code
ACL 2021
Hierarchical Speaker-aware Sequence-to-sequence Model for Dialogue Summarization
雷粤杰, 严渊蒙, 曾致远, 何可清, Ximing Zhang, 徐蔚然
PDF · Cite · DOI
ICASSP 2021
Dynamically Disentangling Social Bias from Task-Oriented Representations with Adversarial Attack
Representation learning is widely used in NLP for a vast range of tasks. However, representations derived from text corpora often …
王礼文, 严渊蒙, 何可清, 吴亚楠, 徐蔚然
PDF · Cite · Code · DOI
NAACL 2021
Adversarial Self-Supervised Learning for Out-of-Domain Detection
Detecting out-of-domain (OOD) intents is crucial for the deployed task-oriented dialogue system. Previous unsupervised OOD detection …
曾致远, 何可清, 严渊蒙, 徐红, 徐蔚然
PDF · Cite · Code · DOI
NAACL 2021
Adversarial Generative Distance-Based Classifier for Robust Out-of-Domain Detection
Detecting out-of-domain (OOD) intents is critical in a task-oriented dialog system. Existing methods rely heavily on extensive manually …
曾致远, 徐红, 何可清, 严渊蒙, 刘思宏, 刘子君, 徐蔚然
PDF · Cite · DOI
ICASSP 2021
Utilizing Graph Neural Networks to Improving Dialogue-based Relation Extraction
Relation extraction has been an active research interest in the field of Natural Language Processing (NLP). The past works primarily …
赵璐璐, 徐蔚然, Sheng Gao, Jun Guo
PDF · Cite · DOI
Neurocomputing
From Context-aware to Knowledge-aware: Boosting OOV Tokens Recognition in Slot Tagging with Background Knowledge
Neural-based context-aware models for slot tagging tasks in language understanding have achieved state-of-the-art performance, …
何可清, 严渊蒙, 徐蔚然
PDF · Cite · DOI
Neurocomputing
Improving Abstractive Dialogue Summarization with Graph Structures and Topic Words
Recently, people have begun paying more attention to the abstractive dialogue summarization task. Since the information flows …
赵璐璐, 徐蔚然, Jun Guo
PDF · Cite · DOI
COLING 2020
Contrastive Zero-Shot Learning for Cross-Domain Slot Filling with Adversarial Attack
Zero-shot slot filling has widely arisen to cope with data scarcity in target domains. However, previous approaches often ignore …
何可清, Jinchao Zhang, 严渊蒙, 徐蔚然, Cheng Niu, Jie Zhou
PDF · Cite · DOI
COLING 2020
A Deep Generative Distance-Based Classifier for Out-of-Domain Detection with Mahalanobis Space
Detecting out-of-domain (OOD) input intents is critical in the task-oriented dialog system. Different from most existing methods that …
徐红, 何可清, 严渊蒙, 刘思宏, 刘子君, 徐蔚然
PDF · Cite · Code · DOI
COLING 2020
Adversarial Semantic Decoupling for Recognizing Open-Vocabulary Slots
Open-vocabulary slots, such as file name, album name, or schedule title, significantly degrade the performance of neural-based slot …
严渊蒙, 何可清, 徐红, 刘思宏, Fanyu Meng, Min Hu, 徐蔚然
PDF · Cite · Code · DOI
EMNLP 2020
CGTR: Convolution Graph Topology Representation for Document Ranking
Contextualized neural language models have gained much attention in Information Retrieval (IR) with their ability to achieve better text …
Yuanyuan Qi, Jiayue Zhang, Yansong Liu, 徐蔚然, Jun Guo
PDF · Cite · Code · DOI
CIKM 2020
Learning Label-Relational Output Structure for Adaptive Sequence Labeling
Sequence labeling is a fundamental task of natural language understanding. Recent neural models for sequence labeling tasks achieve …
何可清, 严渊蒙, 徐红, 刘思宏, 刘子君, 徐蔚然
PDF · Cite · DOI
IJCNN 2020
Adversarial Cross-Lingual Transfer Learning for Slot Tagging of Low-Resource Languages
Slot tagging is a key component in a task-oriented dialogue system. Conversational agents need to understand human input by training on …
何可清, 严渊蒙, 徐蔚然
PDF · Cite · DOI
IJCNN 2020
Learning to Tag OOV Tokens by Integrating Contextual Representation and Background Knowledge
Neural-based context-aware models for slot tagging have achieved state-of-the-art performance. However, the presence of …
何可清, 严渊蒙, 徐蔚然
PDF · Cite · DOI
ACL 2020
Generative Adversarial Zero-Shot Relation Learning for Knowledge Graphs
Large-scale knowledge graphs (KGs) are shown to become more important in current information systems. To expand the coverage of KGs, …
秦鹏达, Xin Wang, Wenhu Chen, Chunyun Zhang, 徐蔚然, William Yang Wang
PDF · Cite · DOI
AAAI 2020
Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning
Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not …
秦鹏达, 徐蔚然, William Yang Wang
PDF · Cite · Code · DOI
ACL 2018
DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction
Distant supervision can effectively label data for relation extraction, but suffers from the noise labeling problem. Recent works …
秦鹏达, 徐蔚然, William Yang Wang
PDF · Cite · DOI
ACL 2018