TY - GEN
T1 - TGAT-DGL
T2 - 2024 International Joint Conference on Neural Networks, IJCNN 2024
AU - Gao, Xiaoqian
AU - Zhou, Xiabing
AU - Cao, Rui
AU - Zhang, Min
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Multi-party dialogue reading comprehension is an extraction-based reading comprehension task that aims to understand dialogue with multiple interlocutors and answer related questions. The frequent rotation of topics and the irregular order of interlocutors in dialogues may lead to the scattered distribution of information in multi-party dialogues. This means that the model needs to effectively integrate information across multiple utterances and among various interlocutors. Although previous methods have made considerable efforts in mining and modeling dialogue-related features, they still encounter two key issues. On the one hand, these methods fail to solve the cross-utterance co-reference problem that arises from the coexistence of multiple topics and interlocutors in the dialogue. On the other hand, they mostly ignore the joint reasoning of multi-granularity dialogue-related features, which can parse the semantic space of multi-party dialogue from coarse to fine. To overcome these bottlenecks, we propose a dual-granularity information joint reasoning method, which performs hierarchical semantic modeling for multi-party dialogue based on graph attention networks. Specifically, we utilize discourse dependency relationships and interlocutor-aware temporal information to conduct coarse-grained semantic modeling, and perform fine-grained semantic refinement by leveraging token-level co-reference relationships. Our method demonstrates stable and substantial performance improvement when using different pre-trained language models as backbones and achieves a new state-of-the-art on the benchmark corpora Molweni and FriendsQA.
AB - Multi-party dialogue reading comprehension is an extraction-based reading comprehension task that aims to understand dialogue with multiple interlocutors and answer related questions. The frequent rotation of topics and the irregular order of interlocutors in dialogues may lead to the scattered distribution of information in multi-party dialogues. This means that the model needs to effectively integrate information across multiple utterances and among various interlocutors. Although previous methods have made considerable efforts in mining and modeling dialogue-related features, they still encounter two key issues. On the one hand, these methods fail to solve the cross-utterance co-reference problem that arises from the coexistence of multiple topics and interlocutors in the dialogue. On the other hand, they mostly ignore the joint reasoning of multi-granularity dialogue-related features, which can parse the semantic space of multi-party dialogue from coarse to fine. To overcome these bottlenecks, we propose a dual-granularity information joint reasoning method, which performs hierarchical semantic modeling for multi-party dialogue based on graph attention networks. Specifically, we utilize discourse dependency relationships and interlocutor-aware temporal information to conduct coarse-grained semantic modeling, and perform fine-grained semantic refinement by leveraging token-level co-reference relationships. Our method demonstrates stable and substantial performance improvement when using different pre-trained language models as backbones and achieves a new state-of-the-art on the benchmark corpora Molweni and FriendsQA.
KW - Coreference-aware information
KW - Dialogue reading comprehension
KW - Graph attention networks
UR - https://www.scopus.com/pages/publications/85204973062
U2 - 10.1109/IJCNN60899.2024.10651541
DO - 10.1109/IJCNN60899.2024.10651541
M3 - Conference contribution
AN - SCOPUS:85204973062
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2024 International Joint Conference on Neural Networks, IJCNN 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 30 June 2024 through 5 July 2024
ER -