TY - GEN
T1 - Improving Conversational Aspect-Based Sentiment Quadruple Analysis with Overall Modeling
AU - Cai, Chenran
AU - Zhao, Qin
AU - Xu, Ruifeng
AU - Qin, Bing
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - In this paper, we describe the experimental schemes of Team HLT-base for NLPCC-2023-Shared-Task-4 Conversational Aspect-based Sentiment Quadruple Analysis (ConASQ). Different from the aspect-based sentiment quadruple analysis task, the ConASQ task requires modeling the relationships between different utterances in context. Previous works commonly apply an attention mechanism (e.g., self-attention, a transformer layer) to model the interaction of different utterances after extracting the features of each utterance. However, a single self-attention or transformer layer may not capture this interaction effectively. To address this issue, we propose a simple and efficient method. Specifically, we concatenate all utterances into a single sequence and feed this sequence into the pre-trained model, which can better construct the representations of utterances from scratch. Then, we utilize different mask matrices to model the features of dialogue threads, speakers, and replies. Finally, we apply the grid-tagging method for quadruple extraction. Extensive experimental results show that our proposed framework outperforms other competitive methods and achieves 2nd place in the ConASQ competition.
AB - In this paper, we describe the experimental schemes of Team HLT-base for NLPCC-2023-Shared-Task-4 Conversational Aspect-based Sentiment Quadruple Analysis (ConASQ). Different from the aspect-based sentiment quadruple analysis task, the ConASQ task requires modeling the relationships between different utterances in context. Previous works commonly apply an attention mechanism (e.g., self-attention, a transformer layer) to model the interaction of different utterances after extracting the features of each utterance. However, a single self-attention or transformer layer may not capture this interaction effectively. To address this issue, we propose a simple and efficient method. Specifically, we concatenate all utterances into a single sequence and feed this sequence into the pre-trained model, which can better construct the representations of utterances from scratch. Then, we utilize different mask matrices to model the features of dialogue threads, speakers, and replies. Finally, we apply the grid-tagging method for quadruple extraction. Extensive experimental results show that our proposed framework outperforms other competitive methods and achieves 2nd place in the ConASQ competition.
KW - Conversation Aspect-based Sentiment Quadruple
UR - https://www.scopus.com/pages/publications/85174488597
U2 - 10.1007/978-3-031-44699-3_14
DO - 10.1007/978-3-031-44699-3_14
M3 - Conference contribution
AN - SCOPUS:85174488597
SN - 9783031446986
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 149
EP - 161
BT - Natural Language Processing and Chinese Computing - 12th National CCF Conference, NLPCC 2023, Proceedings
A2 - Liu, Fei
A2 - Duan, Nan
A2 - Xu, Qingting
A2 - Hong, Yu
PB - Springer Science and Business Media Deutschland GmbH
T2 - 12th National CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2023
Y2 - 12 October 2023 through 15 October 2023
ER -