Abstract
Multimodal sentiment analysis combines information from multiple modalities to make joint task decisions. In our experiments, however, we find that when the modalities of a sample carry conflicting sentiment information, the sample degrades the accuracy of the overall analysis task. We attribute this problem to multimodal information imbalance. To address it, we propose a multimodal interaction model (MIM). In this paper, we use cross-attention so that information from different modalities interacts fully, and we demonstrate the role of cross-attention in unimodal representation learning. In addition, we use a subspace to learn modality-specific features, with the aim of reducing the redundancy of modal information and improving the effectiveness of the information interaction process. The proposed model is compared with baselines on the MOSI and MOSEI multimodal sentiment analysis datasets. The experimental results show that it achieves superior performance, confirming the effectiveness of our model on multimodal sentiment analysis tasks.
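To make the two core ideas of the abstract concrete, here is a minimal sketch of cross-modal attention (one modality queries another) followed by a linear projection into a modality-specific subspace. This is an illustrative reconstruction, not the paper's implementation: the module names `CrossModalAttention` and `ModalitySubspace`, the use of PyTorch's `nn.MultiheadAttention`, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Scaled dot-product cross-attention: one modality queries another.

    Hypothetical sketch; the paper's exact architecture may differ.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query_mod: torch.Tensor, context_mod: torch.Tensor) -> torch.Tensor:
        # query_mod:   (batch, seq_q, dim), e.g. text features
        # context_mod: (batch, seq_kv, dim), e.g. audio or visual features
        out, _ = self.attn(query_mod, context_mod, context_mod)
        return out


class ModalitySubspace(nn.Module):
    """Projects features into a smaller modality-specific subspace,
    reducing redundancy before fusion (assumed to be a linear map here)."""

    def __init__(self, dim: int, subspace_dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, subspace_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(x))


# Toy usage: text attends to audio, then the result is projected
# into a modality-specific subspace.
if __name__ == "__main__":
    batch, seq, dim = 2, 8, 64
    text = torch.randn(batch, seq, dim)
    audio = torch.randn(batch, seq, dim)

    xattn = CrossModalAttention(dim)
    text_enriched = xattn(text, audio)   # text queries audio context
    sub = ModalitySubspace(dim, 32)
    text_specific = sub(text_enriched)   # modality-specific features
    print(text_specific.shape)           # torch.Size([2, 8, 32])
```

In a full model, the enriched representations of each modality pair would be fused for the final sentiment prediction; the sketch only shows one query/context direction.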
| Field | Value |
|---|---|
| Original language | English |
| Article number | 10 |
| Journal | Multimedia Systems |
| Volume | 30 |
| Issue number | 1 |
| DOIs | |
| State | Published - Feb 2024 |
| Externally published | Yes |
Keywords
- Crossmodal attention
- Multimodal fusion
- Multimodal sentiment analysis
- Subspace learning