
LLM-Assisted Data Augmentation for Chinese Dialogue-Level Dependency Parsing

Research output: Contribution to journal › Article › peer-review

Abstract

Dialogue-level dependency parsing, despite growing academic interest, often underperforms due to a shortage of annotated resources. A potential solution to this challenge is data augmentation. In recent years, large language models (LLMs) have demonstrated strong generation capabilities, which can greatly facilitate data augmentation. In this study, we focus on Chinese dialogue-level dependency parsing, presenting three simple and effective LLM-based strategies to augment the original training instances: word-level, syntax-level, and discourse-level augmentation. These strategies enable LLMs to either preserve or modify dependency structures, thereby ensuring annotation accuracy while increasing the diversity of instances at different levels. We conduct experiments on the benchmark dataset released by Jiang et al. (2023) to validate our approach. Results show that our method greatly boosts parsing performance in various settings, particularly for dependencies among elementary discourse units. Lastly, we provide an in-depth analysis of the key points of our data augmentation strategies.
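To make the word-level strategy concrete, below is a minimal, hypothetical Python sketch (not the authors' released code). It assumes a generic chat-completion call, stubbed here as `ask_llm`, and illustrates why one-to-one synonym replacement lets the original dependency tree be reused for the augmented sentence.

```python
# Minimal sketch of structure-preserving word-level augmentation.
# `ask_llm` is a hypothetical stand-in for any LLM chat-completion API.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's chat API."""
    raise NotImplementedError

def word_level_augment(tokens: list[str], heads: list[int],
                       labels: list[str]) -> list[str]:
    """Ask the LLM to swap individual words for in-context synonyms.

    Because replacements are one-to-one (word count and order unchanged),
    the original dependency heads and labels remain valid for the new
    token sequence, which is what makes this strategy structure-preserving.
    """
    sentence = "".join(tokens)  # Chinese text: no spaces between tokens
    prompt = (
        "Replace some words in the following Chinese sentence with "
        "synonyms, one word at a time, keeping the number of words and "
        f"the word order unchanged. Return the words separated by '/'.\n"
        f"Sentence: {sentence}"
    )
    new_tokens = ask_llm(prompt).strip().split("/")
    # Keep the augmented instance only if token alignment is intact, so
    # (new_tokens, heads, labels) forms a valid new training example.
    if len(new_tokens) == len(tokens):
        return new_tokens
    return tokens
```

The syntax- and discourse-level strategies would differ mainly in the prompt and in whether the dependency structure is re-derived rather than copied; this sketch covers only the structure-preserving case.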

Original language: English
Pages (from-to): 867-891
Number of pages: 25
Journal: Computational Linguistics
Volume: 50
Issue number: 3
DOIs
State: Published - Sep 2024
Externally published: Yes

