TY - GEN
T1 - Learning Domain Invariant Word Representations for Parsing Domain Adaptation
AU - Qiao, Xiuming
AU - Zhang, Yue
AU - Zhao, Tiejun
N1 - Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - We show that strong domain adaptation results for dependency parsing can be achieved using a conceptually simple method that learns domain-invariant word representations. Dependency parsing for low-resource domains, which lack labeled resources, has been a challenging task. Existing work considers adapting a model trained on a resource-rich domain to low-resource domains. A mainstream solution is to find a set of features shared across domains. For neural network models, word embeddings are a fundamental set of initial features, yet little work has investigated this simple aspect. We propose to learn domain-invariant word representations by fine-tuning pretrained word representations adversarially. Our parser achieves error reductions of 5.6% UAS and 7.9% LAS on PTB, and 4.2% UAS and 3.2% LAS on Genia, showing the effectiveness of domain-invariant word representations for alleviating lexical bias between source and target data.
AB - We show that strong domain adaptation results for dependency parsing can be achieved using a conceptually simple method that learns domain-invariant word representations. Dependency parsing for low-resource domains, which lack labeled resources, has been a challenging task. Existing work considers adapting a model trained on a resource-rich domain to low-resource domains. A mainstream solution is to find a set of features shared across domains. For neural network models, word embeddings are a fundamental set of initial features, yet little work has investigated this simple aspect. We propose to learn domain-invariant word representations by fine-tuning pretrained word representations adversarially. Our parser achieves error reductions of 5.6% UAS and 7.9% LAS on PTB, and 4.2% UAS and 3.2% LAS on Genia, showing the effectiveness of domain-invariant word representations for alleviating lexical bias between source and target data.
KW - Dependency parsing
KW - Domain adaptation
KW - Generative Adversarial Network
KW - Wasserstein distance
KW - Word representations
UR - https://www.scopus.com/pages/publications/85075570413
U2 - 10.1007/978-3-030-32233-5_62
DO - 10.1007/978-3-030-32233-5_62
M3 - Conference contribution
AN - SCOPUS:85075570413
SN - 9783030322328
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 801
EP - 813
BT - Natural Language Processing and Chinese Computing - 8th CCF International Conference, NLPCC 2019, Proceedings
A2 - Tang, Jie
A2 - Kan, Min-Yen
A2 - Zhao, Dongyan
A2 - Li, Sujian
A2 - Zan, Hongying
PB - Springer
T2 - 8th CCF International Conference on Natural Language Processing and Chinese Computing, NLPCC 2019
Y2 - 9 October 2019 through 14 October 2019
ER -