Abstract
Named entity recognition (NER) is a fundamental task in natural language processing, typically addressed by neural conditional random field (CRF) models that treat it as a sequence labeling problem. Sentence-level information has been shown to benefit the task. With sophisticated neural architectures such as long short-term memory (LSTM) networks, implicit sentence-level global information can be fully exploited, and previous studies have demonstrated its effectiveness. In this work, we propose a new method for learning these sentence-level features in an explicit manner. Concretely, we introduce an auxiliary task, sentence-level named type prediction (i.e., determining whether a sentence contains a given entity type), to globally supervise the feature representation learning. We evaluate our method on six benchmark datasets covering several languages. The results show that our final model is highly effective, yielding significant improvements and highly competitive results on all datasets.
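The auxiliary supervision described above can be illustrated with a minimal sketch (not the authors' code): given BIO-style token tags, the sentence-level target is a multi-hot vector indicating which entity types occur in the sentence, and the auxiliary loss is combined with the CRF loss in a weighted sum. The type set and the weighting scheme below are assumptions for illustration only.

```python
# Illustrative sketch of the auxiliary sentence-level type-prediction task.
# ENTITY_TYPES is an assumed CoNLL-style type inventory, not from the paper.
ENTITY_TYPES = ["PER", "LOC", "ORG", "MISC"]

def sentence_type_targets(bio_tags):
    """Return a multi-hot vector: 1 if the sentence contains that entity type."""
    present = {tag.split("-", 1)[1] for tag in bio_tags if tag != "O"}
    return [1 if t in present else 0 for t in ENTITY_TYPES]

def joint_loss(crf_loss, aux_loss, alpha=0.5):
    """Hypothetical joint objective: CRF loss plus a weighted auxiliary
    sentence-level classification loss (alpha is an assumed weight)."""
    return crf_loss + alpha * aux_loss

# A sentence containing one person and one location mention:
tags = ["B-PER", "I-PER", "O", "O", "B-LOC", "O"]
print(sentence_type_targets(tags))  # -> [1, 1, 0, 0]
```

In a multi-task setup of this kind, both losses are typically backpropagated through the shared sentence encoder, which is how the sentence-level signal shapes the token-level feature representations.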
| Field | Value |
|---|---|
| Original language | English |
| Article number | 9388880 |
| Pages (from-to) | 1438-1448 |
| Number of pages | 11 |
| Journal | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
| Volume | 29 |
| DOIs | |
| State | Published - 2021 |
| Externally published | Yes |
Keywords
- Joint model
- learning with auxiliary tasks
- multi-task learning
- named entity recognition
Title: A Joint Model for Named Entity Recognition with Sentence-Level Entity Type Attentions