
A Joint Model for Named Entity Recognition with Sentence-Level Entity Type Attentions

  • Tao Qian
  • Meishan Zhang*
  • Yinxia Lou
  • Daiwen Hua

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Named entity recognition (NER) is a fundamental task in natural language processing, typically addressed by neural conditional random field (CRF) models that treat it as a sequence labeling problem. Sentence-level information has been shown to benefit the task. Equipped with sophisticated neural structures such as long short-term memory (LSTM) networks, implicit sentence-level global information can be exploited, and this has been demonstrated effective in previous studies. In this work, we propose a new method for better learning such sentence-level features in an explicit manner. Concretely, we introduce an auxiliary task, sentence-level named type prediction (i.e., determining whether a sentence contains a certain kind of named type), to supervise the feature representation learning globally. We conduct experiments on six benchmark datasets of various languages to evaluate our method. The results show that our final model is highly effective, yielding significant improvements and highly competitive results on all datasets.
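The auxiliary task described above needs sentence-level supervision signals derived from the token-level annotations. A minimal sketch of how such targets could be built (this is an illustrative assumption, not the authors' code; the `ENTITY_TYPES` set and function name are hypothetical, using a CoNLL-style tag inventory):

```python
# Hypothetical sketch: deriving the sentence-level auxiliary targets the
# abstract describes. For each entity type, the auxiliary task predicts
# whether the sentence contains at least one entity of that type, so the
# target is a multi-label 0/1 vector obtained from the BIO tag sequence.

ENTITY_TYPES = ["PER", "LOC", "ORG", "MISC"]  # assumed CoNLL-style type set

def sentence_type_targets(bio_tags):
    """Map a BIO tag sequence to a 0/1 vector over ENTITY_TYPES."""
    present = {tag.split("-", 1)[1] for tag in bio_tags if "-" in tag}
    return [1 if t in present else 0 for t in ENTITY_TYPES]

tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]
print(sentence_type_targets(tags))  # [1, 1, 0, 0]
```

In a joint model of this kind, such targets would typically supervise a sentence-level classifier trained alongside the CRF objective, e.g. as a weighted sum of the two losses.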

Original language: English
Article number: 9388880
Pages (from-to): 1438-1448
Number of pages: 11
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 29
DOIs
State: Published - 2021
Externally published: Yes

Keywords

  • Joint model
  • learning with auxiliary tasks
  • multi-task learning
  • named entity recognition

