
ExpMRC: explainability evaluation for machine reading comprehension

Research output: Contribution to journal › Article › peer-review

Abstract

Achieving human-level performance on some Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs). However, it is necessary to provide both the answer prediction and its explanation to further improve the reliability of MRC systems, especially for real-life applications. In this paper, we propose a new benchmark called ExpMRC for evaluating the textual explainability of MRC systems. ExpMRC contains four subsets, including SQuAD, CMRC 2018, RACE+, and C3, with additional annotations of the answer's evidence. The MRC systems are required to give not only the correct answer but also its explanation. We use state-of-the-art PLMs to build baseline systems and adopt various unsupervised approaches to extract both answer and evidence spans without relying on human-annotated evidence. The experimental results show that these models are still far from human performance, suggesting that ExpMRC is challenging. Resources (data and baselines) are available through https://github.com/ymcui/expmrc.
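For orientation, the sketch below shows how a span-overlap score of the kind commonly used in SQuAD-style MRC evaluation (token-level F1) can be computed for both an answer span and an evidence span. This is a minimal illustrative example, not the official ExpMRC evaluation script; the example strings are invented and the function name is an assumption.

    from collections import Counter

    def token_f1(prediction: str, reference: str) -> float:
        """Token-level F1 overlap between a predicted span and a reference span."""
        pred_tokens = prediction.split()
        ref_tokens = reference.split()
        common = Counter(pred_tokens) & Counter(ref_tokens)
        num_same = sum(common.values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)

    # A system is scored on its answer span and on the evidence span it extracts.
    answer_f1 = token_f1("the town hall", "town hall")
    evidence_f1 = token_f1(
        "The meeting was held in the town hall on Friday.",
        "The meeting took place in the town hall.",
    )
    print(f"answer F1 = {answer_f1:.2f}, evidence F1 = {evidence_f1:.2f}")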

Original language: English
Article number: e09290
Journal: Heliyon
Volume: 8
Issue number: 4
DOIs
State: Published - Apr 2022

Keywords

  • Explainable artificial intelligence
  • Machine reading comprehension
  • Natural language processing

