TY - GEN
T1 - Enhancing Numerical Reasoning with the Guidance of Reliable Reasoning Processes
AU - Wang, Dingzirui
AU - Dou, Longxu
AU - Zhang, Xuanliang
AU - Zhu, Qingfu
AU - Che, Wanxiang
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - Numerical reasoning is an essential ability for NLP systems to handle numeric information. Recent research indicates that fine-tuning a small-scale model to generate reasoning processes alongside answers can significantly enhance performance. However, most current methods generate reasoning processes with large language models (LLMs), which are “unreliable” since such processes could contain information unrelated to the answer. To address this limitation, we introduce Enhancing NumeriCal reasOning with Reliable procEsses (ENCORE), which derives a reliable reasoning process by decomposing the answer formula, ensuring that the process fully supports the answer. Nevertheless, models could lack enough data to adequately learn reasoning process generation, since our method produces only a single reasoning process per formula. To overcome this difficulty, we present a series of pre-training tasks that help models learn reasoning process generation from synthesized data. Experiments show that ENCORE yields an average improvement of 1.8% across all five experimental datasets, proving the effectiveness of our method.
AB - Numerical reasoning is an essential ability for NLP systems to handle numeric information. Recent research indicates that fine-tuning a small-scale model to generate reasoning processes alongside answers can significantly enhance performance. However, most current methods generate reasoning processes with large language models (LLMs), which are “unreliable” since such processes could contain information unrelated to the answer. To address this limitation, we introduce Enhancing NumeriCal reasOning with Reliable procEsses (ENCORE), which derives a reliable reasoning process by decomposing the answer formula, ensuring that the process fully supports the answer. Nevertheless, models could lack enough data to adequately learn reasoning process generation, since our method produces only a single reasoning process per formula. To overcome this difficulty, we present a series of pre-training tasks that help models learn reasoning process generation from synthesized data. Experiments show that ENCORE yields an average improvement of 1.8% across all five experimental datasets, proving the effectiveness of our method.
UR - https://www.scopus.com/pages/publications/85204449673
U2 - 10.18653/v1/2024.acl-long.582
DO - 10.18653/v1/2024.acl-long.582
M3 - Conference contribution
AN - SCOPUS:85204449673
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 10812
EP - 10828
BT - Long Papers
A2 - Ku, Lun-Wei
A2 - Martins, Andre F. T.
A2 - Srikumar, Vivek
PB - Association for Computational Linguistics (ACL)
T2 - 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Y2 - 11 August 2024 through 16 August 2024
ER -