
Nudges affect the perceived trustworthiness of algorithmic recommendations in public services: explaining by learning costs

  • Yuan Sun
  • Jianing Mi
  • Luning Liu* (*Corresponding author for this work)
  • School of Management, Harbin Institute of Technology

Research output: Contribution to journal › Article › peer-review

Abstract

Purpose – This study aims to identify the most effective explanatory strategies for building public trust in government-use AI-based algorithmic recommendations.

Design/methodology/approach – By comparing salient explanations and norm-based explanations across different age groups, we analyzed how these explanatory strategies reduce learning costs and enhance cognitive trust in the algorithm.

Findings – The study finds that both salient and norm-based explanations can reduce learning costs and enhance users’ cognitive trust in algorithms; however, norm-based explanations are particularly effective for younger users. Additionally, the study finds no significant interaction between the two types of explanations. Importantly, effective explanations can enhance both cognitive trust in the algorithm and affective trust in the government.

Originality/value – This research suggests that “nudges” in explanations can enhance citizens’ trust in algorithmic public services, which is significant for increasing acceptance of these algorithms.

Original language: English
Pages (from-to): 1-22
Number of pages: 22
Journal: Information Technology and People
State: Accepted/In press - 2026
Externally published: Yes

Keywords

  • Algorithmic public service
  • Artificial intelligence
  • Learning costs
  • Nudges
  • Recommendation agent
