Abstract
Purpose – This study aims to identify the most effective explanatory strategies for building public trust in government-use AI-based algorithmic recommendations.

Design/methodology/approach – By comparing salient explanations and norm-based explanations across different age groups, we analyzed how these explanatory strategies reduce learning costs and enhance cognitive trust in the algorithm.

Findings – The study finds that both salient and norm-based explanations can reduce learning costs and enhance users’ cognitive trust in algorithms; however, norm-based explanations are particularly effective for younger users. Additionally, the study finds no significant interaction between the two types of explanations. Importantly, effective explanations can enhance both cognitive trust in the algorithm and affective trust in the government.

Originality/value – This research suggests that “nudges” in explanations can enhance citizens’ trust in algorithmic public services, which is significant for increasing acceptance of these algorithms.
| Original language | English |
|---|---|
| Pages (from-to) | 1-22 |
| Number of pages | 22 |
| Journal | Information Technology and People |
| DOIs | |
| State | Accepted/In press - 2026 |
| Externally published | Yes |
Keywords
- Algorithmic public service
- Artificial intelligence
- Learning costs
- Nudges
- Recommendation agent
Fingerprint
Dive into the research topics of 'Nudges affect the perceived trustworthiness of algorithmic recommendations in public services: explaining by learning costs'.