
Interpretable local flow attention for multi-step traffic flow prediction

  • Xu Huang
  • Bowen Zhang
  • Shanshan Feng
  • Yunming Ye*
  • Xutao Li
  • *Corresponding author for this work
  • School of Computer Science and Technology, Harbin Institute of Technology
  • Shenzhen Technology University
  • Peng Cheng Laboratory

Research output: Contribution to journal › Article › peer-review

Abstract

Traffic flow prediction (TFP) has attracted increasing attention with the development of smart cities. In the past few years, neural network-based methods have shown impressive performance for TFP. However, most previous studies fail to explicitly and effectively model the relationship between inflows and outflows. Consequently, these methods are usually uninterpretable and inaccurate. In this paper, we propose an interpretable local flow attention (LFA) mechanism for TFP, which yields three advantages. (1) LFA is flow-aware. Unlike existing works, which blend inflows and outflows in the channel dimension, we explicitly exploit the correlations between flows with a novel attention mechanism. (2) LFA is interpretable. It is formulated from the truisms of traffic flow, and the learned attention weights can well explain the flow correlations. (3) LFA is efficient. Instead of using global spatial attention as in previous studies, LFA operates in a local mode: the attention query is performed only on the locally related regions. This not only reduces computational cost but also avoids false attention. Based on LFA, we further develop a novel spatiotemporal cell, named LFA-ConvLSTM (LFA-based convolutional long short-term memory), to capture the complex dynamics in traffic data. Specifically, LFA-ConvLSTM consists of three parts. (1) A ConvLSTM module learns flow-specific features. (2) An LFA module models the correlations between flows. (3) A feature aggregation module fuses the two to obtain a comprehensive feature. Extensive experiments on two real-world datasets show that our method achieves better prediction performance, improving RMSE by 3.2%–4.6% and MAPE by 6.2%–6.7%. LFA-ConvLSTM is also almost 32% faster than global self-attention ConvLSTM in terms of prediction time. Furthermore, we present visual results to analyze the learned flow correlations.
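The abstract describes LFA-ConvLSTM only at a structural level (a ConvLSTM branch, a local cross-flow attention branch, and a fusion step). The sketch below is a minimal, illustrative PyTorch approximation of how such a cell could be wired; the module names, the 3x3 attention neighbourhood, the single-head scaled dot-product attention, and the 1x1-convolution fusion are assumptions made for exposition, not the authors' implementation.

```python
# Illustrative sketch only; hyperparameters and fusion scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Standard ConvLSTM cell: learns flow-specific spatiotemporal features."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class LocalFlowAttention(nn.Module):
    """Local attention between flows: each query region attends only to its
    k x k neighbourhood in the other flow's feature map."""

    def __init__(self, ch, k=3):
        super().__init__()
        self.k = k
        self.q = nn.Conv2d(ch, ch, 1)
        self.kv = nn.Conv2d(ch, 2 * ch, 1)

    def forward(self, query_feat, context_feat):
        B, C, H, W = query_feat.shape
        q = self.q(query_feat).permute(0, 2, 3, 1).reshape(B * H * W, 1, C)
        k, v = torch.chunk(self.kv(context_feat), 2, dim=1)

        def unfold_local(t):
            # (B, C, H, W) -> (B*H*W, k*k, C): the k*k neighbours per position
            u = F.unfold(t, self.k, padding=self.k // 2)
            u = u.reshape(B, C, self.k * self.k, H * W)
            return u.permute(0, 3, 2, 1).reshape(B * H * W, self.k * self.k, C)

        attn = torch.softmax(q @ unfold_local(k).transpose(1, 2) / C ** 0.5, dim=-1)
        out = (attn @ unfold_local(v)).reshape(B, H, W, C).permute(0, 3, 1, 2)
        return out


class LFAConvLSTMCell(nn.Module):
    """Fuses a ConvLSTM branch (flow-specific features) with a local-attention
    branch (cross-flow correlations) via a 1x1 convolution."""

    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.convlstm = ConvLSTMCell(in_ch, hid_ch)
        self.lfa = LocalFlowAttention(hid_ch)
        self.fuse = nn.Conv2d(2 * hid_ch, hid_ch, 1)

    def forward(self, x, h, c, h_other_flow):
        h, c = self.convlstm(x, h, c)
        cross = self.lfa(h, h_other_flow)          # attend to the other flow
        h_fused = self.fuse(torch.cat([h, cross], dim=1))
        return h_fused, c


# Toy usage on a 32x32 grid; the outflow branch would run a symmetric cell.
if __name__ == "__main__":
    B, H, W = 2, 32, 32
    cell = LFAConvLSTMCell(in_ch=1, hid_ch=8)
    x = torch.randn(B, 1, H, W)                    # inflow map at time t
    h = c = torch.zeros(B, 8, H, W)
    h_other = torch.randn(B, 8, H, W)              # outflow branch hidden state
    h_new, c_new = cell(x, h, c, h_other)
    print(h_new.shape)                             # torch.Size([2, 8, 32, 32])
```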

Original language: English
Pages (from-to): 25-38
Number of pages: 14
Journal: Neural Networks
Volume: 161
DOIs
State: Published - Apr 2023
Externally published: Yes

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 11 - Sustainable Cities and Communities

Keywords

  • Attention mechanism
  • Explainable artificial intelligence
  • Neural networks
  • Traffic flow prediction

