Abstract
Deep neural networks are driving the iterative advancement of magnetoencephalography (MEG) decoding models. While explainable artificial intelligence, particularly traditional post-hoc feature attribution, has made significant progress in interpreting the prediction mechanisms of individual models, a critical gap remains in understanding the differences in decision logic between models, known as model differencing. By facilitating model selection, optimization updates, and practical applications such as error pattern analysis and decision fusion, model differencing holds significant research and application potential. However, existing approaches face fundamental limitations: insufficient accuracy in differencing measurements, which often rely on binary simplification, and deficient localization of differencing decision boundaries, which is typically constrained to low-dimensional spaces. To address these challenges, we propose a rule-based MEG model differencing approach called boundary-optimized rules via predict-probability difference (BO-RPPD). Key innovations include (1) a novel measurement based on predict-probability differences between dual models, enabling the direct learning of differencing rules, and (2) integrated counterfactual generation and feature reduction to guide the exploration of optimal predict-probability difference boundaries, especially within low-sample, high-dimensional MEG feature spaces. Experiments on two MEG datasets demonstrate the overall superiority of our proposed approach: prediction performance significantly outperforms all benchmarks, achieving up to a 24% improvement in F1-score while covering a broader range of samples. The number of generated rules matches the best benchmark, ensuring strong explainability. The approach shows practical value in error pattern analysis and decision fusion. Being model-agnostic, it also generalizes effectively to electroencephalography (EEG) and structured datasets.
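The core idea of predict-probability-difference-based model differencing can be sketched as follows. This is an illustrative toy example, not the paper's BO-RPPD implementation: it measures per-sample predict-probability differences between two hypothetical decoding models and fits a shallow surrogate decision tree whose paths act as crude, human-readable rules localizing where the models disagree (the paper's counterfactual generation and feature reduction steps are omitted).

```python
# Toy sketch of predict-probability-difference model differencing.
# NOT the authors' BO-RPPD method: a simplified stand-in for the idea of
# learning rules directly over the dual-model probability difference.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic stand-in for MEG features (real MEG data is low-sample,
# high-dimensional; 400 x 20 is used here only for illustration).
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Two decoding models whose decision logic we want to compare.
model_a = LogisticRegression(max_iter=1000).fit(X, y)
model_b = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predict-probability difference for the positive class, per sample.
diff = np.abs(model_a.predict_proba(X)[:, 1] - model_b.predict_proba(X)[:, 1])

# Learn readable rules approximating the difference surface; each tree
# path is a (crude) differencing decision boundary in feature space.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, diff)
rules = export_text(surrogate, feature_names=[f"f{i}" for i in range(20)])
print(rules)
```

Regions of the tree with a high predicted difference mark feature-space pockets where the two models diverge, which is exactly where error pattern analysis or decision fusion would focus.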
| Original language | English |
|---|---|
| Article number | e70230 |
| Journal | Annals of the New York Academy of Sciences |
| Volume | 1557 |
| Issue number | 1 |
| DOIs | |
| State | Published - Mar 2026 |
| Externally published | Yes |
Keywords
- counterfactual generation
- decision rule
- explainability
- magnetoencephalography
- model differencing
Title: Explainable Model Differencing for MEG Decoding via Predict-Probability Differences and Boundary-Optimized Rules