
Rethinking Multi-Focus Image Fusion: An Input Space Optimization View

  • Zeyu Wang
  • Shuang Yu*
  • Haoran Duan
  • Shidong Wang
  • Yang Long
  • Ling Shao

*Corresponding author for this work

Affiliations:

  • Dalian Minzu University
  • Faculty of Computing, Harbin Institute of Technology
  • College of Computer Science and Technology
  • Tsinghua University
  • Newcastle University
  • Durham University
  • University of Chinese Academy of Sciences

Research output: Contribution to journal › Article › peer-review

Abstract

Multi-focus image fusion (MFIF) addresses the challenge of partial focus by integrating multiple source images taken at different focal depths. Unlike most existing methods that rely on complex loss functions or large-scale synthetic datasets, this study approaches MFIF from a novel perspective: optimizing the input space. The core idea is to construct a high-quality MFIF input space in a cost-effective manner by using intermediate features from well-trained, non-MFIF networks. To this end, we propose a cascaded framework comprising two feature extractors, a Feature Distillation and Fusion Module (FDFM), and a focus segmentation network, Y^U Net. Based on our observation that discrepancy and edge features are essential for MFIF, we select an image deblurring network and a salient object detection network as feature extractors. To transform these extracted features into an MFIF-suitable input space, we propose FDFM as a training-free feature adapter. To make FDFM compatible with high-dimensional feature maps, we extend the manifold theory from the edge-preserving field and design a novel isometric domain transformation. Extensive experiments on six benchmark datasets show that 1) our model consistently outperforms 13 state-of-the-art methods in both qualitative and quantitative evaluations, and 2) the constructed input space can directly enhance the performance of many MFIF models without additional requirements.
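The cascaded pipeline the abstract describes (feature extraction → training-free fusion decision → per-pixel selection) can be sketched in miniature. This is a minimal illustration only: the gradient-magnitude "extractor", the 5×5 local-activity comparison standing in for FDFM, and all function names are assumptions for illustration, not the paper's actual deblurring/saliency networks or module.

```python
import numpy as np

def edge_features(img):
    # Gradient magnitude as a toy stand-in for the pretrained
    # feature extractors described in the abstract (assumption).
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fdfm_decision(feat_a, feat_b, k=5):
    # Training-free decision: compare local feature activity in a
    # k x k window (a crude stand-in for the paper's FDFM adapter).
    pad = k // 2
    def local_sum(f):
        fp = np.pad(f, pad, mode="edge")
        out = np.zeros_like(f)
        for dy in range(k):
            for dx in range(k):
                out += fp[dy:dy + f.shape[0], dx:dx + f.shape[1]]
        return out
    # True where source A appears more in-focus than source B.
    return local_sum(feat_a) >= local_sum(feat_b)

def fuse(img_a, img_b):
    # Per-pixel selection from whichever source is locally sharper.
    mask = fdfm_decision(edge_features(img_a), edge_features(img_b))
    return np.where(mask, img_a, img_b)
```

In the actual framework the decision map comes from a learned focus-segmentation network operating on the FDFM-adapted input space; the point of the sketch is only the cascade structure, in which the fusion decision is derived from features of non-MFIF origin rather than trained end-to-end on the raw images.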

Original language: English
Pages (from-to): 1321-1336
Number of pages: 16
Journal: IEEE Transactions on Image Processing
Volume: 35
DOIs
State: Published - 2026
Externally published: Yes

Keywords

  • Multi-focus image fusion
  • edge preservation
  • feature extraction
  • input space optimisation
  • neural network
