Abstract
Medical image analysis often suffers from insufficient data annotation. Pre-trained models can improve task performance when fine-tuned on medical data. Due to domain differences, only a subset of the pre-trained parameters is expected to matter for medical tasks. Current fine-tuning methods do not address vital questions such as "how many parameters should be fine-tuned" and "which group of parameters should be fine-tuned". In this paper, we define the parameters whose removal incurs a large loss increase in the downstream task model as fundamental parameters. We observe that roughly 10% of pre-trained parameters are fundamental across various medical applications, and that these parameters dominate the optimal convergence of the pre-trained model. Based on this, we introduce a divide-and-conquer method for optimal adaptation of pre-trained models to medical tasks. We apply a penalty factor to the gradients of fundamental parameters to control their updates, and fine-tune the remaining parameters normally to fully adapt them to the downstream tasks. Our method yields clear performance improvements over full-parameter and other state-of-the-art fine-tuning methods on multiple medical tasks, offering new perspectives on pre-trained model adaptation.
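The abstract's pipeline can be summarized in three steps: score each pre-trained parameter by the loss increase its removal causes, mark the top ~10% as fundamental, and scale (penalize) the gradients of those parameters during fine-tuning while updating the rest normally. A minimal, self-contained sketch of this idea follows; the toy loss, the `penalty` and `fraction` values, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of divide-and-conquer fine-tuning, per the abstract:
# (1) rank parameters by the loss increase their removal (zeroing) causes,
# (2) mark the top `fraction` as fundamental (~10% in the paper's observation),
# (3) apply a penalty factor to fundamental-parameter gradients; update the
#     remaining parameters with the normal gradient step.
# The quadratic loss below is a stand-in for the downstream-task loss.

def loss(params, targets):
    """Toy downstream-task loss: squared error to per-parameter targets."""
    return sum((p - t) ** 2 for p, t in zip(params, targets))

def fundamental_mask(params, targets, fraction=0.1):
    """Mark the top-`fraction` parameters whose removal most increases the loss."""
    base = loss(params, targets)
    increases = []
    for i in range(len(params)):
        ablated = list(params)
        ablated[i] = 0.0  # "remove" parameter i
        increases.append(loss(ablated, targets) - base)
    k = max(1, int(fraction * len(params)))
    top = set(sorted(range(len(params)),
                     key=lambda i: increases[i], reverse=True)[:k])
    return [i in top for i in range(len(params))]

def finetune_step(params, grads, mask, lr=0.1, penalty=0.1):
    """One update: penalized step for fundamental params, normal step otherwise."""
    return [p - lr * (penalty * g if m else g)
            for p, g, m in zip(params, grads, mask)]
```

For example, with parameters `[3.0, 0.1, ..., 0.1]` and targets equal to the current parameters (zero base loss), zeroing the first parameter increases the loss far more than zeroing any other, so it alone is marked fundamental; a subsequent `finetune_step` then moves it by only `lr * penalty * grad` while the others take full-size steps.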
| Original language | English |
|---|---|
| Article number | 112949 |
| Journal | Pattern Recognition |
| Volume | 174 |
| DOIs | |
| State | Published - Jun 2026 |
| Externally published | Yes |
Keywords
- Back-propagation
- Fine-tuning
- Model adaptation
- Pre-trained model