MedicoSAM: Robust Improvement of SAM for Medical Imaging

Authors

Archit A, Freckmann L, Pape C

Journal

IEEE Transactions on Medical Imaging

Citation

IEEE Trans Med Imaging. 2025 Dec 17;PP.

Abstract

Medical image segmentation is an important analysis task in clinical practice and research. Deep learning has massively advanced the field, but current approaches are mostly based on models trained for a specific task. Training such models or adapting them to a new condition is costly due to the need for labeled data. The emergence of vision foundation models, especially the Segment Anything Model (SAM), offers a path to universal segmentation for medical images, overcoming these issues. Here, we study how to improve SAM for medical images by comparing different finetuning strategies on a large and diverse dataset. We evaluate the finetuned models on a wide range of interactive and automatic semantic segmentation tasks. We find that performance clearly improves given the correct choice of finetuning strategy. This improvement is especially pronounced for interactive segmentation. Semantic segmentation also benefits, but the advantage over traditional segmentation approaches is inconsistent. Our best model, MedicoSAM, is publicly available. We show that it is compatible with existing tools for data annotation and believe that it will be of great practical value.
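
Since MedicoSAM is a finetuned SAM and the abstract states it is compatible with existing annotation tools, a checkpoint of this kind could in principle be loaded through Meta's standard segment_anything package for point-prompted interactive segmentation. The sketch below illustrates this; the checkpoint filename, the ViT-B backbone, and the placeholder image are assumptions for illustration, not details taken from the paper.

    # Minimal sketch: point-prompted interactive segmentation with a
    # finetuned SAM checkpoint via Meta's segment_anything package.
    # The filename "medicosam_vit_b.pth" and the "vit_b" backbone are
    # hypothetical; consult the MedicoSAM release for actual weights.
    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    # Load the finetuned weights into the standard SAM architecture.
    sam = sam_model_registry["vit_b"](checkpoint="medicosam_vit_b.pth")
    predictor = SamPredictor(sam)

    # SAM expects an RGB uint8 array of shape (H, W, 3); a grayscale
    # medical image can be stacked to three channels beforehand.
    image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder input
    predictor.set_image(image)

    # One positive point prompt (label 1) on the structure of interest.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[256, 256]]),
        point_labels=np.array([1]),
        multimask_output=True,  # return three candidate masks
    )
    best_mask = masks[np.argmax(scores)]  # keep the highest-scoring mask

Because the finetuned weights keep the original SAM architecture, the same predictor also accepts box prompts and additional positive or negative points, which is what makes the model a drop-in replacement in existing annotation tools.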

DOI

10.1109/TMI.2025.3644811
 