Adrien Meyer, Aditya Murali, Didier Mutter, Nicolas Padoy
Install
You may need to install a specific version of PyTorch, depending on your hardware (see the example below). Create a conda environment and activate it:
conda create --name UltraSam python=3.8 -y
conda activate UltraSam
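If you need a specific PyTorch build, install it now, before the OpenMMLab packages. A minimal sketch, assuming a CUDA 11.8 setup (adjust the index URL to your CUDA version, or use the CPU wheels):
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118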
Install the OpenMMLab suite and other dependencies:
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
mim install mmdet
mim install mmpretrain
If you wish to process the datasets:
pip install SimpleITK
pip install scikit-image
pip install scipy
The pre-trained UltraSam model checkpoint is available at this link.
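Once downloaded, the checkpoint can be passed directly as the --checkpoint argument of the test commands shown further below; for example (the checkpoint path here is a placeholder):
mim test mmdet configs/UltraSAM/UltraSAM_full/UltraSAM_point_refine.py --checkpoint /path/to/UltraSam.pth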
To train or test, you will need a coco.json annotation file. Create a symbolic link to it, or modify the config files to point to your annotation file.
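For example, the link can be created with ln -s; both paths below are placeholders, and the link name should match whatever path your config files expect:
ln -s /path/to/your/annotations.coco.json ./coco.json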
To train from scratch, you can use the code in weights to download and convert the SAM, MedSAM, and adapter weights.
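The invocation will look roughly like the following; the script names below are hypothetical, so check the weights directory for the actual entry points:
# hypothetical script names -- see the weights directory for the actual ones
python weights/download_weights.py
python weights/convert_sam_to_mmdet.py --checkpoint sam_vit_b.pth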
Locally, inside the UltraSam repo:
export PYTHONPATH=$PYTHONPATH:.
mim train mmdet configs/UltraSAM/UltraSAM_full/UltraSAM_point_refine.py --gpus 4 --launcher pytorch --work-dir ./work_dirs/UltraSam
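# evaluate the trained model with point prompts and with box prompts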
mim test mmdet configs/UltraSAM/UltraSAM_full/UltraSAM_point_refine.py --checkpoint ./work_dirs/UltraSam/iter_30000.pth
mim test mmdet configs/UltraSAM/UltraSAM_full/UltraSAM_box_refine.py --checkpoint ./work_dirs/UltraSam/iter_30000.pth
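# downstream classification on BUS-BRA with different backbones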
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/resnet50.py \
--work-dir ./work_dirs/classification/BUSBRA/resnet
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/MedSAM.py \
--work-dir ./work_dirs/classification/BUSBRA/MedSam
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/SAM.py \
--work-dir ./work_dirs/classification/BUSBRA/Sam
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/UltraSam.py \
--work-dir ./work_dirs/classification/BUSBRA/UltraSam
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/ViT.py \
--work-dir ./work_dirs/classification/BUSBRA/ViT
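# downstream segmentation on BUS-BRA with different backbones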
mim train mmdet configs/UltraSAM/UltraSAM_full/downstream/segmentation/BUSBRA/resnet.py \
--work-dir ./work_dirs/segmentation/BUSBRA/resnet
mim train mmdet configs/UltraSAM/UltraSAM_full/downstream/segmentation/BUSBRA/UltraSam.py \
--work-dir ./work_dirs/segmentation/BUSBRA/UltraSam_3000
mim train mmdet configs/UltraSAM/UltraSAM_full/downstream/segmentation/BUSBRA/SAM.py \
--work-dir ./work_dirs/segmentation/BUSBRA/SAM
mim train mmdet configs/UltraSAM/UltraSAM_full/downstream/segmentation/BUSBRA/MedSAM.py \
--work-dir ./work_dirs/segmentation/BUSBRA/MedSAM
Ultrasound imaging presents a substantial domain gap compared to other medical imaging modalities; building an ultrasound-specific foundation model therefore requires a specialized large-scale dataset. To build such a dataset, we crawled a multitude of platforms for ultrasound data. We arrived at US-43d, a collection of 43 datasets covering 20 different clinical applications, containing over 280,000 annotated segmentation masks from both 2D and 3D scans.
US-43d datasets
Dataset | Link |
---|---|
105US | researchgate |
AbdomenUS | kaggle |
ACOUSLIC | grand-challenge |
ASUS | onedrive |
AUL | zenodo |
brachial plexus | github |
BrEaST | cancer imaging archive |
BUID | qamebi |
BUS_UC | mendeley |
BUS_UCML | mendeley |
BUS-BRA | github |
BUS (Dataset B) | mmu |
BUSI | HomePage |
CAMUS | insa-lyon |
CardiacUDC | kaggle |
CCAUI | mendeley |
DDTI | github |
EchoCP | kaggle |
EchoNet-Dynamic | github |
EchoNet-Pediatric | github |
FALLMUD | kalisteo |
FASS | mendeley |
Fast-U-Net | github |
FH-PS-AOP | zenodo |
GIST514-DB | github |
HC | grand-challenge |
kidneyUS | github |
LUSS_phantom | Leeds |
MicroSeg | zenodo |
MMOTU-2D | github |
MMOTU-3D | github |
MUP | zenodo |
regPro | HomePage |
S1 | ncbi |
Segthy | TUM |
STMUS_NDA | mendeley |
STU-Hospital | github |
TG3K | github |
Thyroid US Cineclip | stanford |
TN3K | github |
TNSCUI | grand-challenge |
UPBD | HomePage |
US nerve Segmentation | kaggle |
Once you have downloaded the datasets:
Run each converter in datasets/datasets
# run coco converters
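# a minimal sketch, assuming one converter script per dataset under datasets/datasets;
# the glob below is hypothetical -- run the actual converter scripts shipped in that directory
for converter in datasets/datasets/*.py; do python "$converter"; done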
# then preprocessing
python datasets/tools/merge_subdir_coco.py
python datasets/tools/split_coco.py
python datasets/tools/create_agnostic_coco.py path_to_datas_root --mode train
python datasets/tools/create_agnostic_coco.py path_to_datas_root --mode val
python datasets/tools/create_agnostic_coco.py path_to_datas_root --mode test
python datasets/tools/merge_agnostic_coco.py path_to_datas_root path_to_datas_root/train.agnostic.noSmall.coco.json --mode train
python datasets/tools/merge_agnostic_coco.py path_to_datas_root path_to_datas_root/val.agnostic.noSmall.coco.json --mode val
python datasets/tools/merge_agnostic_coco.py path_to_datas_root path_to_datas_root/test.agnostic.noSmall.coco.json --mode test
If you find our work helpful for your research, please consider citing us using the following BibTeX entry:
@article{meyer2024ultrasam,
  title={UltraSam: A Foundation Model for Ultrasound using Large Open-Access Segmentation Datasets},
  author={Meyer, Adrien and Murali, Aditya and Mutter, Didier and Padoy, Nicolas},
  journal={arXiv preprint arXiv:2411.16222},
  year={2024}
}