ImageNet-RIB Benchmark: Large Pre-Training Datasets Don't Guarantee Robustness after Fine-Tuning

TL;DR: We demonstrate that when the fine-tuning dataset is small, models pretrained on larger datasets can end up less robust after fine-tuning than models pretrained on smaller datasets. We analyze this phenomenon using the proposed ImageNet-RIB benchmark.

Pipeline

Download Dataset

Please refer to the download instructions for each dataset.
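
The code expects the downloaded datasets under the directory passed via --root (datasets/ in the command below). As a rough sketch of the layout, assuming hypothetical directory names that are illustrative rather than prescribed by the repository:

datasets/
├── imagenet-r/
└── ... (the other benchmark datasets)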

Running

python3 main.py --root datasets/ --batch_size 64 --epochs 10 \
  --arch $arch --patch_size $patch_size --d_pre $pretrained --model $model \
  --regularization $reg --dataset $dataset --no_split --use_wandb
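
The shell variables above must be set before running. The values below are hypothetical placeholders for illustration only; consult main.py (its argument parser) for the choices the code actually accepts:

# Hypothetical example values; check main.py for the accepted options.
arch=vit_base        # --arch
patch_size=16        # --patch_size
pretrained=laion2b   # --d_pre (pre-training dataset)
model=clip           # --model
reg=lwf              # --regularization
dataset=imagenet-r   # --dataset (fine-tuning dataset)

With these set, the command above runs as written.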

Results

Radar chart: OOD accuracy (robustness) of a ViT-B/16 model pretrained on two different datasets (LAION-2B, IN-21K), before and after fine-tuning on ImageNet-R.

Bar chart: Severe robustness loss from fine-tuning models pretrained on LAION-2B and OpenAI, relative to fine-tuning models pretrained only on smaller datasets.

Acknowledgement

This repository is based on code from multiple other repositories.

Citation

If you use this code for your research, please cite our paper.

@inproceedings{hwang2024imagenet,
  title={ImageNet-RIB Benchmark: Large Pre-Training Datasets Don't Guarantee Robustness after Fine-Tuning},
  author={Hwang, Jaedong and Cheung, Brian and Hong, Zhang-Wei and Boopathy, Akhilan and Agrawal, Pulkit and Fiete, Ila R},
  booktitle={NeurIPSW on Fine-Tuning in Modern Machine Learning: Principles and Scalability},
  year={2024}
}
