
🧠 MindLLM: A Subject-Agnostic and Versatile Model for fMRI-to-Text Decoding

This is the official implementation of the ICML 2025 paper MindLLM: A Subject-Agnostic and Versatile Model for fMRI-to-Text Decoding.

(Main figure)

Installation

conda create -n mindllm python=3.10 -y
conda activate mindllm
pip install -r requirements.txt
pip install flash-attn==2.5.6 --no-build-isolation

# Java should be installed for some metrics

Dataset

Make sure you have git-lfs installed. The dataset is hosted on Hugging Face.
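
As a minimal sketch (the repository path below is a placeholder, not the actual dataset id; follow the Hugging Face link above), the dataset can be cloned with git-lfs:

git lfs install
# Placeholder repository id -- substitute the dataset repository linked above
git clone https://huggingface.co/datasets/<dataset-repo>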

Pretrained checkpoints

Model weights can be downloaded from Hugging Face.
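
As a sketch (the repository id below is a placeholder; use the model repository linked above), the weights can be fetched with the huggingface_hub CLI:

# Placeholder repository id -- substitute the model repository linked above
# The finetuning commands below expect the downloaded mindllm-base.ckpt file
huggingface-cli download <model-repo> --local-dir checkpoints/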

Usage

Pretraining

Pretrain MindLLM on a single subject (e.g., subject 1)

python main.py group_by_coco=false

Note that group_by_coco=true can only be used when all datasets are based on the current image; as soon as coco-caption-previous is included, it must be set to false.

Pretrain MindLLM on subjects 1-7

python main.py group_by_coco=false "subjects=[1,2,3,4,5,6,7]"

Finetuning

To finetune on downstream tasks (e.g., to reproduce the results in Table 2), run, for example:

# COCO QA
python main.py data.task=coco-qa data.split_val=true early_stop=true checkpoint=/path/to/mindllm-base.ckpt lr=1e-4

# A-OKVQA
python main.py data.batch_size=4 data.task=a-okvqa early_stop=true data.split_val=true checkpoint=mindllm-base.ckpt lr=5e-4
