Robust Multimodal Large Language Models Against Modality Conflict

Official repository for Robust Multimodal Large Language Models Against Modality Conflict.



🌟 Overview

This repository provides the code and dataset for our ICML 2025 paper:
Robust Multimodal Large Language Models Against Modality Conflict.


πŸ“¦ Multimodal Modality Conflict (MMMC) Dataset

The MMMC dataset is available on the Hugging Face Hub. You can easily download and use it as follows:

from datasets import load_dataset

# Download the MMMC dataset from the Hugging Face Hub (cached locally after the first run)
dataset = load_dataset("ustc-zhangzm/MMMC")
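Continuing from the snippet above, a minimal sketch for inspecting what was downloaded; the split and field names are whatever the dataset card defines, so nothing is assumed about them here:

# Show the available splits and their sizes
print(dataset)

# Print one raw example from the first split to inspect its fields
first_split = list(dataset.keys())[0]
print(dataset[first_split][0])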

Note: The dataset was generated by large language models and may contain some noise. We recommend using it for research purposes only.


πŸš€ Improving the Robustness of MLLMs

We provide code for supervised fine-tuning and reinforcement learning to enhance the robustness of Multimodal Large Language Models (MLLMs) under modality conflict scenarios.

  • Please follow the documentation for instructions on running the code.
  • Detailed explanations of these methods are available in our paper.
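For illustration only, the sketch below shows one way MMMC records could be reshaped into chat-style pairs for a generic supervised fine-tuning pipeline. This is not the repository's training code, and the field names "question" and "answer" are assumptions; check the dataset card for the actual schema before adapting it.

from datasets import load_dataset

def to_chat_example(record):
    # Map one record to the messages format most SFT trainers accept.
    # "question" and "answer" are hypothetical field names used for illustration.
    return {
        "messages": [
            {"role": "user", "content": record["question"]},     # assumed field
            {"role": "assistant", "content": record["answer"]},  # assumed field
        ],
    }

dataset = load_dataset("ustc-zhangzm/MMMC")
split = list(dataset.keys())[0]
sft_data = dataset[split].map(to_chat_example, remove_columns=dataset[split].column_names)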

πŸ“„ License

This dataset is distributed under the CC BY-SA 3.0 license.


πŸ“– Citation

If you find this work helpful for your research, please cite our paper:

@inproceedings{zhang2025robust,
    title={Robust Multimodal Large Language Models Against Modality Conflict},
    author={Zongmeng Zhang and Wengang Zhou and Jie Zhao and Houqiang Li},
    booktitle={Forty-second International Conference on Machine Learning},
    year={2025},
    url={https://openreview.net/forum?id=SP43jVv7fJ}
}
