Fix(gemma3_mm): Add robust weight loading for quantized VLM #20066
Conversation
Signed-off-by: Vensenmu <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.
Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Summary of Changes
Hello @Flink-ddd, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a critical fix for loading quantized Gemma-3 multi-modal models by revamping the weight loading process. It resolves a persistent KeyError by implementing a custom, intelligent weight mapping strategy that specifically targets and corrects naming mismatches within the vision model components, ensuring seamless model initialization and preventing regressions.
Highlights
- Robust Weight Loading: Implemented a custom, more robust weight loading mechanism within `Gemma3ForConditionalGeneration.load_weights` to definitively fix `KeyError` issues encountered when serving Gemma-3 multi-modal models quantized by `llm-compressor`.
- Targeted Vision Weight Fix: The new weight loading logic includes a 'surgically precise conditional check' that specifically addresses naming inconsistencies for vision model weights. It first applies standard mapping and then, if a vision weight is still not found, it intelligently re-prefixes it to resolve a 'double prefix' problem, ensuring correct loading (see the simplified illustration after this list).
- Architectural Improvement: Replaced the generic `AutoWeightsLoader` with imperative, fine-grained control directly within the `load_weights` method. This approach provides better precision, avoids unintended side effects on other model loading paths, and is a more architecturally sound solution.
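To make the naming mismatch concrete, here is a minimal sketch; the weight names are simplified assumptions for exposition, not values taken from the actual checkpoint.

```python
# Hypothetical illustration of the mismatch (names are assumptions): the
# checkpoint and vLLM's module tree disagree on the prefix in front of the
# vision weights.
checkpoint_name = "vision_model.encoder.layers.0.self_attn.q_proj.weight"
vllm_param_name = "vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.weight"

# Plain prefix stripping cannot turn the first name into the second, which is
# why the loader re-prefixes vision weights that are still missing from
# params_dict after the standard mapping has been applied.
assert vllm_param_name == "vision_tower." + checkpoint_name
```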
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Code Review
The pull request introduces a custom weight loading mechanism for Gemma3 VLM models to address weight naming inconsistencies, specifically a 'double prefix' issue within the vision model component. The solution involves a targeted fix within the Gemma3ForConditionalGeneration.load_weights method, applying a conditional check to remap only the problematic vision weights. The changes look good, and I've provided some suggestions for minor improvements.
```python
# Silently skip any weights that are still not found.
loaded_params.add(original_name)
continue
```
It's generally better to log a warning message when skipping weights rather than skipping them silently. This can aid in debugging when weights are unexpectedly skipped. Consider using `logger.warning`.
Suggested change:

```diff
-# Silently skip any weights that are still not found.
-loaded_params.add(original_name)
-continue
+if name not in params_dict:
+    logger.warning(f"Skipping weight {original_name} as it is not found in the model.")
+    loaded_params.add(original_name)
+    continue
```
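For context, the `logger` used in the suggestion is assumed to be the module-level logger that vLLM model files conventionally create; a minimal sketch of that setup:

```python
# Assumed module-level setup in gemma3_mm.py, following vLLM's usual pattern.
from vllm.logger import init_logger

logger = init_logger(__name__)
```

If the project's logging lint rules disallow f-strings in log calls, the same message can be emitted with printf-style arguments instead.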
```python
param = params_dict[name]
weight_loader = getattr(param, "weight_loader",
                        default_weight_loader)
```
Consider checking if param has the attribute weight_loader before calling getattr. This can prevent potential AttributeError exceptions if a parameter unexpectedly lacks this attribute.
Suggested change:

```diff
-param = params_dict[name]
-weight_loader = getattr(param, "weight_loader",
-                        default_weight_loader)
+param = params_dict[name]
+weight_loader = (getattr(param, "weight_loader", None) or
+                 default_weight_loader)
```
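The practical difference between the two forms only appears when a parameter carries a `weight_loader` attribute that is set to `None`. A standalone sketch, using a stand-in for `default_weight_loader`:

```python
import torch


def default_weight_loader(param: torch.Tensor, weight: torch.Tensor) -> None:
    # Stand-in for vLLM's default loader: a plain copy.
    param.data.copy_(weight)


param = torch.nn.Parameter(torch.zeros(4))
param.weight_loader = None  # attribute exists, but no custom loader was attached

# getattr only falls back to its default when the attribute is missing,
# so it returns None here and calling the result later would fail.
loader_a = getattr(param, "weight_loader", default_weight_loader)

# The `or` form also falls back when the attribute is present but falsy.
loader_b = getattr(param, "weight_loader", None) or default_weight_loader

assert loader_a is None
assert loader_b is default_weight_loader
```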
BTW, I wonder if the weight name issue is from Transformers v4.52, because we added extra model conversion in that release. I tried to reproduce this issue with llm-compressor locally yesterday but failed. Can you upload the problematic checkpoint so that I can test it locally?
Hi @Isotr0py, thank you for the quick follow-up and the very helpful insight about Transformers v4.52 – that's likely the key to this naming convention. I completely agree that testing with the original artifact is the best path forward; I also tried to reproduce the full issue on my end. To make this possible, I'll ask the original reporter for the checkpoint.

Hey @giangtapergo (#1546), would you be able to help us verify this fix? If you could upload your quantized Gemma-3 checkpoint to a new repository on the Hugging Face Hub and share the link here, it would be a massive help. That would allow us to test this PR directly against the exact artifact that causes the error and move this fix forward.

Thank you both for your collaboration on this!
Hi @Isotr0py, the original reporter, @giangtapergo, has very helpfully uploaded the problematic checkpoint to the Hugging Face Hub. The repository is: https://huggingface.co/ntgiang71096/gemma-3-4b-it-W4A16-G128-1000
Please let me know if there's anything else needed from my side.
I believe the root issue is not about weight loading, but how …
Hi @Isotr0py, thank you so much for the deep-dive and definitive diagnosis! This makes perfect sense and clarifies the entire situation. Based on your findings that the root cause lies outside the weight loading logic, I will now close this Pull Request.
Hi @Isotr0py,
This PR provides a definitive fix for the `KeyError` encountered when serving Gemma-3 multi-modal models quantized by `llm-compressor`. It is a follow-up to the reverted PR #19643 and addresses the regression reported in #1546 by @giangntapero, implementing a more robust and architecturally sound solution based on your invaluable feedback.
Summary of Findings
The investigation revealed a two-layer problem:
- The `KeyError` was caused by a weight name mismatch inside the `SiglipVisionModel` component.
- An initial fix in `siglip.py` resolved the `KeyError`, but following maintainer advice the change was moved to `gemma3_mm.py`.
- Updating the `gemma3_mm.py` loader via a simple `WeightsMapper` was architecturally sound but failed with a new `ValueError`, proving that a more nuanced, imperative logic was required.
The Final, Robust Solution
This PR implements the fix in the most appropriate location: the `Gemma3ForConditionalGeneration.load_weights` method in `gemma3_mm.py`.
Instead of a simple mapper, custom loading logic is now used. This logic is "intelligent":
- It first applies the standard `hf_to_vllm_mapper` for standard prefix stripping.
- If a vision weight is still not found, a surgically precise conditional check (`if name not in params_dict and name.startswith("vision_model."):`) remaps only the problematic vision weights.
This targeted approach ensures that the fix is only applied to the multi-modal use case, completely avoiding any side effects on other model loading paths and thus resolving the regression.
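A minimal, self-contained sketch of this loading loop is shown below. It is not the exact PR diff: `copy_weight` is a stand-in for vLLM's `default_weight_loader`, the re-prefixing rule (`"vision_tower."`) is an assumption about the model's module tree, and the standard `hf_to_vllm_mapper` step is only indicated by a comment.

```python
import torch
from typing import Dict, Iterable, Set, Tuple


def copy_weight(param: torch.nn.Parameter, weight: torch.Tensor) -> None:
    # Stand-in for vLLM's default_weight_loader: a plain copy.
    param.data.copy_(weight)


def load_weights_sketch(
    model: torch.nn.Module,
    weights: Iterable[Tuple[str, torch.Tensor]],
) -> Set[str]:
    """Illustrative version of the targeted remapping described above."""
    params_dict: Dict[str, torch.nn.Parameter] = dict(model.named_parameters())
    loaded_params: Set[str] = set()
    for original_name, loaded_weight in weights:
        # Step 1: standard HF -> vLLM renaming (done by hf_to_vllm_mapper in
        # the real loader); treated as a no-op in this sketch.
        name = original_name
        # Step 2: surgically precise fallback, applied only to vision weights
        # that the standard mapping could not resolve.
        if name not in params_dict and name.startswith("vision_model."):
            name = "vision_tower." + name  # assumed re-prefixing rule
        if name not in params_dict:
            # Still unknown: skip (ideally with a logger.warning, as suggested above).
            loaded_params.add(original_name)
            continue
        param = params_dict[name]
        weight_loader = getattr(param, "weight_loader", copy_weight)
        weight_loader(param, loaded_weight)
        loaded_params.add(original_name)
    return loaded_params
```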
Verification
The fix was definitively verified in a stable cloud GPU environment (RunPod A100) using a self-contained script.
- The script uses the original `google/gemma-3-4b-it` model and renames its vision weights in memory to simulate the exact naming convention that causes the bug.
- Result: the fix is confirmed to be working. The vLLM engine launched successfully, passing the weight loading phase without any `KeyError` or `ValueError`, and began initializing.
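The renaming step of such a script can be approximated in a few lines; the prefix handling below mirrors the sketch above and is an assumption, not the exact script that was run.

```python
import torch
from typing import Dict


def simulate_problematic_naming(
    state_dict: Dict[str, torch.Tensor],
) -> Dict[str, torch.Tensor]:
    """Rename vision weights in memory to mimic the checkpoint layout that
    triggers the KeyError (illustrative assumption, not the verified script)."""
    renamed: Dict[str, torch.Tensor] = {}
    for name, tensor in state_dict.items():
        if name.startswith("vision_tower."):
            name = name[len("vision_tower."):]  # drop the wrapper prefix
        renamed[name] = tensor
    return renamed
```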
See the screenshot of the successful `vllm serve` run from the test environment below.

Thank you for your guidance and patience in helping me navigate this complex issue. The PR is now ready for your review and the official CI run.