
Conversation

@Flink-ddd (Contributor) commented Jun 25, 2025

Hi @Isotr0py ,

This PR provides a definitive fix for the KeyError encountered when serving Gemma-3 multi-modal models quantized by llm-compressor.

This new PR is a follow-up to the reverted PR #19643 and addresses the regression reported in #1546 by @giangntapero, implementing a more robust and architecturally sound solution based on your invaluable feedback.

Summary of Findings

The investigation revealed a two-layer problem:

  1. The original KeyError was caused by a weight name mismatch inside the SiglipVisionModel component.
  2. My original fix, a simple patch to siglip.py, resolved the KeyError, but on the maintainer's advice the fix was relocated to gemma3_mm.py.
  3. An attempt to move the fix to the gemma3_mm.py loader via a simple WeightsMapper was architecturally sound but failed with a new ValueError, proving that a more nuanced, imperative logic was required.

The Final, Robust Solution

This PR implements the fix in the most appropriate location: the Gemma3ForConditionalGeneration.load_weights method in gemma3_mm.py.

Instead of a simple mapper, custom loading logic is now used. This logic is "intelligent":

  • It first applies the existing class-level hf_to_vllm_mapper for standard prefix stripping.
  • It then applies a surgically precise conditional check (if name not in params_dict and name.startswith("vision_model."):) to remap only the problematic vision weights.

This targeted approach ensures that the fix is only applied to the multi-modal use case, completely avoiding any side effects on other model loading paths and thus resolving the regression.
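
For illustration, here is a minimal, method-style sketch of the loading logic described above. It assumes vLLM's usual helpers (`WeightsMapper.apply`, `default_weight_loader`), and the `vision_tower.` re-prefix target is an assumption; this is not the exact diff from this PR.

```python
# Hedged sketch of the custom load_weights logic described above (not the PR diff).
# The "vision_tower." target prefix and helper usage are assumptions.
from vllm.model_executor.model_loader.weight_utils import default_weight_loader


def load_weights(self, weights):
    params_dict = dict(self.named_parameters())
    loaded_params = set()
    # Step 1: standard prefix stripping via the class-level hf_to_vllm_mapper.
    for name, loaded_weight in self.hf_to_vllm_mapper.apply(weights):
        original_name = name
        # Step 2: remap only unresolved vision weights (the "double prefix" case).
        if name not in params_dict and name.startswith("vision_model."):
            name = "vision_tower." + name  # illustrative target prefix
        if name not in params_dict:
            # Weights that still cannot be matched are skipped.
            loaded_params.add(original_name)
            continue
        param = params_dict[name]
        weight_loader = getattr(param, "weight_loader", default_weight_loader)
        weight_loader(param, loaded_weight)
        loaded_params.add(original_name)
    return loaded_params
```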

Verification

The fix was definitively verified in a stable cloud GPU environment (RunPod A100) using a self-contained script.

  1. To isolate the fix and bypass environmental dependency conflicts, a test artifact was programmatically created by loading the official google/gemma-3-4b-it model and renaming its vision weights in memory to simulate the exact naming convention that causes the bug (a rough sketch of this step is shown after this list).
  2. The patched vLLM instance (with the changes from this PR) was then used to serve this simulated artifact.
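
A rough sketch of that renaming step is below; it is not the actual test script, and the model class, the added prefix, and the output path are assumptions.

```python
# Hedged sketch: rename Gemma-3 vision weights in memory to simulate the
# problematic checkpoint naming. The exact prefixes depend on the transformers
# version, so the name check and the added "model." prefix are illustrative.
import os

from safetensors.torch import save_file
from transformers import AutoModelForImageTextToText  # assumes a recent transformers release

model = AutoModelForImageTextToText.from_pretrained("google/gemma-3-4b-it")

renamed = {}
for name, tensor in model.state_dict().items():
    if "vision_tower" in name and not name.startswith("model."):
        # Simulate the problematic convention by adding an extra "model." prefix.
        renamed["model." + name] = tensor.clone()
    else:
        renamed[name] = tensor.clone()  # clone() avoids shared-storage errors on save

os.makedirs("gemma3-renamed", exist_ok=True)
# Save next to a copy of the original config/tokenizer files, then point
# `vllm serve` at the resulting directory.
save_file(renamed, "gemma3-renamed/model.safetensors")
```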

Result: The fix is confirmed to be working. The vLLM engine launched successfully, passing the weight loading phase without any KeyError or ValueError, and began initializing.

See the screenshots of the successful vllm serve run from the test environment below:

[Screenshots: vllm serve startup in the test environment, 2025-06-25 16:20]

Thank you for your guidance and patience in helping me navigate this complex issue. The PR is now ready for your review and the official CI run.

@github-actions (bot) commented:

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @Flink-ddd, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a critical fix for loading quantized Gemma-3 multi-modal models by revamping the weight loading process. It resolves a persistent KeyError by implementing a custom, intelligent weight mapping strategy that specifically targets and corrects naming mismatches within the vision model components, ensuring seamless model initialization and preventing regressions.

Highlights

  • Robust Weight Loading: Implemented a custom, more robust weight loading mechanism within Gemma3ForConditionalGeneration.load_weights to definitively fix KeyError issues encountered when serving Gemma-3 multi-modal models quantized by llm-compressor.
  • Targeted Vision Weight Fix: The new weight loading logic includes a 'surgically precise conditional check' that specifically addresses naming inconsistencies for vision model weights. It first applies standard mapping and then, if a vision weight is still not found, it intelligently re-prefixes it to resolve a 'double prefix' problem, ensuring correct loading.
  • Architectural Improvement: Replaced the generic AutoWeightsLoader with imperative, fine-grained control directly within the load_weights method. This approach provides better precision, avoids unintended side effects on other model loading paths, and is a more architecturally sound solution.

@gemini-code-assist (bot) left a comment

Code Review

The pull request introduces a custom weight loading mechanism for Gemma3 VLM models to address weight naming inconsistencies, specifically a 'double prefix' issue within the vision model component. The solution involves a targeted fix within the Gemma3ForConditionalGeneration.load_weights method, applying a conditional check to remap only the problematic vision weights. The changes look good, and I've provided some suggestions for minor improvements.

Comment on lines +734 to +736
    # Silently skip any weights that are still not found.
    loaded_params.add(original_name)
    continue
Severity: medium

It's generally better to log a warning message when skipping weights, even if it's done silently. This can aid in debugging if unexpected weights are not loaded. Consider using logger.warning.

Suggested change (before):

    # Silently skip any weights that are still not found.
    loaded_params.add(original_name)
    continue

Suggested change (after):

    if name not in params_dict:
        logger.warning(f"Skipping weight {original_name} as it is not found in the model.")
    loaded_params.add(original_name)
    continue

Comment on lines +738 to +740
    param = params_dict[name]
    weight_loader = getattr(param, "weight_loader",
                            default_weight_loader)
Severity: medium

Consider checking if param has the attribute weight_loader before calling getattr. This can prevent potential AttributeError exceptions if a parameter unexpectedly lacks this attribute.

Suggested change (before):

    param = params_dict[name]
    weight_loader = getattr(param, "weight_loader",
                            default_weight_loader)

Suggested change (after):

    param = params_dict[name]
    weight_loader = (getattr(param, "weight_loader", None) or
                     default_weight_loader)

@Isotr0py (Member) commented:

BTW, I wonder if the weight name issue is from Transformers v4.52, because we added extra model conversion in that release.

I tried to reproduce this issue with llm-compressor locally yesterday but failed. Can you upload the problematic checkpoint so that I can test it locally?

@Flink-ddd (Contributor, Author) commented Jun 25, 2025

Hi @Isotr0py,

Thank you for the quick follow-up and the very helpful insight about Transformers v4.52 – that's likely the key to this naming convention.

I completely agree that testing with the original artifact is the best path forward. I also tried to reproduce the full llm-compressor quantization process myself but ran into significant dependency conflicts in the cloud environment, which confirms how tricky this setup can be.

To make this possible, I'll ask the original reporter for the checkpoint.

Hey @giangtapergo (#1546), would you be able to help us verify this fix? If you could upload your quantized Gemma-3 checkpoint to a new repository on the Hugging Face Hub and share the link here, it would be a massive help.

That would allow us to test this PR directly against the exact artifact that causes the error and move this fix forward.

Thank you both for your collaboration on this!

@Flink-ddd (Contributor, Author) commented:

Hi @Isotr0py ,

The original reporter, @giangtapergo, has very helpfully uploaded the problematic checkpoint to the Hugging Face Hub. The repository is: https://huggingface.co/ntgiang71096/gemma-3-4b-it-W4A16-G128-1000

Please let me know if there's anything else needed from my side.

@Isotr0py (Member) commented Jun 26, 2025

I believe the root issue is not about weight loading, but about how llm-compressor creates the quantization_config.

  "quantization_config": {
    ...
    "format": "pack-quantized",
    "global_compression_ratio": null,
    "ignore": [
      "model.vision_tower.vision_model.encoder.layers.0.self_attn.k_proj",
      "model.vision_tower.vision_model.encoder.layers.0.self_attn.v_proj",
      "model.vision_tower.vision_model.encoder.layers.0.self_attn.q_proj",
       ...
      "model.vision_tower.vision_model.encoder.layers.26.mlp.fc1",
      "model.vision_tower.vision_model.encoder.layers.26.mlp.fc2",
      "lm_head"
    ],
    "kv_cache_scheme": null,
    "quant_method": "compressed-tensors",
    "quantization_status": "compressed"
  },

The entries in the ignore field don't match the weight names because of the redundant model. prefix introduced by Transformers v4.52, so I think this should be fixed on the llm-compressor or Transformers side instead.
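
For illustration, one could spot this mismatch locally along these lines (the local path, shard name, and comparison logic are assumptions, not taken from this thread):

```python
# Hedged sketch: compare the quantization_config "ignore" entries against the
# module names actually present in the checkpoint shards.
import json
from safetensors import safe_open

ckpt_dir = "gemma-3-4b-it-W4A16-G128-1000"  # local clone of the HF repo (assumed)

with open(f"{ckpt_dir}/config.json") as f:
    ignore = set(json.load(f)["quantization_config"]["ignore"])

module_names = set()
with safe_open(f"{ckpt_dir}/model.safetensors", framework="pt") as f:  # adjust to the actual shard name(s)
    for tensor_name in f.keys():
        # Drop the trailing ".weight"/".bias"/etc. to get module names.
        module_names.add(tensor_name.rsplit(".", 1)[0])

# Ignore entries that do not correspond to any module in the checkpoint point
# at the prefix mismatch discussed above.
print(sorted(ignore - module_names)[:10])
```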

@Flink-ddd (Contributor, Author) commented Jun 26, 2025

Hi @Isotr0py ,

Thank you so much for the deep-dive and definitive diagnosis! This makes perfect sense and clarifies the entire situation.

Based on your findings that the root cause is in how llm-compressor generates the quantization_config, I agree that the fix does not belong in vLLM.

I will now close this pull request.

@Flink-ddd Flink-ddd closed this Jun 26, 2025