
Commit fb87bfb

lucianommartins authored and kingsmad committed
[Model] Revert PR vllm-project#26715: Restore custom PaliGemma and Gemma3-MM impl… (vllm-project#27309)
Signed-off-by: Luciano Martins <[email protected]>
Co-authored-by: Luciano Martins <[email protected]>
1 parent e8c93f7 commit fb87bfb

File tree

12 files changed, +1219 -54 lines changed


docs/models/hardware_supported_models/tpu.md

Lines changed: 2 additions & 2 deletions
@@ -16,8 +16,8 @@
 | meta-llama/Llama-4-* | Llama4ForConditionalGeneration ||
 | microsoft/Phi-3-mini-128k-instruct | Phi3ForCausalLM | 🟨 |
 | microsoft/phi-4 | Phi3ForCausalLM ||
-| google/gemma-3-27b-it | TransformersForMultimodalLM | 🟨 |
-| google/gemma-3-4b-it | TransformersForMultimodalLM ||
+| google/gemma-3-27b-it | Gemma3ForConditionalGeneration | 🟨 |
+| google/gemma-3-4b-it | Gemma3ForConditionalGeneration ||
 | deepseek-ai/DeepSeek-R1 | DeepseekV3ForCausalLM ||
 | deepseek-ai/DeepSeek-V3 | DeepseekV3ForCausalLM ||
 | RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8 | LlamaForCausalLM ||

docs/models/supported_models.md

Lines changed: 20 additions & 3 deletions
@@ -642,6 +642,7 @@ These models primarily accept the [`LLM.generate`](./generative_models.md#llmgen
 | `DeepseekOCRForCausalLM` | DeepSeek-OCR | T + I<sup>+</sup> | `deepseek-ai/DeepSeek-OCR`, etc. | | ✅︎ |
 | `Ernie4_5_VLMoeForConditionalGeneration` | Ernie4.5-VL | T + I<sup>+</sup>/ V<sup>+</sup> | `baidu/ERNIE-4.5-VL-28B-A3B-PT`, `baidu/ERNIE-4.5-VL-424B-A47B-PT` | | ✅︎ |
 | `FuyuForCausalLM` | Fuyu | T + I | `adept/fuyu-8b`, etc. | | ✅︎ |
+| `Gemma3ForConditionalGeneration` | Gemma 3 | T + I<sup>+</sup> | `google/gemma-3-4b-it`, `google/gemma-3-27b-it`, etc. | ✅︎ | ✅︎ |
 | `Gemma3nForConditionalGeneration` | Gemma 3n | T + I + A | `google/gemma-3n-E2B-it`, `google/gemma-3n-E4B-it`, etc. | | |
 | `GLM4VForCausalLM`<sup>^</sup> | GLM-4V | T + I | `zai-org/glm-4v-9b`, `zai-org/cogagent-9b-20241220`, etc. | ✅︎ | ✅︎ |
 | `Glm4vForConditionalGeneration` | GLM-4.1V-Thinking | T + I<sup>E+</sup> + V<sup>E+</sup> | `zai-org/GLM-4.1V-9B-Thinking`, etc. | ✅︎ | ✅︎ |
@@ -671,6 +672,7 @@ These models primarily accept the [`LLM.generate`](./generative_models.md#llmgen
 | `NVLM_D_Model` | NVLM-D 1.0 | T + I<sup>+</sup> | `nvidia/NVLM-D-72B`, etc. | | ✅︎ |
 | `Ovis` | Ovis2, Ovis1.6 | T + I<sup>+</sup> | `AIDC-AI/Ovis2-1B`, `AIDC-AI/Ovis1.6-Llama3.2-3B`, etc. | | ✅︎ |
 | `Ovis2_5` | Ovis2.5 | T + I<sup>+</sup> + V | `AIDC-AI/Ovis2.5-9B`, etc. | | |
+| `PaliGemmaForConditionalGeneration` | PaliGemma, PaliGemma 2 | T + I<sup>E</sup> | `google/paligemma-3b-pt-224`, `google/paligemma-3b-mix-224`, `google/paligemma2-3b-ft-docci-448`, etc. | | ✅︎ |
 | `Phi3VForCausalLM` | Phi-3-Vision, Phi-3.5-Vision | T + I<sup>E+</sup> | `microsoft/Phi-3-vision-128k-instruct`, `microsoft/Phi-3.5-vision-instruct`, etc. | | ✅︎ |
 | `Phi4MMForCausalLM` | Phi-4-multimodal | T + I<sup>+</sup> / T + A<sup>+</sup> / I<sup>+</sup> + A<sup>+</sup> | `microsoft/Phi-4-multimodal-instruct`, etc. | ✅︎ | ✅︎ |
 | `Phi4MultimodalForCausalLM` | Phi-4-multimodal (HF Transformers) | T + I<sup>+</sup> / T + A<sup>+</sup> / I<sup>+</sup> + A<sup>+</sup> | `microsoft/Phi-4-multimodal-instruct` (with revision `refs/pr/70`), etc. | ✅︎ | ✅︎ |
@@ -695,8 +697,6 @@ Some models are supported only via the [Transformers backend](#transformers). Th
 | Architecture | Models | Inputs | Example HF Models | [LoRA](../features/lora.md) | [PP](../serving/parallelism_scaling.md) |
 |--------------|--------|--------|-------------------|-----------------------------|-----------------------------------------|
 | `Emu3ForConditionalGeneration` | Emu3 | T + I | `BAAI/Emu3-Chat-hf` | ✅︎ | ✅︎ |
-| `Gemma3ForConditionalGeneration` | Gemma 3 | T + I<sup>+</sup> | `google/gemma-3-4b-it`, `google/gemma-3-27b-it`, etc. | ✅︎ | ✅︎ |
-| `PaliGemmaForConditionalGeneration` | PaliGemma, PaliGemma 2 | T + I<sup>E</sup> | `google/paligemma-3b-pt-224`, `google/paligemma-3b-mix-224`, `google/paligemma2-3b-ft-docci-448`, etc. | | ✅︎ |
 
 <sup>^</sup> You need to set the architecture name via `--hf-overrides` to match the one in vLLM.
 &nbsp;&nbsp;&nbsp;&nbsp;• For example, to use DeepSeek-VL2 series models:
@@ -705,7 +705,21 @@ Some models are supported only via the [Transformers backend](#transformers). Th
 <sup>+</sup> Multiple items can be inputted per text prompt for this modality.
 
 !!! warning
-    For `Gemma3ForConditionalGeneration`, `{"do_pan_and_scan": true}` is not supported in Transformers backend yet.
+    Both V0 and V1 support `Gemma3ForConditionalGeneration` for text-only inputs.
+    However, there are differences in how they handle text + image inputs:
+
+    V0 correctly implements the model's attention pattern:
+        - Uses bidirectional attention between the image tokens corresponding to the same image
+        - Uses causal attention for other tokens
+        - Implemented via (naive) PyTorch SDPA with masking tensors
+        - Note: May use significant memory for long prompts with image
+
+    V1 currently uses a simplified attention pattern:
+        - Uses causal attention for all tokens, including image tokens
+        - Generates reasonable outputs but does not match the original model's attention for text + image inputs, especially when `{"do_pan_and_scan": true}`
+        - Will be updated in the future to support the correct behavior
+
+    This limitation exists because the model's mixed attention pattern (bidirectional for images, causal otherwise) is not yet supported by vLLM's attention backends.
 
 !!! note
     `Gemma3nForConditionalGeneration` is only supported on V1 due to shared KV caching and it depends on `timm>=1.0.17` to make use of its
@@ -757,6 +771,9 @@ Some models are supported only via the [Transformers backend](#transformers). Th
     The official `openbmb/MiniCPM-V-2` doesn't work yet, so we need to use a fork (`HwwwH/MiniCPM-V-2`) for now.
     For more details, please see: <https://github.com/vllm-project/vllm/pull/4087#issuecomment-2250397630>
 
+!!! warning
+    Our PaliGemma implementations have the same problem as Gemma 3 (see above) for both V0 and V1.
+
 !!! note
     For Qwen2.5-Omni and Qwen3-Omni, reading audio from video pre-processing (`--mm-processor-kwargs '{"use_audio_in_video": true}'`) is currently work in progress and not yet supported.
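
The restored warning above describes how the custom V0 Gemma 3 path handles text + image prompts: plain PyTorch SDPA with masking tensors, bidirectional attention within each image's token span and causal attention everywhere else. As a rough illustration only (a minimal sketch, not vLLM's implementation; the helper name, span positions, and shapes are invented), such a mask could be built like this:

```python
# Minimal sketch of the mixed attention mask the docs describe:
# bidirectional inside each image's token span, causal elsewhere.
# Helper name and the example span are invented for illustration.
import torch
import torch.nn.functional as F


def build_mixed_mask(seq_len: int, image_spans: list[tuple[int, int]]) -> torch.Tensor:
    """Boolean mask where True means 'query may attend to key'."""
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))  # causal base
    for start, end in image_spans:  # end is exclusive
        mask[start:end, start:end] = True  # image tokens attend to each other both ways
    return mask


seq_len = 8
mask = build_mixed_mask(seq_len, image_spans=[(2, 5)])  # pretend tokens 2..4 are one image

q = k = v = torch.randn(1, 1, seq_len, 16)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
print(out.shape)  # torch.Size([1, 1, 8, 16])
```

Because the mask is a dense seq_len × seq_len tensor, this naive approach can use significant memory for long prompts with images, which is the cost the warning calls out for V0.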

examples/offline_inference/vision_language.py

Lines changed: 1 addition & 2 deletions
@@ -319,8 +319,7 @@ def run_gemma3(questions: list[str], modality: str) -> ModelRequestData:
         model=model_name,
         max_model_len=2048,
         max_num_seqs=2,
-        # TODO: Support this in transformers backend
-        # mm_processor_kwargs={"do_pan_and_scan": True},
+        mm_processor_kwargs={"do_pan_and_scan": True},
         limit_mm_per_prompt={modality: 1},
     )
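
With the custom implementation restored, pan-and-scan can again be requested through `mm_processor_kwargs`, as the example change above shows. Below is a hedged end-to-end sketch (not part of this diff): the model name and prompt format are taken from other files in this commit, and the image path is purely illustrative.

```python
# Offline-inference sketch with pan-and-scan re-enabled for Gemma 3.
# The image path is a placeholder; any local image works.
from PIL import Image

from vllm import LLM, SamplingParams

llm = LLM(
    model="google/gemma-3-4b-it",
    max_model_len=2048,
    max_num_seqs=2,
    mm_processor_kwargs={"do_pan_and_scan": True},
    limit_mm_per_prompt={"image": 1},
)

prompt = (
    "<bos><start_of_turn>user\n"
    "<start_of_image>What's the content in the center of the image?"
    "<end_of_turn>\n<start_of_turn>model\n"
)
image = Image.open("stop_sign.jpg")  # placeholder image path

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```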

tests/models/language/generation/test_gemma.py

Lines changed: 11 additions & 5 deletions
@@ -3,7 +3,7 @@
 import numpy as np
 import pytest
 
-MODELS = ["google/gemma-2b", "google/gemma-2-2b"]
+MODELS = ["google/gemma-2b", "google/gemma-2-2b", "google/gemma-3-4b-it"]
 
 
 @pytest.mark.parametrize("model", MODELS)
@@ -14,8 +14,14 @@ def test_dummy_loader(vllm_runner, monkeypatch, model: str) -> None:
         model,
         load_format="dummy",
     ) as llm:
-        normalizers = llm.apply_model(
-            lambda model: model.model.normalizer.cpu().item()
-        )
-        config = llm.llm.llm_engine.model_config.hf_config
+        if model == "google/gemma-3-4b-it":
+            normalizers = llm.llm.collective_rpc(
+                lambda self: self.model_runner.model.language_model.model.normalizer.cpu().item()  # noqa: E501
+            )
+            config = llm.llm.llm_engine.model_config.hf_config.text_config
+        else:
+            normalizers = llm.llm.collective_rpc(
+                lambda self: self.model_runner.model.model.normalizer.cpu().item()
+            )
+            config = llm.llm.llm_engine.model_config.hf_config
         assert np.allclose(normalizers, config.hidden_size**0.5, rtol=2e-3)

tests/models/multimodal/generation/test_common.py

Lines changed: 40 additions & 34 deletions
@@ -112,6 +112,25 @@
         vllm_runner_kwargs={"enable_mm_embeds": True},
         marks=[pytest.mark.core_model, pytest.mark.cpu_model],
     ),
+    "paligemma": VLMTestInfo(
+        models=["google/paligemma-3b-mix-224"],
+        test_type=VLMTestType.IMAGE,
+        prompt_formatter=identity,
+        img_idx_to_prompt=lambda idx: "",
+        # Paligemma uses its own sample prompts because the default one fails
+        single_image_prompts=IMAGE_ASSETS.prompts(
+            {
+                "stop_sign": "caption es",
+                "cherry_blossom": "What is in the picture?",
+            }
+        ),
+        auto_cls=AutoModelForImageTextToText,
+        vllm_output_post_proc=model_utils.paligemma_vllm_to_hf_output,
+        dtype="bfloat16",
+        marks=[
+            pytest.mark.skip(reason="vLLM does not support PrefixLM attention mask")
+        ],
+    ),
     "qwen2_5_vl": VLMTestInfo(
         models=["Qwen/Qwen2.5-VL-3B-Instruct"],
         test_type=(VLMTestType.IMAGE, VLMTestType.MULTI_IMAGE, VLMTestType.VIDEO),
@@ -176,24 +195,14 @@
     # Gemma3 has bidirectional mask on images
     "gemma3-transformers": VLMTestInfo(
         models=["google/gemma-3-4b-it"],
-        test_type=(VLMTestType.IMAGE, VLMTestType.MULTI_IMAGE),
-        prompt_formatter=lambda img_prompt: f"<bos><start_of_turn>user\n{img_prompt}<end_of_turn>\n<start_of_turn>model\n",  # noqa: E501
-        single_image_prompts=IMAGE_ASSETS.prompts(
-            {
-                "stop_sign": "<start_of_image>What's the content in the center of the image?",  # noqa: E501
-                "cherry_blossom": "<start_of_image>What is the season?",
-            }
-        ),
-        multi_image_prompt="<start_of_image><start_of_image>Describe the two images in detail.",  # noqa: E501
-        max_model_len=8192,
+        test_type=VLMTestType.IMAGE,
+        prompt_formatter=lambda vid_prompt: f"<'<bos><start_of_turn>user\n{vid_prompt}<start_of_image><end_of_turn>\n<start_of_turn>model\n",  # noqa: E501
+        max_model_len=4096,
         auto_cls=AutoModelForImageTextToText,
-        # TODO: Support `do_pan_and_scan` in transformers backend
-        # patch_hf_runner=model_utils.gemma3_patch_hf_runner,
         vllm_output_post_proc=model_utils.gemma3_vllm_to_hf_output,
         image_size_factors=[(0.25, 0.5, 1.0)],
         vllm_runner_kwargs={
             "model_impl": "transformers",
-            # "mm_processor_kwargs": {"do_pan_and_scan": True},
         },
         marks=[pytest.mark.core_model],
     ),
@@ -212,27 +221,6 @@
         },
         marks=[pytest.mark.core_model],
     ),
-    # PaliGemma has PrefixLM attention
-    "paligemma-transformers": VLMTestInfo(
-        models=["google/paligemma-3b-mix-224"],
-        test_type=VLMTestType.IMAGE,
-        prompt_formatter=identity,
-        img_idx_to_prompt=lambda idx: "",
-        # PaliGemma uses its own sample prompts because the default one fails
-        single_image_prompts=IMAGE_ASSETS.prompts(
-            {
-                "stop_sign": "caption es",
-                "cherry_blossom": "What is in the picture?",
-            }
-        ),
-        auto_cls=AutoModelForImageTextToText,
-        vllm_output_post_proc=model_utils.paligemma_vllm_to_hf_output,
-        image_size_factors=[(0.25, 0.5, 1.0)],
-        vllm_runner_kwargs={
-            "model_impl": "transformers",
-        },
-        marks=[pytest.mark.core_model],
-    ),
     # Pixel values from processor are not 4D or 5D arrays
     "qwen2_5_vl-transformers": VLMTestInfo(
         models=["Qwen/Qwen2.5-VL-3B-Instruct"],
@@ -359,6 +347,24 @@
         image_size_factors=[(), (0.25,), (0.25, 0.25, 0.25), (0.25, 0.2, 0.15)],
         marks=[large_gpu_mark(min_gb=32)],
     ),
+    "gemma3": VLMTestInfo(
+        models=["google/gemma-3-4b-it"],
+        test_type=(VLMTestType.IMAGE, VLMTestType.MULTI_IMAGE),
+        prompt_formatter=lambda img_prompt: f"<bos><start_of_turn>user\n{img_prompt}<end_of_turn>\n<start_of_turn>model\n",  # noqa: E501
+        single_image_prompts=IMAGE_ASSETS.prompts(
+            {
+                "stop_sign": "<start_of_image>What's the content in the center of the image?",  # noqa: E501
+                "cherry_blossom": "<start_of_image>What is the season?",
+            }
+        ),
+        multi_image_prompt="<start_of_image><start_of_image>Describe the two images in detail.",  # noqa: E501
+        max_model_len=4096,
+        max_num_seqs=2,
+        auto_cls=AutoModelForImageTextToText,
+        vllm_runner_kwargs={"mm_processor_kwargs": {"do_pan_and_scan": True}},
+        patch_hf_runner=model_utils.gemma3_patch_hf_runner,
+        num_logprobs=10,
+    ),
     "glm4v": VLMTestInfo(
         models=["zai-org/glm-4v-9b"],
         test_type=VLMTestType.IMAGE,

tests/models/multimodal/generation/vlm_utils/model_utils.py

Lines changed: 10 additions & 0 deletions
@@ -328,6 +328,16 @@ def processor(*args, **kwargs):
 
     hf_model.processor = processor
 
+    orig_generate = hf_model.model.generate
+
+    def _generate(self, *args, **kwargs):
+        # FIXME: https://github.com/huggingface/transformers/issues/38333
+        kwargs["disable_compile"] = True
+
+        return orig_generate(*args, **kwargs)
+
+    hf_model.model.generate = types.MethodType(_generate, hf_model.model)
+
     return hf_model

tests/models/multimodal/processing/test_common.py

Lines changed: 4 additions & 0 deletions
@@ -222,6 +222,7 @@ def _to_dummy_options(modality: str, count: int) -> BaseDummyOptions:
 _ADD_SPECIAL_TOKENS_OVERRIDES = {
     "ovis": False,
     "ovis2_5": False,
+    "paligemma": False,
     "ultravox": False,
     "whisper": False,
 }
@@ -333,6 +334,7 @@ def _test_processing_correctness_one(
         "deepseek-ai/deepseek-vl2-tiny",
         "baidu/ERNIE-4.5-VL-28B-A3B-PT",
         "adept/fuyu-8b",
+        "google/gemma-3-4b-it",
         "google/gemma-3n-E2B-it",
         "zai-org/glm-4v-9b",
         "zai-org/GLM-4.1V-9B-Thinking",
@@ -369,6 +371,8 @@
         "AIDC-AI/Ovis1.6-Llama3.2-3B",
         "AIDC-AI/Ovis2-1B",
         "AIDC-AI/Ovis2.5-2B",
+        "google/paligemma-3b-mix-224",
+        "google/paligemma2-3b-ft-docci-448",
         "microsoft/Phi-3.5-vision-instruct",
         "microsoft/Phi-4-multimodal-instruct",
         "mistralai/Pixtral-12B-2409",

tests/models/multimodal/processing/test_tensor_schema.py

Lines changed: 1 addition & 0 deletions
@@ -48,6 +48,7 @@
     "Idefics3ForConditionalGeneration",
     "LlavaForConditionalGeneration",
     "MiniCPMV",
+    "PaliGemmaForConditionalGeneration",
 ]
 REPO_ID_TO_SKIP = {
     "nm-testing/pixtral-12b-FP8-dynamic": "duplicated test",
