
Conversation

@farzadab
Contributor

@farzadab farzadab commented May 7, 2025

This is a simplified version of my older PR that was approved by @DarkLight1337 but ended up not working on some backends: https://github.com/vllm-project/vllm/pull/15728/files
This new PR allows Ultravox to support Gemma 3 and Llama 4 backends.

On the Ultravox side, I've made sure that all tokenizers have a new <|audio|> token to allow for better tracking of audio placeholder tokens. This token exists only in the tokenizer, not in the embedding layer, so I intercept the input_ids before calling the embedding on them and apply safe_input_ids instead.
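
For illustration, here is a minimal sketch (not the exact code in this PR) of the interception described above: the <|audio|> placeholder id exists only in the tokenizer, so it is swapped for a valid id before the embedding lookup, and those positions are later overwritten with audio embeddings anyway.

```python
import torch

def safe_input_ids(input_ids: torch.Tensor, vocab_size: int,
                   fill_id: int = 0) -> torch.Tensor:
    # Replace ids that fall outside the embedding table (e.g. the
    # tokenizer-only <|audio|> token) with a harmless in-vocabulary id.
    return torch.where(input_ids < vocab_size, input_ids,
                       torch.full_like(input_ids, fill_id))

def embed_tokens(embedding: torch.nn.Embedding,
                 input_ids: torch.Tensor) -> torch.Tensor:
    # Intercept input_ids right before the lookup; the placeholder positions
    # are merged with real audio embeddings afterwards, so the fill value is
    # never actually used.
    return embedding(safe_input_ids(input_ids, embedding.num_embeddings))
```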

When using V0, Ultravox has been verified (on an earlier version of this PR) to work with the following backends: Llama 3, Gemma 3, and Llama 4.

V0 seems to work, as verified by evals. I've seen issues on V1 with an earlier version of vLLM, but I'm not sure whether that was due to Ultravox or a vLLM V1 bug.

@github-actions

github-actions bot commented May 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which covers a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@DarkLight1337
Member

What issue are you getting on V1?

Contributor Author

@farzadab farzadab May 8, 2025

@DarkLight1337 When using V1, I noticed that the output was completely garbled.

After some debugging, I noticed that when I tried printing input_ids here for the same sample (conditioned on len(input_ids) > 1 to skip single-token decode steps), this is what I got:

```python
# with VLLM_USE_V1=0
>>> t.decode([200000, 200005, 15651, 200006, 368, 4662, 583, 262, 19933, 43910, 26, 200008, 200005, 1556, 200006, 368, 4984, 290, 2182, 4097, 38, 7283, 201133, 200008, 200005, 140680, 200006, 368])
'<|begin_of_text|><|header_start|>system<|header_end|>\n\nYou are a helpful assistant.<|eot|><|header_start|>user<|header_end|>\n\nAnswer the following question: \n\n<|vision_reserved_special_token_1047|><|eot|><|header_start|>assistant<|header_end|>\n\n'

# with VLLM_USE_V1=1
>>> t.decode([24, 4984, 290, 2182, 4097, 38, 7283, 201133, 200008, 200005, 140680, 200006, 368])
',Answer the following question: \n\n<|vision_reserved_special_token_1047|><|eot|><|header_start|>assistant<|header_end|>\n\n'
```

The input_ids in the case of V1 seemed to be missing a part of the beginning.
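
As a quick illustrative check on the ids above (not part of the original report), everything after the leading V1 token lines up with the tail of the V0 prompt, which suggests the prompt prefix was dropped rather than re-tokenized:

```python
v0_ids = [200000, 200005, 15651, 200006, 368, 4662, 583, 262, 19933, 43910,
          26, 200008, 200005, 1556, 200006, 368, 4984, 290, 2182, 4097, 38,
          7283, 201133, 200008, 200005, 140680, 200006, 368]
v1_ids = [24, 4984, 290, 2182, 4097, 38, 7283, 201133, 200008, 200005,
          140680, 200006, 368]

# The 12 tokens after the leading "," in the V1 ids are exactly the last 12
# tokens of the V0 prompt, i.e. the beginning of the prompt is missing under V1.
assert v1_ids[1:] == v0_ids[-len(v1_ids[1:]):]
```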

Contributor Author

I believe I got this issue at around v0.8.4. I'll try verifying it on v0.8.5.post1.

Contributor

Resolved by upgrading to v0.9.1

@liPatrick
Contributor

Verified that the inference mismatch was indeed a vLLM bug. Upgrading to v0.9.1 fixed the issue, and V1 inference now matches V0.

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) June 12, 2025 02:19
@DarkLight1337
Member

Nice, let's merge this!

@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jun 12, 2025
@mergify mergify bot added the llama label (Related to Llama models) Jun 23, 2025
@mergify

mergify bot commented Jun 23, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @farzadab.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jun 23, 2025
auto-merge was automatically disabled June 23, 2025 20:03

Head branch was pushed to by a user without write access

@liPatrick liPatrick force-pushed the farzad-audiotoken-gemma3llama4 branch from 0ffff36 to 1cb823d on June 23, 2025 20:03
@mergify mergify bot removed the needs-rebase label Jun 23, 2025
@mergify mergify bot added the new-model label (Requests to new models) Jul 11, 2025
@mergify

mergify bot commented Jul 17, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @farzadab.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 17, 2025
@Blaze-DSP

Blaze-DSP commented Jul 23, 2025

Hi! I am trying to deploy fixie-ai/ultravox-v0_6-gemma-3-27b using vLLM (vllm/vllm-openai:v0.9.2) on Kubernetes.

```yaml
- name: vllm-openai
  image: vllm/vllm-openai:v0.9.2
  imagePullPolicy: IfNotPresent
  env:
    - name: HF_TOKEN
      valueFrom:
        secretKeyRef:
          name: hf-token
          key: hf_token
    - name: HF_HUB_CACHE
      value: /mnt/models
  command: ["/bin/sh", "-c"]
  args:
    - |
      pip install "vllm[audio]" \
        && python3 -m vllm.entrypoints.openai.api_server \
            --host 0.0.0.0 \
            --port 8000 \
            --uvicorn-log-level warning \
            --model fixie-ai/ultravox-v0_6-gemma-3-27b \
            --served-model-name ultravox-27b \
            --device auto \
            --trust-remote-code \
            --max-model-len 4096 \
            --enable-prefix-caching
```

But the following error occurs regarding the Gemma3 vocab_size. Can anyone help? I saw a related PR, #14687, but it didn't help.

```
INFO 07-23 03:42:46 [__init__.py:244] Automatically detected platform cuda.
INFO 07-23 03:42:51 [api_server.py:1395] vLLM API server version 0.9.2
INFO 07-23 03:42:51 [cli_args.py:325] non-default args: {'host': '0.0.0.0', 'uvicorn_log_level': 'warning', 'model': '/mnt/models/ultravox-27b', 'trust_remote_code': True, 'max_model_len': 4096, 'served_model_name': ['ultravox-27b'], 'enable_prefix_caching': True}
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1495, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1431, in run_server
    await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1451, in run_server_worker
    async with build_async_engine_client(args, client_config) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 158, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 180, in build_async_engine_client_from_engine_args
    vllm_config = engine_args.create_engine_config(usage_context=usage_context)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 1067, in create_engine_config
    model_config = self.create_model_config()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 956, in create_model_config
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py", line 123, in __init__
    s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
  File "/usr/local/lib/python3.12/dist-packages/vllm/config.py", line 533, in __post_init__
    hf_config = get_config(self.hf_config_path or self.model,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 361, in get_config
    config = config_class.from_pretrained(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 586, in from_pretrained
    return cls.from_dict(config_dict, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 746, in from_dict
    config = cls(**config_dict)
             ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/configs/ultravox.py", line 104, in __init__
    self.vocab_size = self.text_config.vocab_size
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py", line 209, in __getattribute__
    return super().__getattribute__(key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Gemma3Config' object has no attribute 'vocab_size'
```
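
One plausible reading of the traceback above: it fails at self.text_config.vocab_size because, for Gemma 3, the Hugging Face Gemma3Config keeps its language-model settings in a nested text_config (a Gemma3TextConfig), and that inner config is where vocab_size lives. A minimal sketch of a defensive lookup, assuming that nesting (illustrative only, not the fix that landed in vLLM):

```python
def resolve_vocab_size(text_config) -> int:
    # Plain text configs expose vocab_size at the top level, while the
    # multimodal Gemma3Config keeps it on its inner text config.
    if hasattr(text_config, "vocab_size"):
        return text_config.vocab_size
    inner = getattr(text_config, "text_config", None)  # e.g. Gemma3TextConfig
    if inner is not None and hasattr(inner, "vocab_size"):
        return inner.vocab_size
    raise AttributeError(
        f"{type(text_config).__name__} exposes no vocab_size")
```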

@mergify mergify bot removed the needs-rebase label Jul 23, 2025
@liPatrick liPatrick force-pushed the farzad-audiotoken-gemma3llama4 branch from b0f884a to fb57362 on July 23, 2025 19:14
"JAISLMHeadModel": ("jais", "JAISLMHeadModel"),
"JambaForCausalLM": ("jamba", "JambaForCausalLM"),
"LlamaForCausalLM": ("llama", "LlamaForCausalLM"),
"Llama4ForCausalLM": ("llama4", "Llama4ForCausalLM"), # noqa: E501
Contributor

Looks like adding this leads to CI errors #19580

Member

Yeah, this is expected, because there's no such model on Hugging Face for us to use to test loading this particular model architecture. I wonder if we should relax this test in CI.

Member

Let's just add is_available_online=False to disable the test for that model

Contributor

Okay, I added Llama4ForCausalLM back to vllm/model_executor/models/registry.py and added a Llama4ForCausalLM entry with is_available_online=False to tests/models/registry.py.
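
A rough sketch of that arrangement (the helper name and checkpoint id are illustrative stand-ins, not the PR's exact code): the runtime registry keeps the architecture mapping, while the test registry flags the architecture as having no standalone checkpoint on Hugging Face so CI skips loading it.

```python
from dataclasses import dataclass

@dataclass
class HfExampleInfo:
    default: str                      # example HF checkpoint id for the tests
    is_available_online: bool = True  # False => CI skips loading from the Hub

# vllm/model_executor/models/registry.py style mapping: architecture name ->
# (module, class) used to resolve the implementation at runtime.
TEXT_GENERATION_MODELS = {
    "Llama4ForCausalLM": ("llama4", "Llama4ForCausalLM"),
}

# tests/models/registry.py style entry: mark the text-only Llama 4 architecture
# as unavailable online so the loading test is skipped.
TEST_MODELS = {
    "Llama4ForCausalLM": HfExampleInfo(
        "meta-llama/Llama-4-Scout-17B-16E-Instruct",  # hypothetical example id
        is_available_online=False,
    ),
}
```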

@liPatrick liPatrick force-pushed the farzad-audiotoken-gemma3llama4 branch from 41b0cff to 02c9ca8 on July 24, 2025 22:50
@DarkLight1337
Member

The PP test failure looks related

…beddings for text only inputs for v0 path

Signed-off-by: Patrick Li <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
@vllm-bot vllm-bot merged commit 62965de into vllm-project:main Jul 26, 2025
61 of 65 checks passed
liuyumoye pushed a commit to liuyumoye/vllm that referenced this pull request Jul 31, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>
HsChen-sys pushed a commit to HsChen-sys/vllm that referenced this pull request Aug 1, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>
Signed-off-by: x22x22 <[email protected]>
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>
Signed-off-by: Diego-Castan <[email protected]>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
…17818)

Signed-off-by: Farzad Abdolhosseini <[email protected]>
Signed-off-by: Patrick Li <[email protected]>
Co-authored-by: Patrick Li <[email protected]>