
Conversation

ywang96
Member

@ywang96 ywang96 commented Apr 7, 2025

We need to upgrade the transformers version to 4.51.0 so that Llama-4 can work in our nightly after #16113 is merged. However, the Ultravox implementation on Hugging Face currently breaks on 4.51.0 in the standard model test, so this PR caps the max transformers version for this model to unblock other PRs.

I have verified that the same test works on 4.50.3.
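Capping a model's supported transformers version amounts to a simple ordered comparison of version tuples. The sketch below is illustrative only (not vLLM's actual registry mechanism); the function and parameter names are hypothetical:

```python
# Hypothetical sketch: gate a model test on an upper transformers
# version bound, as this PR does for Ultravox. Not vLLM's real API.

def parse_version(v: str) -> tuple:
    """Turn '4.50.3' into (4, 50, 3) for lexicographic comparison."""
    return tuple(int(p) for p in v.split("."))

def should_run(installed: str, max_version: str) -> bool:
    """Run the model test only if the installed transformers version
    does not exceed the pinned maximum."""
    return parse_version(installed) <= parse_version(max_version)

print(should_run("4.50.3", "4.50.3"))  # True  -> test runs
print(should_run("4.51.0", "4.50.3"))  # False -> test skipped
```

Tuple comparison handles multi-digit components correctly (e.g. 4.51 sorts after 4.50), which naive string comparison would not.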

cc @farzadab - could you maybe take a look and see if the model is still working on the latest transformers version?

Signed-off-by: Roger Wang <[email protected]>
@ywang96 ywang96 requested a review from DarkLight1337 as a code owner April 7, 2025 03:28

github-actions bot commented Apr 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which covers a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@ywang96
Member Author

ywang96 commented Apr 7, 2025

Full trace:

tests/models/decoder_only/audio_language/test_ultravox.py:126: in run_test
    hf_model.generate_greedy_logprobs_limit(          <----- huggingface impl is broken
tests/conftest.py:585: in generate_greedy_logprobs_limit
    output = self.model.generate(
/tmp-nvme/vllm/lib/python3.12/site-packages/torch/utils/_contextlib.py:116: in decorate_context
    return func(*args, **kwargs)
/tmp-nvme/vllm/lib/python3.12/site-packages/transformers/generation/utils.py:2460: in generate
    result = self._sample(
/tmp-nvme/vllm/lib/python3.12/site-packages/transformers/generation/utils.py:3426: in _sample
    outputs = self(**model_inputs, return_dict=True)
/tmp-nvme/vllm/lib/python3.12/site-packages/torch/nn/modules/module.py:1739: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
/tmp-nvme/vllm/lib/python3.12/site-packages/torch/nn/modules/module.py:1750: in _call_impl
    return forward_call(*args, **kwargs)
../.cache/huggingface/modules/transformers_modules/fixie-ai/ultravox-v0_5-llama-3_2-1b/06c7f4eb509f60ce5f03563c0514756ebf357d39/ultravox_model.py:226: in forward
    audio_tower_output = self.audio_tower.forward(
../.cache/huggingface/modules/transformers_modules/fixie-ai/ultravox-v0_5-llama-3_2-1b/06c7f4eb509f60ce5f03563c0514756ebf357d39/ultravox_model.py:674: in forward
    inputs_embeds = nn.functional.gelu(self.conv1(input_features))
/tmp-nvme/vllm/lib/python3.12/site-packages/torch/nn/modules/module.py:1739: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
/tmp-nvme/vllm/lib/python3.12/site-packages/torch/nn/modules/module.py:1750: in _call_impl
    return forward_call(*args, **kwargs)
/tmp-nvme/vllm/lib/python3.12/site-packages/torch/nn/modules/conv.py:375: in forward
    return self._conv_forward(input, self.weight, self.bias)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = Conv1d(128, 1280, kernel_size=(3,), stride=(1,), padding=(1,))
input = tensor([[[-0.5781, -0.5781, -0.5781,  ..., -0.5781, -0.5781, -0.5781],
         [-0.5781, -0.5781, -0.5781,  ..., -0.5...
         [-0.5781, -0.5781, -0.5781,  ..., -0.5781, -0.5781, -0.5781]]],
       device='cuda:0', dtype=torch.bfloat16)
weight = Parameter containing:
tensor([[[-1.9646e-04, -4.3297e-04,  1.1902e-03],
         [ 1.3062e-02,  1.4893e-02,  1.4404e-0...     [-2.1973e-03, -4.4861e-03, -1.1673e-03],
         [ 7.8583e-04, -4.2114e-03,  1.0834e-03]]], dtype=torch.bfloat16)
bias = Parameter containing:
tensor([ 0.0654,  0.0085,  0.0076,  ..., -0.0023,  0.0535,  0.0216],
       dtype=torch.bfloat16)

    def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]):
        if self.padding_mode != "zeros":
            return F.conv1d(
                F.pad(
                    input, self._reversed_padding_repeated_twice, mode=self.padding_mode
                ),
                weight,
                bias,
                self.stride,
                _single(0),
                self.dilation,
                self.groups,
            )
>       return F.conv1d(
            input, weight, bias, self.stride, self.padding, self.dilation, self.groups
        )
E       RuntimeError: Input type (CUDABFloat16Type) and weight type (CPUBFloat16Type) should be the same
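The RuntimeError at the bottom of the trace is PyTorch's device-mismatch check: the input tensor is on `cuda:0` while the conv layer's weights are still on the CPU, suggesting the audio tower was never moved to the GPU. The sketch below reproduces the failure mechanics with plain Python stand-ins (no real torch, no GPU required); the `Tensor`/`conv1d` names are illustrative, not the actual library types:

```python
# Illustrative sketch of why the trace above fails: a layer's weights
# live on one device while the input lives on another, and the
# framework refuses to mix them. Stand-in types, not real torch.
from dataclasses import dataclass

@dataclass
class Tensor:
    data: list
    device: str

    def to(self, device: str) -> "Tensor":
        # Mimics tensor.to(device): returns a copy on the target device.
        return Tensor(self.data, device)

def conv1d(inp: Tensor, weight: Tensor) -> Tensor:
    # PyTorch ops require input and weight on the same device.
    if inp.device != weight.device:
        raise RuntimeError(
            f"Input type ({inp.device}) and weight type ({weight.device}) "
            "should be the same"
        )
    return Tensor(inp.data, inp.device)

x = Tensor([0.1, 0.2], "cuda:0")   # input already on the GPU
w = Tensor([1.0], "cpu")           # weight never moved to the GPU

try:
    conv1d(x, w)                   # reproduces the error in the trace
except RuntimeError as e:
    print("reproduced:", e)

# The usual fix: move the module (and its weights) to the input's device.
out = conv1d(x, w.to(x.device))
print("fixed, output on:", out.device)
```

In real code the fix is typically a `model.to(device)` (or equivalent) somewhere in the model's loading path, which is the kind of regression a transformers upgrade can expose in remote-code models like this one.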

@ywang96 ywang96 removed the request for review from DarkLight1337 April 7, 2025 03:29
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) April 7, 2025 03:30
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Apr 7, 2025
@DarkLight1337 DarkLight1337 merged commit bb8dab8 into main Apr 7, 2025
33 of 35 checks passed
@DarkLight1337 DarkLight1337 deleted the skip-ultravox branch April 7, 2025 04:37
lengrongfu pushed a commit to lengrongfu/vllm that referenced this pull request Apr 7, 2025
@farzadab
Contributor

farzadab commented Apr 9, 2025

Thanks for reporting. This should be fixed now.

@DarkLight1337
Member

Thanks, I have opened #16381 to re-enable the test.

yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025