Conversation

russellb (Member)

The previous default was xgrammar and users had to opt in to fallback
behavior. After more thought, auto seems like a better default as it
lets us do our best to satisfy all requests. Users can still pin vllm to
a single backend if desired.

Signed-off-by: Russell Bryant [email protected]


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@russellb russellb requested a review from Copilot March 28, 2025 20:23
Copilot AI (Contributor) left a comment

Pull Request Overview

This pull request updates the default backend for guided decoding by setting it conditionally based on the environment variable VLLM_USE_V1. It switches the default from a fixed "xgrammar" to "auto" for V1, while keeping "xgrammar" as the default for V0.

  • Conditional default value for the guided decoding backend is introduced.
  • Help message updated to reflect the conditional default behavior.
Comments suppressed due to low confidence (2)

vllm/engine/arg_utils.py:394

  • The conditional default logic relies on envs.VLLM_USE_V1 being exactly '0' to select 'xgrammar'. Please verify that this condition covers all expected runtime scenarios and that the environment variable is consistently defined.
default="xgrammar" if envs.VLLM_USE_V1 == "0" else "auto",

vllm/engine/arg_utils.py:403

  • [nitpick] Consider clarifying the help message by explicitly mentioning that the default backend depends on the value of envs.VLLM_USE_V1, which may help reduce confusion for users.
'The default is auto for V1 and xgrammar for V0.')
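For reference, here is a minimal sketch of how such a conditional default could be wired into a CLI parser. The parser setup is an assumption for illustration; only the `default=` expression and the general help wording come from the diff, and the string comparison is exactly the edge case flagged above.

```python
import argparse

import vllm.envs as envs

parser = argparse.ArgumentParser()
parser.add_argument(
    "--guided-decoding-backend",
    type=str,
    # Selects "xgrammar" only when VLLM_USE_V1 evaluates to the string "0";
    # any other value falls through to "auto" (the concern raised above).
    default="xgrammar" if envs.VLLM_USE_V1 == "0" else "auto",
    help="Backend for guided decoding (JSON schema, regex, etc.). "
         "The default is 'auto' for V1 and 'xgrammar' for V0.")
```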

@russellb russellb force-pushed the v1-structured-output-default-auto branch from d33b337 to 935b870 on March 28, 2025 21:39

mergify bot commented Mar 28, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @russellb.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

hmellor (Member) left a comment

This would only change the default behaviour if vLLM is started via CLI. The default for LLM would still be xgrammar.

This is a bit of a general gotcha with the way we define defaults. They can either be defined in:

  • SomethingConfig - as a default in the dataclass
  • EngineArgs - as a default in this dataclass
  • EngineArgs.add_cli_args - as a default value for an argparse argument

It's not something for this PR, but we should probably decide where all defaults should live and do a little refactor.
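To make the three locations concrete, here is a deliberately simplified sketch (not the actual vLLM classes) of how the same default can end up defined in all three places:

```python
from dataclasses import dataclass
import argparse


@dataclass
class DecodingConfig:
    # 1. The config dataclass carries its own default.
    guided_decoding_backend: str = "xgrammar"


@dataclass
class EngineArgs:
    # 2. The engine-args dataclass repeats the default.
    guided_decoding_backend: str = "xgrammar"

    @staticmethod
    def add_cli_args(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
        # 3. The argparse default is defined a third time; changing only this
        #    one affects the CLI but not LLM(...) constructed from Python.
        parser.add_argument("--guided-decoding-backend", default="auto")
        return parser
```

Changing only the argparse default (3) is exactly the gotcha described above: CLI users see the new default while offline `LLM` usage still picks up (1) and (2).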

russellb (Member, Author) commented Apr 8, 2025

@hmellor Thank you! Great catch. 100% agree with the need to refactor this ... very confusing

@russellb russellb force-pushed the v1-structured-output-default-auto branch from fc4865a to 411dfe9 on April 8, 2025 18:43
@russellb russellb requested a review from hmellor April 8, 2025 18:43
@russellb russellb requested a review from mgoin as a code owner April 8, 2025 19:21
vllm/config.py (outdated), comment on lines 2891 to 2893
Member:

Oh and instead of the comment you could type hint it with Literal["auto", "outlines", "lm-format-enforcer", "xgrammar"]. It makes the line longer, but it's very IDE friendly.

Member Author:

It's tricky because the valid set of values is different between V0 and V1.
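For illustration, the suggested type hint would look roughly like this (the field placement is assumed); as noted, a single Literal cannot exactly capture both engines when V0 and V1 accept different sets of values:

```python
from typing import Literal

# IDE-friendly alternative to a code comment listing the valid values.
GuidedDecodingBackend = Literal["auto", "outlines", "lm-format-enforcer",
                                "xgrammar"]

guided_decoding_backend: GuidedDecodingBackend = "auto"
```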

hmellor (Member) commented Apr 9, 2025

I've made #16332 which begins to address our config code. It only covers ParallelConfig for now, but once we settle on a design, it can easily be extended to the other config classes.

The previous default was `xgrammar` and users had to opt in to fallback
behavior. After more thought, `auto` seems like a better default as it
lets us do our best to satisfy all requests. Users can still pin vllm to
a single backend if desired.

Make `auto` work for V0 in case it gets specified there, as well.

Signed-off-by: Russell Bryant <[email protected]>
@russellb russellb force-pushed the v1-structured-output-default-auto branch from c048682 to 901d705 on April 9, 2025 17:47
@russellb russellb requested a review from hmellor April 9, 2025 17:47
hmellor (Member) left a comment

LGTM!

@hmellor hmellor enabled auto-merge (squash) April 10, 2025 11:14
@github-actions github-actions bot added the `ready` label (ONLY add when PR is ready to merge / full CI is needed) Apr 10, 2025
These tests now run against V1 in CI, but the code was originally
written with V0 in mind. In particular, specifying the backend
through the OpenAI API is not supported with V1, so we can remove
it and speed up the tests quite a bit. Different backends are tested in
another place (via the llm entrypoint tests).

Signed-off-by: Russell Bryant <[email protected]>
@hmellor hmellor merged commit 9665313 into vllm-project:main Apr 10, 2025
46 checks passed
alesalloum commented:

Hey, this broke my offline inference scripts with structured outputs which worked perfectly in the last release 0.8.3. I tried to switch the backend to 'auto', but I still get the same error. How should one adapt their script for this change? Cheers.

ValueError: Request-level structured output backend must match engine-level backend. xgrammar != auto

hmellor (Member) commented Apr 16, 2025

That error indicates that the engine wants to use auto (the default set in this PR) but your calls to generate are setting it to xgrammar.

Can you try not setting the backend in LLM.generate?

If this doesn't work, please provide a minimal script so that I can try and reproduce the behaviour.
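For anyone hitting the same error, here is a minimal sketch of the pattern being suggested (model name and prompt are placeholders): leave the backend unset in the request-level parameters and let the engine-level setting decide.

```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

# Engine-level backend stays at the new default, "auto".
llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")

# No `backend=` here: on V1, a request-level backend that differs from the
# engine-level one raises the "must match engine-level backend" error above.
guided = GuidedDecodingParams(choice=["yes", "no"])
params = SamplingParams(guided_decoding=guided)

outputs = llm.generate(["Is the sky blue? Answer yes or no."],
                       sampling_params=params)
print(outputs[0].outputs[0].text)
```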

DarkLight1337 (Member) commented:

I think the problem here is that "auto" should not prevent users from setting specific backends.

hmellor (Member) commented Apr 16, 2025

The error @alesalloum sees does not come from this PR; it was added in #14694. In that PR the decision was made that users cannot use arbitrary backends at runtime.

At startup they either leave it as auto and let vLLM decide which backend to use, or they pass a non-auto value and vLLM will always use that backend. If the user tries to set the backend at runtime they will see the error added in the PR linked above.
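In other words, the choice happens once, at startup. A short sketch of the two options (the model name is a placeholder, and the `guided_decoding_backend` keyword is assumed to be forwarded to the engine args):

```python
from vllm import LLM

# Option 1: keep the new default "auto" and let vLLM pick a backend that can
# satisfy each structured-output request.
llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")

# Option 2: pin one backend at startup; vLLM will always use it, and a request
# asking for a different backend is rejected with the error quoted above.
llm_pinned = LLM(model="Qwen/Qwen2.5-1.5B-Instruct",
                 guided_decoding_backend="xgrammar")
```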

DarkLight1337 (Member) commented:

Oh I see. Thanks for the explanation. I think we should have a clearer error message to notify users that they are not supposed to set the backend, and add a deprecation warning for those who are currently setting the backend (even if it's the same as the one on startup).

hmellor (Member) commented Apr 16, 2025

Ok, I can make a PR for that.

Should we lower the raise to a warn-and-ignore with more explanation?

hmellor (Member) commented Apr 16, 2025

Here's the PR #16717

yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Labels: ready (ONLY add when PR is ready to merge / full CI is needed), structured-output

5 participants