
Conversation

DarkLight1337
Member

This enables a bunch of tests to be run in V1

DarkLight1337 added the ready label (ONLY add when PR is ready to merge/full CI is needed) on May 21, 2025
DarkLight1337 changed the title from "[Core] Enable apply_model to be run on V1" to "[Core] Support LLM.apply_model in V1" on May 21, 2025
mergify bot added the frontend, multi-modality (Related to multi-modality (#4194)), and v1 labels on May 21, 2025
DarkLight1337 changed the title from "[Core] Support LLM.apply_model in V1" to "[V1] Support LLM.apply_model" on May 21, 2025
@DarkLight1337
Member Author

@youkaichao can you review this?

Comment on lines -132 to -137
def rpc_func(worker: WorkerBase) -> _R:
    return func(worker.get_model())
Member Author


This part has been moved inside WorkerBase
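
For context, a minimal sketch of roughly what this looks like once it lives on the worker; the method name and signature here are assumed for illustration and may not match the PR exactly:

class WorkerBase:
    ...

    def apply_model(self, func):
        # Run func directly on this worker's underlying nn.Module and
        # return its result, instead of wrapping the call at the executor.
        return func(self.get_model())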

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@DarkLight1337
Member Author

And also @mgoin since this touches the quantization tests

Member

@youkaichao youkaichao left a comment


LGTM in general

@DarkLight1337
Member Author

DarkLight1337 commented May 21, 2025

@njhill I'm unable to get tests/models/multimodal/generation/test_qwen2_vl.py::test_qwen2_vl_multiple_image_embeddings_input[10-128-half-size_factors1-Qwen/Qwen2-VL-2B-Instruct] to pass - the output of apply_model only has the tensor's dtype and shape rather than the tensor data. I think this is related to the msgspec encoding/decoding logic. Can you help take a look?
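
(For anyone hitting the same thing, a hypothetical sketch of the pattern involved, not the actual test code: a function passed to apply_model that returns a tensor has its result serialized back from the worker process, which appears to be where the data gets lost; reducing to plain Python values before returning avoids the round trip.)

def get_first_param(model):
    # Returning a tensor means it crosses the process boundary, where the
    # msgspec-based encoding seems to preserve only its dtype and shape.
    return next(model.parameters())

def get_first_param_sum(model):
    # Workaround sketch: reduce to a plain Python value before returning.
    return float(next(model.parameters()).float().sum())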

Member

@mgoin mgoin left a comment


This looks reasonable to me. A section in the docs or an example script would be useful to demonstrate the interface.
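
For illustration, a minimal usage sketch of the interface being discussed; the model name is arbitrary and the function runs against the underlying nn.Module inside each worker:

from vllm import LLM

llm = LLM(model="facebook/opt-125m")

def count_parameters(model):
    # Executed on the worker side against the loaded torch model.
    return sum(p.numel() for p in model.parameters())

# apply_model collects the function's return value(s) from the worker(s).
print(llm.apply_model(count_parameters))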

@mergify

mergify bot commented Aug 26, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @DarkLight1337.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label on Aug 26, 2025
@patrick-toulme
Contributor

Why has this not been merged? vLLM currently has no easy way to access the underlying model, which is a rather basic feature.

@DarkLight1337
Member Author

There is an issue with the msgspec serialization that needs to be fixed by @njhill before this PR can be merged.

Member Author

@DarkLight1337 DarkLight1337 left a comment


cc @WoosukKwon heads-up that we need to merge this PR before V0 can be removed entirely, in order to keep the quantization tests

@njhill
Member

njhill commented Sep 19, 2025

@DarkLight1337 OK I finally got to this! #25294

aarnphm merged commit 3d9a1d2 into vllm-project:main on Sep 20, 2025
50 checks passed
DarkLight1337 deleted the apply-model-v1 branch on September 20, 2025 at 07:15
@fxmarty-amd
Contributor

@DarkLight1337 are you familiar with errors like: tests/kernels/moe/test_ocp_mx_moe.py::test_mxfp4_loading_and_execution_moe[model_case0] - Exception: Call to collective_rpc method failed: Can't get local object 'test_mxfp4_loading_and_execution_moe.<locals>.check_model'? How should we fix this?

@fxmarty-amd
Contributor

e.g. in

with vllm_runner(model_case.model_id,
                 tensor_parallel_size=model_case.tp,
                 load_format="dummy") as llm:

    def check_model(model):
        layer = model.model.layers[0]
        qkv_proj = layer.self_attn.qkv_proj
        assert isinstance(qkv_proj.quant_method, QuarkLinearMethod)
        assert isinstance(qkv_proj.scheme, QuarkW4A4MXFP4)
        assert isinstance(layer.mlp.experts.quant_method,
                          QuarkW4A4MXFp4MoEMethod)

    if model_case.model_id == "fxmarty/qwen_1.5-moe-a2.7b-mxfp4":
        llm.apply_model(check_model)

@DarkLight1337
Member Author

Try moving the imports inside the inner function
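
That is, something along these lines (a sketch of the suggestion rather than a verified fix; the idea is that the function shipped to the workers should not depend on module-level names, and the import path below is indicative):

def check_model(model):
    # Imports moved inside the function body so that serializing check_model
    # for collective_rpc does not require the enclosing module's globals.
    from vllm.model_executor.layers.quantization.quark.quark import (
        QuarkLinearMethod)

    layer = model.model.layers[0]
    assert isinstance(layer.self_attn.qkv_proj.quant_method,
                      QuarkLinearMethod)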

@fxmarty-amd
Contributor

Do you mean the QuarkOCP_MX_MoEMethod, QuarkLinearMethod, QuarkOCP_MX imports? It does not seem to work; I'll disable it for now

@DarkLight1337
Member Author

Can you show the full stack trace of the error?

FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025

Labels

frontend, multi-modality (Related to multi-modality (#4194)), performance (Performance-related issues), qwen (Related to Qwen models), ready (ONLY add when PR is ready to merge/full CI is needed), v1
