[V1] Support LLM.apply_model
#18465
Conversation
Signed-off-by: DarkLight1337 <[email protected]>
This PR enables `LLM.apply_model` to be run in V1.
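For context, `apply_model` lets a caller run a function against the underlying model on every worker. The sketch below mimics that interface with a local stand-in; the `LLM` class here is a toy, not vLLM's real class, and the worker/model shapes are assumptions for illustration only.

```python
class _FakeModel:
    """Toy model stand-in carrying one inspectable attribute."""

    def __init__(self, quant_method: str):
        self.quant_method = quant_method


class LLM:
    """Toy stand-in for vLLM's LLM class (names are assumptions)."""

    def __init__(self, num_workers: int = 2):
        self._models = [_FakeModel("fp8") for _ in range(num_workers)]

    def apply_model(self, func):
        # Real vLLM dispatches `func` to each worker process; this toy
        # simply maps it over local model stand-ins, one per "worker".
        return [func(m) for m in self._models]


llm = LLM()
methods = llm.apply_model(lambda model: model.quant_method)
print(methods)  # one result per worker
```

This is the pattern the quantization tests rely on: probe each worker's model in place instead of reaching into engine internals.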
@youkaichao can you review this?
def rpc_func(worker: WorkerBase) -> _R:
    return func(worker.get_model())
This part has been moved inside WorkerBase
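A minimal sketch of what "moved inside `WorkerBase`" means: the worker itself resolves its model and applies the function, so only the function crosses the RPC boundary. The class below is a toy stand-in, not vLLM's real `WorkerBase`; only the shape of `apply_model` mirrors the change.

```python
from typing import Callable, TypeVar

_R = TypeVar("_R")


class _FakeModel:
    def num_params(self) -> int:
        return 42


class WorkerBase:
    """Toy stand-in for vLLM's WorkerBase (an assumption for illustration)."""

    def __init__(self) -> None:
        self._model = _FakeModel()

    def get_model(self) -> _FakeModel:
        return self._model

    def apply_model(self, func: Callable[[_FakeModel], _R]) -> _R:
        # What used to be `rpc_func` on the caller's side now lives here:
        # the worker looks up its own model and applies `func` to it.
        return func(self.get_model())


result = WorkerBase().apply_model(lambda m: m.num_params())
```

Keeping the model lookup on the worker side means the caller never needs a handle to the model object, only to the worker RPC.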
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs will not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
And also @mgoin since this touches the quantization tests
LGTM in general
@njhill I'm unable to get
This looks reasonable to me. A section in the docs or an example script would be useful to demonstrate the interface.
This pull request has merge conflicts that must be resolved before it can be merged.
Why has this not been merged? vLLM has no way right now to easily access the underlying model. That is a rather basic feature.
There is an issue with the msgspec serialization that needs to be fixed by @njhill before this PR can be merged.
cc @WoosukKwon heads-up that we need to merge this PR before V0 can be removed entirely, in order to keep the quantization tests
@DarkLight1337 OK I finally got to this! #25294
@DarkLight1337 are you familiar with errors such as:
e.g. in vllm/tests/kernels/moe/test_mxfp4_moe.py (lines 63 to 79 in d83f3f7)
Try moving the imports inside the inner function
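The suggestion above can be sketched as follows: when the probe function has to cross a serialization boundary on its way to the worker, moving imports into the function body defers them until the function actually runs, sidestepping import failures at serialization time. `pickle` and `json` below are stand-ins for the real RPC serializer and the problematic import.

```python
import pickle


def probe(model):
    # The import happens when the function RUNS on the worker, not when
    # the function object is serialized on the driver. `json` stands in
    # for a heavyweight or circularly-imported module.
    import json
    return json.dumps({"layers": model["layers"]})


# Round-trip through pickle, as an RPC layer would:
restored = pickle.loads(pickle.dumps(probe))
out = restored({"layers": 12})
```

If the import sat at module top level instead, it would have to succeed in every process that merely defines the function, not just the one that executes it.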
Do you mean
Can you show the full stack trace of the error?
This enables a bunch of tests to be run in V1