[CI]Fix broken CI #1773
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main    #1773       +/-   ##
===========================================
+ Coverage   27.39%   54.48%   +27.09%
===========================================
  Files          56       82       +26
  Lines        6191    10063     +3872
===========================================
+ Hits         1696     5483     +3787
- Misses       4495     4580       +85
from typing_extensions import ParamSpec
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.entrypoints.openai.cli_args import make_arg_parser
from vllm.model_executor.model_loader import get_model_loader
Importing vllm.model_executor.model_loader here leads to a torch infer_schema error. Since prompt_embedding only works in V0, let's remove the related test as a quick fix.
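For illustration only (the actual fix simply removes the test): one way to sidestep the module-level import that triggers the torch infer_schema error is to defer it into the V0-only test. The test name and skip condition below are assumptions, not the actual file contents.

```python
import os

import pytest


@pytest.mark.skipif(os.getenv("VLLM_USE_V1", "1") == "1",
                    reason="prompt_embeds only works with the V0 engine")
def test_prompt_embeds_generation():
    # Deferred import: vllm.model_executor.model_loader is only touched when
    # the V0-only test actually runs, instead of at module import time.
    from vllm.model_executor.model_loader import get_model_loader  # noqa: F401
    ...
```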
Signed-off-by: wangxiyuan <[email protected]>
scheduler.requests[req.request_id] = req
scheduler.running.append(req)
if not vllm_version_is("0.9.2"):
    req.status = RequestStatus.RUNNING
This change is adapted from vllm-project/vllm#20739
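For readers following along, a minimal sketch of how this version guard reads in the test setup (import paths are assumed from the surrounding diff; `add_running_request` is a hypothetical helper, not part of the change):

```python
from vllm.v1.request import RequestStatus
from vllm_ascend.utils import vllm_version_is


def add_running_request(scheduler, req):
    """Register a request with the scheduler as already running."""
    scheduler.requests[req.request_id] = req
    scheduler.running.append(req)
    if not vllm_version_is("0.9.2"):
        # Since vllm-project/vllm#20739, tests set the status themselves;
        # v0.9.2 still handles it inside the scheduler.
        req.status = RequestStatus.RUNNING
```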
from vllm_ascend.utils import adapt_patch  # noqa E402

adapt_patch(True)
adapt_patch(False)
Ensure the torch patch is called.
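A minimal conftest-style sketch of what the two calls above do, assuming the boolean selects the global (True) versus worker-level (False) patch set:

```python
# conftest.py (sketch): apply the patches at collection time so the torch
# patch is in place before any test module imports vLLM internals.
from vllm_ascend.utils import adapt_patch  # noqa: E402

adapt_patch(True)   # global / platform-level patches (assumption)
adapt_patch(False)  # worker-level patches (assumption)
```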
Yikun left a comment
Merge to main to recover CI
This PR fixes the broken CI. It requires vllm-project/vllm#20900 to be merged first.

- vLLM version: v0.9.2
- vLLM main: vllm-project/vllm@e8cc53a

Signed-off-by: wangxiyuan <[email protected]>