Conversation

@youkaichao (Member) commented:
The test checks correctness against vLLM running without CPU offloading.

This test infrastructure can be reused for other tests as well.
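The comment above describes the test pattern: run the same prompts under two engine configurations (with and without CPU offloading) and assert that the outputs match. A minimal sketch of that pattern follows; the helper name, signature, and `cpu_offload_gb` keyword here are illustrative stand-ins, not vLLM's actual test API.

```python
# Hedged sketch of a "compare two settings" correctness check.
# `generate` is any deterministic generation callable (greedy decoding assumed),
# so identical outputs are expected regardless of where the weights live.

def compare_two_settings(generate, prompts, settings_a, settings_b):
    """Run the same prompts under two configurations and assert identical outputs."""
    outputs_a = [generate(p, **settings_a) for p in prompts]
    outputs_b = [generate(p, **settings_b) for p in prompts]
    for prompt, a, b in zip(prompts, outputs_a, outputs_b):
        assert a == b, f"mismatch for {prompt!r}: {a!r} != {b!r}"
    return outputs_a


# Toy stand-in for a model backend: output must not depend on the offload setting.
def fake_generate(prompt, cpu_offload_gb=0):
    return prompt.upper()


if __name__ == "__main__":
    outs = compare_two_settings(
        fake_generate,
        ["hello", "world"],
        settings_a={"cpu_offload_gb": 0},  # baseline: no CPU offloading
        settings_b={"cpu_offload_gb": 4},  # offload part of the weights to CPU
    )
    print(outs)
```

Because the comparison helper is agnostic to what the two settings actually are, the same harness can compare any pair of configurations, which is what makes this infrastructure reusable for other tests.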

@comaniac (Collaborator) left a comment:
LGTM

@youkaichao youkaichao enabled auto-merge (squash) July 18, 2024 21:49
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 18, 2024
@youkaichao youkaichao merged commit f53b8f0 into vllm-project:main Jul 18, 2024
@youkaichao youkaichao deleted the offload_correctness branch July 21, 2024 03:03
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 24, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025

3 participants