
Conversation

vllmellm
Contributor

@vllmellm vllmellm commented Apr 17, 2025

Description

This PR enables AITER's fused Mixture-of-Experts (MoE) ops, found here, to be used with V1.

Implementation

The following ops have been added/modified and registered as custom ops (a registration sketch follows the list):

  1. rocm_aiter_ck_moe
  2. rocm_aiter_fmoe_fp8_blockscale_g1u1
  3. rocm_aiter_asm_moe
  4. rocm_aiter_topk_softmax
  5. rocm_aiter_shuffle_weight
  6. rocm_aiter_asm_moe_tkw1
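
Each of these ops follows the same pattern: a real implementation that lazily imports the AITER kernel, plus a shape-only "fake" implementation so the op can be traced by torch.compile and captured into CUDA graphs. Below is a minimal sketch of that pattern, assuming vLLM's direct_register_custom_op helper and that aiter exposes ck_moe at the top level; argument names mirror the review snippets further down and may differ from the merged code.

import torch

from vllm.utils import direct_register_custom_op


def rocm_aiter_ck_moe_impl(hidden_states: torch.Tensor, w1: torch.Tensor,
                           w2: torch.Tensor, topk_weights: torch.Tensor,
                           topk_ids: torch.Tensor) -> torch.Tensor:
    # Lazy import so that non-ROCm builds never touch the aiter package.
    from aiter import ck_moe
    return ck_moe(hidden_states, w1, w2, topk_weights, topk_ids)


def rocm_aiter_ck_moe_fake(hidden_states: torch.Tensor, w1: torch.Tensor,
                           w2: torch.Tensor, topk_weights: torch.Tensor,
                           topk_ids: torch.Tensor) -> torch.Tensor:
    # Shape-only stand-in used during tracing; the kernel never runs here.
    return torch.empty_like(hidden_states)


direct_register_custom_op(
    op_name="rocm_aiter_ck_moe",
    op_func=rocm_aiter_ck_moe_impl,
    mutates_args=[],
    fake_impl=rocm_aiter_ck_moe_fake,
    dispatch_key="CUDA",
)

# Call sites then dispatch through the registered op:
#   out = torch.ops.vllm.rocm_aiter_ck_moe(hidden_states, w1, w2,
#                                          topk_weights, topk_ids)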

Testing

The integration has been verified through:

  1. High-level integration tests with various models.
  2. Accuracy tests using lm_eval.

Accuracy Test: GSM8K

The following command was used to run lm_eval on these models:

  • Llama-4-Maverick-17B-128E-Instruct
  • Llama-4-Maverick-17B-128E-Instruct-FP8
  • DeepSeek-V3
  • Mixtral-8x7B-Instruct-v0.1
  • Mixtral-8x7B-Instruct-v0.1 (FP8)
VLLM_USE_TRITON_FLASH_ATTN=1 \
VLLM_WORKER_MULTIPROC_METHOD=spawn \
VLLM_ROCM_USE_AITER=0 \
VLLM_ROCM_USE_AITER_RMSNORM=0 \
VLLM_ROCM_USE_AITER_LINEAR=0 \
SAFETENSORS_FAST_GPU=1 \
lm_eval \
--model vllm \
--model_args pretrained=model_name,tensor_parallel_size=8,enforce_eager=False,max_model_len=4096 \
--trust_remote_code \
--tasks gsm8k \
--num_fewshot 5 \
--batch_size auto 

For some models, we set additional environment variables and arguments, as specified below:

Llama-4-Maverick-17B-128E-Instruct:

  • VLLM_USE_V1=1

Llama-4-Maverick-17B-128E-Instruct-FP8:

  • VLLM_USE_V1=1

DeepSeek-V3:

  • VLLM_USE_V1=0

Mixtral-8x7B-Instruct-v0.1:

  • VLLM_USE_V1=1

Mixtral-8x7B-Instruct-v0.1 (FP8):

  • VLLM_USE_V1=1
  • --quantization fp8
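
Putting the pieces together, the Mixtral-8x7B-Instruct-v0.1 (FP8) run presumably looks like the sketch below. The HF model path is illustrative, and we assume the quantization flag is passed through lm_eval's model_args; the exact plumbing may differ.

VLLM_USE_V1=1 \
VLLM_USE_TRITON_FLASH_ATTN=1 \
VLLM_WORKER_MULTIPROC_METHOD=spawn \
VLLM_ROCM_USE_AITER=0 \
VLLM_ROCM_USE_AITER_RMSNORM=0 \
VLLM_ROCM_USE_AITER_LINEAR=0 \
SAFETENSORS_FAST_GPU=1 \
lm_eval \
--model vllm \
--model_args pretrained=mistralai/Mixtral-8x7B-Instruct-v0.1,quantization=fp8,tensor_parallel_size=8,enforce_eager=False,max_model_len=4096 \
--trust_remote_code \
--tasks gsm8k \
--num_fewshot 5 \
--batch_size auto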

The table below shows the lm_eval results:

Model | vLLM version | Tasks | Version | Filter | n-shot | Metric | Value | Stderr
Llama-4-Maverick-17B-128E-Instruct-BF16 | V1 | gsm8k | 3 | flexible-extract | 5 | exact_match | 0.9272 | ± 0.0072
Llama-4-Maverick-17B-128E-Instruct-BF16 | V1 | gsm8k | 3 | strict-match | 5 | exact_match | 0.9272 | ± 0.0072
Llama-4-Maverick-17B-128E-Instruct-FP8 | V1 | gsm8k | 3 | flexible-extract | 5 | exact_match | 0.9234 | ± 0.0073
Llama-4-Maverick-17B-128E-Instruct-FP8 | V1 | gsm8k | 3 | strict-match | 5 | exact_match | 0.9272 | ± 0.0072
DeepSeek-V3 | V0 | gsm8k | 3 | flexible-extract | 5 | exact_match | 0.9454 | ± 0.063
DeepSeek-V3 | V0 | gsm8k | 3 | strict-match | 5 | exact_match | 0.9454 | ± 0.063
Mixtral-8x7B-Instruct-v0.1 | V1 | gsm8k | 3 | flexible-extract | 5 | exact_match | 0.6452 | ± 0.0132
Mixtral-8x7B-Instruct-v0.1 | V1 | gsm8k | 3 | strict-match | 5 | exact_match | 0.6429 | ± 0.0132
Mixtral-8x7B-Instruct-v0.1 (FP8) | V1 | gsm8k | 3 | flexible-extract | 5 | exact_match | 0.5413 | ± 0.0137
Mixtral-8x7B-Instruct-v0.1 (FP8) | V1 | gsm8k | 3 | strict-match | 5 | exact_match | 0.5398 | ± 0.0137

This PR is part of a larger effort to integrate AITER kernels into vLLM for improved performance on the ROCm platform.

Co-authored-by: tjtanaa <[email protected]>
Signed-off-by: vllmellm <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Signed-off-by: vllmellm <[email protected]>
@vllmellm vllmellm marked this pull request as ready for review April 23, 2025 11:07
@hongxiayang
Collaborator

cc @houseroad This enables the AITER kernels' CUDA graph mode for Llama 4 models in V1 for performance.

@hongxiayang hongxiayang added the rocm (Related to AMD ROCm) and v1 labels Apr 23, 2025
def rocm_aiter_ck_moe_fake(hidden_states: torch.Tensor, w1: torch.Tensor,
                           w2: torch.Tensor, topk_weights: torch.Tensor,
                           topk_ids: torch.Tensor) -> torch.Tensor:
    return torch.empty((topk_ids.size(0), hidden_states.size(1)),
                       dtype=hidden_states.dtype,
                       device=hidden_states.device)
Contributor

torch.empty_like(hidden_states)?

Contributor Author

Thanks for pointing this out. We have updated the code accordingly.
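
Presumably the updated fake implementation ends up along these lines (a sketch):

def rocm_aiter_ck_moe_fake(hidden_states: torch.Tensor, w1: torch.Tensor,
                           w2: torch.Tensor, topk_weights: torch.Tensor,
                           topk_ids: torch.Tensor) -> torch.Tensor:
    # The CK MoE output matches the input activations in shape, dtype, and
    # device, so empty_like is all the shape inference needs.
    return torch.empty_like(hidden_states)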

Comment on lines +55 to +72
def rocm_aiter_asm_moe_tkw1_fake(
hidden_states: torch.Tensor,
w1: torch.Tensor,
w2: torch.Tensor,
topk_weight: torch.Tensor,
topk_ids: torch.Tensor,
fc1_scale: Optional[torch.Tensor] = None,
fc2_scale: Optional[torch.Tensor] = None,
fc1_smooth_scale: Optional[torch.Tensor] = None,
fc2_smooth_scale: Optional[torch.Tensor] = None,
a16: bool = False,
per_tensor_quant_scale: Optional[torch.Tensor] = None,
expert_mask: Optional[torch.Tensor] = None,
activation_str: str = "silu") -> torch.Tensor:
return torch.empty_like(hidden_states)


def rocm_aiter_ck_moe_impl(hidden_states: torch.Tensor, w1: torch.Tensor,
Contributor

Ideally, we should have some comments to tell the use case for each kernel, like:

  • asm_moe_tkw1: for w8a8
  • ck_moe: for w16a16

What do you think?
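
For instance, short usage notes next to each registration would capture the mapping suggested here (a sketch covering only the two kernels named above; the pairing for the remaining ops is not confirmed in this thread):

# Per-kernel use cases (sketch, following the reviewer's suggestion):
#   rocm_aiter_asm_moe_tkw1: w8a8 path (8-bit weights and activations)
#   rocm_aiter_ck_moe:       w16a16 path (unquantized fp16/bf16 experts)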

a1_scale: torch.Tensor,
block_shape: List[int],
smooth_scale: Optional[torch.Tensor] = None) -> torch.Tensor:
from aiter.fused_moe_bf16_asm import moe_sorting_ck
Contributor

Sorry, is it possible to just return torch.empty_like(a1, dtype=torch.bf16)? Any reason we need to call moe_sorting_ck in the fake impl?
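
In other words, something like the sketch below, where the parameter list is abridged from the diff and partly hypothetical, with a1 being the quantized activation input:

from typing import List, Optional

import torch


def rocm_aiter_fmoe_fp8_blockscale_g1u1_fake(
        topk_ids: torch.Tensor,
        topk_weights: torch.Tensor,
        a1: torch.Tensor,
        a1_scale: torch.Tensor,
        block_shape: List[int],
        smooth_scale: Optional[torch.Tensor] = None) -> torch.Tensor:
    # The kernel produces bf16 output with the activations' shape, so the
    # fake path can skip moe_sorting_ck entirely.
    return torch.empty_like(a1, dtype=torch.bfloat16)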


def rocm_aiter_shuffle_weight_impl(tensor: torch.Tensor) -> torch.Tensor:
from aiter.ops.shuffle import shuffle_weight
return shuffle_weight(tensor)
Contributor

shuffle_weight is not a pybind kernel, just a normal PyTorch func; do we still need to register it as a custom op? :D

Collaborator

shuffle_weight is not a pybind kernel, just a normal PyTorch func; do we still need to register it as a custom op? :D

good question
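
For context: the usual benefit of registering even a plain PyTorch helper as a custom op is that torch.compile then treats the call as a single opaque node rather than tracing into the external package. Whether that matters here depends on where shuffle_weight runs; weight shuffling typically happens once at load time, outside any compiled region. A sketch of the distinction (call sites hypothetical, assuming the registrations from this PR are imported):

# w = shuffle_weight(w)                             # plain call: Dynamo traces into aiter
# w = torch.ops.vllm.rocm_aiter_shuffle_weight(w)   # custom op: opaque graph node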

a1_scale: Optional[torch.Tensor] = None,
a2_scale: Optional[torch.Tensor] = None,
block_shape: Optional[List[int]] = None,
allow_deep_gemm: bool = False) -> torch.Tensor:
Contributor

is allow_deep_gemm actually used?

Contributor Author

@vllmellm vllmellm Apr 24, 2025

This is just added for mypy.

return tensor


if current_platform.is_rocm():
Collaborator

Should we only register these custom_ops when VLLM_USE_V1=1 for V0 compatibility and performance reasons?

Contributor

The ops registered under direct_register_custom_op are also compatible with V0.

Collaborator

@hongxiayang hongxiayang left a comment

I have verified the code end to end with the Llama 4 FP8 E128 model. Looks good.

Approving this with comments.

@hongxiayang hongxiayang added the ready (ONLY add when PR is ready to merge/full CI is needed) label Apr 24, 2025
@DarkLight1337 DarkLight1337 merged commit eef3647 into vllm-project:main Apr 25, 2025
65 checks passed
gshtras added a commit to ROCm/vllm that referenced this pull request Apr 25, 2025
* [BugFix] Remove default multiproc executor `collective_rpc` timeout (vllm-project#17000)

Signed-off-by: Nick Hill <[email protected]>

* [Core][V1][TPU] Enable structured decoding on TPU V1 (vllm-project#16499)

Signed-off-by: Chenyaaang <[email protected]>

* [Bugfix] validate urls object for multimodal content parts (vllm-project#16990)

Signed-off-by: Guillaume Calmettes <[email protected]>

* add Dockerfile build vllm against torch nightly (vllm-project#16936)

Signed-off-by: Yang Wang <[email protected]>

* [Kernel][ROCM] Upstream prefix prefill speed up for vLLM V1 (vllm-project#13305)

Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]>

* [V1][DP] More robust DP/EP dummy request coordination (vllm-project#16277)

Signed-off-by: Nick Hill <[email protected]>

* [BugFix] Revert ROCm Custom Paged Attention Env Flag Check (vllm-project#17022)

Signed-off-by: vllmellm <[email protected]>

* Revert "[Misc] Add S3 environment variables for better support of MinIO." (vllm-project#17021)

* [misc] tune some env vars for GB200 (vllm-project#16992)

Signed-off-by: youkaichao <[email protected]>

* [INTEL-HPU][v0] Port delayed sampling to upstream (vllm-project#16949)

Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: Chendi Xue <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>

* [doc] add download path tips (vllm-project#17013)

Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>

* [Bugfix] Triton FA function takes no keyword arguments (vllm-project#16902)

Signed-off-by: vllmellm <[email protected]>

* [V1] Avoid socket errors during shutdown when requests are in in-flight (vllm-project#16807)

Signed-off-by: Nick Hill <[email protected]>

* [BugFix] llama4 fa3 fix - RuntimeError: scheduler_metadata must have shape (metadata_size) (vllm-project#16998)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Misc] Improve readability of get_open_port function. (vllm-project#17024)

Signed-off-by: gitover22 <[email protected]>

* [Bugfix] Fix AssertionError: skip_special_tokens=False is not supported for Mistral tokenizers (vllm-project#16964)

Signed-off-by: chaunceyjiang <[email protected]>

* [CI] Run v1/test_serial_utils.py in CI (vllm-project#16996)

Signed-off-by: Russell Bryant <[email protected]>

* Mistral-format support for compressed-tensors (vllm-project#16803)

Signed-off-by: mgoin <[email protected]>

* Categorize `tests/kernels/` based on kernel type (vllm-project#16799)

Signed-off-by: mgoin <[email protected]>

* [Doc] Add top anchor and a note to quantization/bitblas.md (vllm-project#17042)

Signed-off-by: windsonsea <[email protected]>

* Ensure that `pid` passed to `kill_process_tree` is `int` for `mypy` (vllm-project#17051)

Signed-off-by: Harry Mellor <[email protected]>

* [CI] Update structured-output label automation (vllm-project#17055)

Signed-off-by: Russell Bryant <[email protected]>

* Improve Transformers backend model loading QoL (vllm-project#17039)

Signed-off-by: Harry Mellor <[email protected]>

* `CacheConfig.block_size` should always be `int` when used (vllm-project#17052)

Signed-off-by: Harry Mellor <[email protected]>

* Use `@property` and private field for `data_parallel_rank_local` (vllm-project#17053)

Signed-off-by: Harry Mellor <[email protected]>

* [Frontend] Support guidance:no-additional-properties for compatibility with xgrammar (vllm-project#15949)

Signed-off-by: Travis Johnson <[email protected]>

* [BugFix][V1] Fix int32 token index overflow when preparing input ids (vllm-project#16806)

* [V1][Spec Decode] Always use argmax for sampling draft tokens  (vllm-project#16899)

Signed-off-by: Woosuk Kwon <[email protected]>

* [CI/Build] workaround for CI build failure (vllm-project#17070)

Signed-off-by: csy1204 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>

* [Quantization]add prefix for commandA quantized model (vllm-project#17017)

* [Minor] Use larger batch sizes for A100/B100/B200/MI300x (vllm-project#17073)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Bugfix] Enable V1 usage stats (vllm-project#16986)

Signed-off-by: mgoin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Nick Hill <[email protected]>

* More informative error when using Transformers backend (vllm-project#16988)

Signed-off-by: Harry Mellor <[email protected]>

* Addendum Fix to support FIPS enabled machines with MD5 hashing (vllm-project#17043)

Signed-off-by: sydarb <[email protected]>

* [Bugfix][Core] add seq_id_to_seq_group clearing to avoid memory leak when s… (vllm-project#16472)

Signed-off-by: 开哲 <[email protected]>
Co-authored-by: 开哲 <[email protected]>

* [V1] Update structured output (vllm-project#16812)

Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>

* [doc] update to hyperlink (vllm-project#17096)

Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>

* Add docs for runai_streamer_sharded (vllm-project#17093)

Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [Chore] Remove Sampler from Model Code (vllm-project#17084)

Signed-off-by: Woosuk Kwon <[email protected]>

* Disable enforce_eager for V1 TPU sampler and structured output tests (vllm-project#17016)

Signed-off-by: mgoin <[email protected]>

* Simplify `TokenizerGroup` (vllm-project#16790)

Signed-off-by: Harry Mellor <[email protected]>

* Fix OOT registration test (vllm-project#17099)

Signed-off-by: Harry Mellor <[email protected]>

* [V1][PP] Optimization: continue scheduling prefill chunks (vllm-project#17080)

Signed-off-by: Rui Qiao <[email protected]>

* [Misc] Remove OLMo2 config copy (vllm-project#17066)

Signed-off-by: Isotr0py <[email protected]>

* Improve static type checking in `LoRAModelRunnerMixin` (vllm-project#17104)

Signed-off-by: Harry Mellor <[email protected]>

* [V1][Structured Output] Clear xgrammar compiler object when engine core shut down to avoid nanobind leaked warning (vllm-project#16954)

Signed-off-by: shen-shanshan <[email protected]>

* [Frontend] Using matryoshka_dimensions control the allowed output dimensions. (vllm-project#16970)

* Add missing rocm_skinny_gemms kernel test to CI (vllm-project#17060)

Signed-off-by: mgoin <[email protected]>

* [Misc] refactor example series - structured outputs (vllm-project#17040)

Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>

* [V1][Spec Decoding] Add num_drafts and num_accepted_tokens_per_position metrics (vllm-project#16665)

Signed-off-by: Mark McLoughlin <[email protected]>

* [CI] Add automation for the `tool-calling` github label (vllm-project#17118)

Signed-off-by: Russell Bryant <[email protected]>

* Updating builkite job for IBM Power  (vllm-project#17111)

Signed-off-by: Aaruni Aggarwal <[email protected]>

* existing torch installation pip command fix for docs (vllm-project#17059)

* Molmo Requirements (vllm-project#17026)

Signed-off-by: Eyshika Agarwal <[email protected]>
Signed-off-by: eyshika <[email protected]>

* Add `:markdownhelp:` to `EngineArgs` docs so markdown docstrings render properly (vllm-project#17124)

Signed-off-by: Harry Mellor <[email protected]>

* Improve configs - `LoRAConfig` + `PromptAdapterConfig` (vllm-project#16980)

Signed-off-by: Harry Mellor <[email protected]>

* [Docs] Generate correct github links for decorated functions (vllm-project#17125)

Signed-off-by: Russell Bryant <[email protected]>

* Add collective_rpc to llm engine (vllm-project#16999)

Signed-off-by: Yinghai Lu <[email protected]>

* Add chat template for Llama 4 models (vllm-project#16428)

Signed-off-by: Max de Bayser <[email protected]>

* [Misc] Add example to run DeepSeek with Ray Serve LLM (vllm-project#17134)

Signed-off-by: Rui Qiao <[email protected]>

* Better error message for missing mistral params.json (vllm-project#17132)

Signed-off-by: mgoin <[email protected]>

* Use custom address for listening socket (vllm-project#15988)

Signed-off-by: Jens Glaser <[email protected]>

* [FEAT] [ROCm]: AITER Fused MOE V1 Support (vllm-project#16752)

Signed-off-by: vllmellm <[email protected]>
Co-authored-by: tjtanaa <[email protected]>

* [Attention] FA3 decode perf improvement - single mma warp group support for head dim 128 (vllm-project#16864)

Signed-off-by: Lucas Wilkinson <[email protected]>

* fix float16 support for kimi-vl (vllm-project#17156)

Co-authored-by: zhouzaida <[email protected]>

* [Doc] V1 : Update LoRA status (vllm-project#17133)

Signed-off-by: varun sundar rabindranath <[email protected]>
Co-authored-by: varun sundar rabindranath <[email protected]>

* [Docs] Fix True->true in supported_models.md (vllm-project#17141)

* Move missed `SchedulerConfig` args into scheduler config group in `EngineArgs` (vllm-project#17131)

Signed-off-by: Harry Mellor <[email protected]>

* [Misc] Clean up redundant code in uniproc_executor.py (vllm-project#16762)

Signed-off-by: Lifu Huang <[email protected]>

* [Bugfix][Misc] Use TritonPlaceholderModule to defensively import triton (vllm-project#15099)

Signed-off-by: Mengqing Cao <[email protected]>

* [Misc] Benchmark Serving Script Support Appending Results (vllm-project#17028)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Perf]Optimize rotary_emb implementation to use Triton operator for improved inference performance (vllm-project#16457)

Signed-off-by: cynthieye <[email protected]>
Co-authored-by: MagnetoWang <[email protected]>

* [Bugfix] remove fallback in guided_json (int range, patterns) (vllm-project#16725)

Signed-off-by: csy1204 <[email protected]>
Co-authored-by: 조상연[플레이스 AI] <[email protected]>

* [Quantization][FP8] Add support for FP8 models with input_scale for output projection and QK quantization (vllm-project#15734)

Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Luka Govedič <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>

* [Doc] Add headings to improve gptqmodel.md (vllm-project#17164)

Signed-off-by: windsonsea <[email protected]>

* Only turn on FastIncrementalDetokenizer when tokenizers >= 0.21.1 (vllm-project#17158)

* [Doc] Add two links to disagg_prefill.md (vllm-project#17168)

Signed-off-by: windsonsea <[email protected]>

* [Doc] Move todo out of beam search docstring (vllm-project#17183)

Signed-off-by: Alex-Brooks <[email protected]>

* [Bugfix] Fix mistral model tests (vllm-project#17181)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix] Fix Mistral ChatCompletionRequest Body Exception (vllm-project#16769)

Signed-off-by: Jasmond Loh <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* Fix API typo and remove FP8 on V1 restriction

---------

Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Chenyaaang <[email protected]>
Signed-off-by: Guillaume Calmettes <[email protected]>
Signed-off-by: Yang Wang <[email protected]>
Signed-off-by: Sage Moore <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: Aleksandr Malyshev <[email protected]>
Signed-off-by: root <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: <>
Signed-off-by: vllmellm <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: Chendi Xue <[email protected]>
Signed-off-by: reidliu41 <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: gitover22 <[email protected]>
Signed-off-by: chaunceyjiang <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: windsonsea <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: csy1204 <[email protected]>
Signed-off-by: sydarb <[email protected]>
Signed-off-by: 开哲 <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: shen-shanshan <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Aaruni Aggarwal <[email protected]>
Signed-off-by: Eyshika Agarwal <[email protected]>
Signed-off-by: eyshika <[email protected]>
Signed-off-by: Yinghai Lu <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Jens Glaser <[email protected]>
Signed-off-by: varun sundar rabindranath <[email protected]>
Signed-off-by: Lifu Huang <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: cynthieye <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Luka Govedič <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Jasmond Loh <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Chenyaaang <[email protected]>
Co-authored-by: Guillaume Calmettes <[email protected]>
Co-authored-by: Yang Wang <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: Sage Moore <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: qli88 <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Chendi.Xue <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: Reid <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: huafeng <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Michael Yao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Yong Hoon Shin <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Sangyeon Cho <[email protected]>
Co-authored-by: Chen Xia <[email protected]>
Co-authored-by: Areeb Syed <[email protected]>
Co-authored-by: 张宇 <[email protected]>
Co-authored-by: 开哲 <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: wang.yuqi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Aaruni Aggarwal <[email protected]>
Co-authored-by: Atilla <[email protected]>
Co-authored-by: Eyshika Agarwal <[email protected]>
Co-authored-by: Yinghai Lu <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: jglaser <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
Co-authored-by: Zaida Zhou <[email protected]>
Co-authored-by: zhouzaida <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: varun sundar rabindranath <[email protected]>
Co-authored-by: Lifu Huang <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: yexin(叶鑫) <[email protected]>
Co-authored-by: MagnetoWang <[email protected]>
Co-authored-by: 조상연[플레이스 AI] <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Jasmond L <[email protected]>
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
adobrzyn pushed a commit to HabanaAI/vllm-fork that referenced this pull request Apr 30, 2025
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
Signed-off-by: Agata Dobrzyniewicz <[email protected]>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: tjtanaa <[email protected]>
Signed-off-by: Mu Huai <[email protected]>