
Conversation

@yeqcharlotte (Owner) commented Jun 16, 2025

Purpose

Ran into the following error when starting EP on Maverick:

(VllmWorker rank=3 pid=1737537) ERROR 06-15 22:58:28 [multiproc_executor.py:527]     run_cutlass_moe_fp8(output, hidden_states, w1, w2, topk_ids,
(VllmWorker rank=3 pid=1737537) ERROR 06-15 22:58:28 [multiproc_executor.py:527]   File "/home/yeq/gitrepos/vllm/vllm/model_executor/layers/fused_moe/cutlass_moe.py", line 89, in run_cutlass_moe_fp8
(VllmWorker rank=3 pid=1737537) ERROR 06-15 22:58:28 [multiproc_executor.py:527]     local_topk_ids = torch.where(expert_map[topk_ids] != -1,
(VllmWorker rank=3 pid=1737537) ERROR 06-15 22:58:28 [multiproc_executor.py:527]                                  ~~~~~~~~~~^^^^^^^^^^
(VllmWorker rank=3 pid=1737537) ERROR 06-15 22:58:28 [multiproc_executor.py:527] IndexError: tensors used as indices must be long, int, byte or bool tensors

In the PPLX implementation (vllm-project#18762), the topk_ids dtype was flipped to uint32, while custom_routing_function already flips it to int32: https://github.com/vllm-project/vllm/blob/a77aea59fd2f0300160dee6fff2e359f572d7f57/vllm/model_executor/models/llama4.py#L57.

Actually, I'm not sure this is a good idea; it seems to have perf issues.
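
To make the dtype mismatch concrete, here is a minimal standalone sketch of the failing index pattern from run_cutlass_moe_fp8 and the kind of cast that avoids it. The shapes and values are made up (not the actual vLLM code), and it assumes a PyTorch build that has torch.uint32 (>= 2.3):

```python
import torch

# Hypothetical repro with made-up shapes/values, not the vLLM implementation.
# expert_map maps global expert ids to local ids; -1 marks experts not on this rank.
expert_map = torch.full((128,), -1, dtype=torch.int32)
expert_map[:16] = torch.arange(16, dtype=torch.int32)

# Routing output; the PPLX path produces this as uint32 (torch.uint32 needs torch >= 2.3).
topk_ids = torch.tensor([[3, 70], [12, 99]]).to(torch.uint32)

# expert_map[topk_ids] raises:
#   IndexError: tensors used as indices must be long, int, byte or bool tensors
# Casting to a supported index dtype first avoids the error:
idx = topk_ids.to(torch.long)
local_topk_ids = torch.where(expert_map[idx] != -1, expert_map[idx], -1)
print(local_topk_ids)
```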

Test Plan

vllm serve meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 \
        --max_model_len 8192 \
        --kv_cache_dtype fp8 \
        --enable-expert-parallel \
        --tensor-parallel-size 8 \
        --trust-remote-code \
        --enforce_eager \
        --gpu-memory-utilization 0.8 \
        --disable-log-requests 2>&1 | tee .env/ep_`date +%Y%m%d_%H%M%S`.log

Test Result

(Optional) Documentation Update

Signed-off-by: Ye (Charlotte) Qi <[email protected]>
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀
