
Conversation

cyang49 (Contributor) commented Apr 16, 2025

This is a small patch to the Mamba2 seq_idx metadata computation on the prefill path. The original implementation uses a sequential for loop on the CPU, and the number of CPU-GPU synchronizations grows with the batch size. The patch avoids the sequential host-side loop and also makes the code easier to read. Verified no performance regression, and observed a small 1.7% throughput increase with benchmark_serving on ShareGPT V3.
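The idea can be sketched as follows. This is an illustration, not the actual vLLM code: the function names are hypothetical, and it assumes `query_start_loc` holds cumulative per-sequence token offsets (starting at 0), as is conventional for varlen attention metadata.

```python
import torch


def seq_idx_loop(query_start_loc: torch.Tensor) -> torch.Tensor:
    # Original style: a sequential host loop, one slice write per
    # sequence in the batch.
    total = int(query_start_loc[-1])
    seq_idx = torch.empty(total, dtype=torch.int32)
    for i in range(len(query_start_loc) - 1):
        seq_idx[query_start_loc[i]:query_start_loc[i + 1]] = i
    return seq_idx


def seq_idx_vectorized(query_start_loc: torch.Tensor) -> torch.Tensor:
    # Patched style: recover per-sequence lengths with torch.diff,
    # then expand sequence indices in a single vectorized op.
    lengths = torch.diff(query_start_loc)
    return torch.repeat_interleave(
        torch.arange(len(lengths), dtype=torch.int32), lengths)


# Three sequences of lengths 3, 2, and 4.
qsl = torch.tensor([0, 3, 5, 9])
assert torch.equal(seq_idx_loop(qsl), seq_idx_vectorized(qsl))
```

Both produce `[0, 0, 0, 1, 1, 2, 2, 2, 2]`; the vectorized form issues a fixed number of kernel launches regardless of batch size.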

python benchmarks/benchmark_serving.py --model ibm-ai-platform/Bamba-9B \
--dataset-name sharegpt --dataset-path ShareGPT_V3/ShareGPT_V3_unfiltered_cleaned_split.json \
--ignore-eos --port 9999

This PR

============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  183.30    
Total input tokens:                      215201    
Total generated tokens:                  198343    
Request throughput (req/s):              5.46      
Output token throughput (tok/s):         1082.07   
Total Token throughput (tok/s):          2256.11   
---------------Time to First Token----------------
Mean TTFT (ms):                          68477.18  
Median TTFT (ms):                        61241.18  
P99 TTFT (ms):                           169190.08 
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          250.29    
Median TPOT (ms):                        262.61    
P99 TPOT (ms):                           387.52    
---------------Inter-token Latency----------------
Mean ITL (ms):                           216.47    
Median ITL (ms):                         332.63    
P99 ITL (ms):                            421.78    
==================================================

Main

============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  186.57    
Total input tokens:                      215201    
Total generated tokens:                  198343    
Request throughput (req/s):              5.36      
Output token throughput (tok/s):         1063.08   
Total Token throughput (tok/s):          2216.51   
---------------Time to First Token----------------
Mean TTFT (ms):                          69839.75  
Median TTFT (ms):                        62396.53  
P99 TTFT (ms):                           172499.14 
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          255.39    
Median TPOT (ms):                        268.28    
P99 TPOT (ms):                           395.79    
---------------Inter-token Latency----------------
Mean ITL (ms):                           220.56    
Median ITL (ms):                         332.56    
P99 ITL (ms):                            429.83    
==================================================

lm_eval confirmed no change on quality

lm_eval --model vllm \
  --model_args pretrained=ibm-ai-platform/Bamba-9B,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9 \
  --batch_size auto --trust_remote_code --cache_requests true --tasks gsm8k
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.2487|±  |0.0119|
|     |       |strict-match    |     5|exact_match|↑  |0.3563|±  |0.0132|

github-actions commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs will not trigger a full CI run by default. Instead, only the fastcheck CI runs, which starts with a small, essential subset of CI tests to quickly catch errors. You can run further CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

cyang49 (Contributor, Author) commented Apr 17, 2025

Attempting to use a sliced query_start_loc directly as cu_seqlens, to remove an unnecessary torch.diff(), resulted in a slight slowdown. I'm not sure what's causing this.
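For context, a minimal sketch of what that attempted change amounts to (variable names are illustrative, not the actual vLLM code): since `query_start_loc` already holds cumulative offsets, it can in principle be reused as `cu_seqlens` instead of being round-tripped through diff and cumsum.

```python
import torch

# query_start_loc already holds cumulative token offsets per sequence.
query_start_loc = torch.tensor([0, 3, 5, 9])

# Round-trip variant: recover per-sequence lengths, then re-accumulate.
lengths = torch.diff(query_start_loc)
cu_seqlens_roundtrip = torch.cat(
    [torch.zeros(1, dtype=lengths.dtype), torch.cumsum(lengths, dim=0)])

# Direct variant: reuse query_start_loc as-is, skipping diff/cumsum.
cu_seqlens_direct = query_start_loc

assert torch.equal(cu_seqlens_roundtrip, cu_seqlens_direct)
```

The two tensors are identical, so the direct variant is mathematically equivalent; the observed slowdown presumably comes from something else (e.g. downstream kernel assumptions or tensor aliasing), not from the values themselves.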

cyang49 (Contributor, Author) commented Apr 22, 2025

Closing; the changes were added to PR #16942.

@cyang49 cyang49 closed this Apr 22, 2025
@cyang49 cyang49 deleted the pr_mamba2_simplify_seq_idx branch April 22, 2025 13:32