
Conversation

@LucasWilkinson LucasWilkinson commented Mar 20, 2025

Now that FA3 FP8 support has landed (#14570), we can enable FP8 KV caches for Hopper devices in V1.

This PR updates the oracle to allow this, plus some basic refactors.
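For context, the check being relaxed is roughly of the following shape: the V1 engine validates the requested KV-cache dtype against the available attention kernels and GPU generation before starting up. The sketch below is illustrative only; the function and parameter names (supports_fp8_kv_cache_v1, a compute-capability tuple) are assumptions and do not mirror vLLM's actual internals, where this gating lives in the V1 support oracle.

# Illustrative sketch of the kind of gating this PR relaxes; names are
# hypothetical and not vLLM's real API.
def supports_fp8_kv_cache_v1(kv_cache_dtype: str,
                             device_capability: tuple[int, int]) -> bool:
    """Can the V1 engine serve this KV-cache dtype on this GPU?"""
    if not kv_cache_dtype.startswith("fp8"):
        return True  # unquantized KV caches were already supported in V1
    # The FP8 attention kernels come from FlashAttention 3 (#14570), which
    # targets Hopper (compute capability 9.x), so gate on that.
    major, _minor = device_capability
    return major >= 9


if __name__ == "__main__":
    assert supports_fp8_kv_cache_v1("fp8", (9, 0))      # H100: allowed
    assert not supports_fp8_kv_cache_v1("fp8", (8, 0))  # A100: still rejected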

Test script

# SPDX-License-Identifier: Apache-2.0

from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

if __name__ == '__main__':
    # Create an LLM.
    llm = LLM(model="meta-llama/Llama-3.2-1B-Instruct", kv_cache_dtype="fp8")
    # Generate texts from the prompts. The output is a list of RequestOutput objects
    # that contain the prompt, generated text, and other information.
    outputs = llm.generate(prompts, sampling_params)
    # Print the outputs.
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Results:

(vllm) lwilkinson@beaker:~/code/vllm$ python examples/offline_inference/basic/basic.py 
....
Prompt: 'Hello, my name is', Generated text: " Rachel and I'm a software developer. I'm new to the tech industry and"
Prompt: 'The president of the United States is', Generated text: ' the head of state and government of the United States. The president is elected by'
Prompt: 'The capital of France is', Generated text: ' Paris. You can visit the Eiffel Tower, the Louvre, and'
Prompt: 'The future of AI is', Generated text: ' a complex and multifaceted topic that spans various fields, including philosophy, ethics'
[rank0]:[W320 06:15:16.475614640 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())


(vllm) lwilkinson@beaker:~/code/vllm$ VLLM_USE_V1=0 python examples/offline_inference/basic/basic.py 
....
Prompt: 'Hello, my name is', Generated text: " Alex. I've been spending a lot of time online lately, and I'm"
Prompt: 'The president of the United States is', Generated text: ' elected by the people through the Electoral College system. This system was established by the'
Prompt: 'The capital of France is', Generated text: ' Paris.\nThe capital of Germany is Berlin.\nThe capital of the United States is'
Prompt: 'The future of AI is', Generated text: " here, and it's making a huge impact on our lives. From intelligent machines"
[rank0]:[W320 06:18:31.628198327 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Mar 20, 2025
@houseroad (Collaborator) left a comment


Looks good to me.


mergify bot commented Mar 21, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @LucasWilkinson.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Mar 21, 2025
@WoosukKwon (Collaborator) commented:

@LucasWilkinson This is amazing! Could you please provide an accuracy benchmark?

@LucasWilkinson (Collaborator, Author) commented:

> @LucasWilkinson This is amazing! Could you please provide an accuracy benchmark?

kv_cache_dtype=auto (baseline):

lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.3-70B-Instruct,tensor_parallel_size=2,dtype=auto,gpu_memory_utilization=0.9,trust_remote_code=True,max_model_len=16384 --task gsm8k --num_fewshot=5 --limit 200
....
INFO 03-22 02:59:18 [kv_cache_utils.py:537] GPU KV cache size: 347,136 tokens
INFO 03-22 02:59:18 [kv_cache_utils.py:540] Maximum concurrency for 16,384 tokens per request: 21.19x
INFO 03-22 02:59:18 [kv_cache_utils.py:537] GPU KV cache size: 347,136 tokens
INFO 03-22 02:59:18 [kv_cache_utils.py:540] Maximum concurrency for 16,384 tokens per request: 21.19x
....
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.935|±  |0.0175|
|     |       |strict-match    |     5|exact_match|↑  |0.900|±  |0.0213|

kv_cache_dtype=fp8:

lm_eval --model vllm --model_args pretrained=meta-llama/Llama-3.3-70B-Instruct,tensor_parallel_size=2,dtype=auto,gpu_memory_utilization=0.9,trust_remote_code=True,max_model_len=16384,kv_cache_dtype=fp8 --task gsm8k --num_fewshot=5 --limit 200
....
INFO 03-22 03:13:26 [kv_cache_utils.py:537] GPU KV cache size: 694,272 tokens
INFO 03-22 03:13:26 [kv_cache_utils.py:540] Maximum concurrency for 16,384 tokens per request: 42.38x
INFO 03-22 03:13:26 [kv_cache_utils.py:537] GPU KV cache size: 694,272 tokens
INFO 03-22 03:13:26 [kv_cache_utils.py:540] Maximum concurrency for 16,384 tokens per request: 42.38x
....
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.930|±  |0.0181|
|     |       |strict-match    |     5|exact_match|↑  |0.895|±  |0.0217|
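As a sanity check on the logged cache sizes: with the same GPU memory budget, switching the KV cache from the model's native 16-bit dtype (2 bytes per element, assuming bf16 here) to fp8 (1 byte per element) should double the number of cacheable tokens, which matches the 347,136 -> 694,272 jump and the 21.19x -> 42.38x concurrency figures above. A quick back-of-the-envelope check:

# Back-of-the-envelope check of the KV-cache capacities reported above.
# Assumes the "auto" baseline stores KV in a 16-bit dtype (2 bytes/element)
# and the fp8 run uses 1 byte/element, with the same memory budget.
baseline_tokens = 347_136   # from the kv_cache_dtype=auto run
fp8_tokens = 694_272        # from the kv_cache_dtype=fp8 run

bytes_per_element = {"baseline": 2, "fp8": 1}
expected_ratio = bytes_per_element["baseline"] / bytes_per_element["fp8"]

assert fp8_tokens / baseline_tokens == expected_ratio == 2.0
# Maximum concurrency for 16,384-token requests scales the same way:
assert round(21.19 * expected_ratio, 2) == 42.38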

Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
@LucasWilkinson LucasWilkinson force-pushed the lwilkinson/enable-fp8-on-v1 branch from d1f8fa7 to b318d53 on March 22, 2025 03:55
@mergify mergify bot removed the needs-rebase label Mar 22, 2025
Signed-off-by: Lucas Wilkinson <[email protected]>
@LucasWilkinson LucasWilkinson added the ready ONLY add when PR is ready to merge/full CI is needed label Mar 23, 2025
@simon-mo simon-mo merged commit dccf535 into vllm-project:main Mar 23, 2025
47 checks passed

renjie0 commented Mar 25, 2025

What is the expected perf difference?

erictang000 pushed a commit to erictang000/vllm that referenced this pull request Mar 25, 2025
wrmedford pushed a commit to wrmedford/vllm that referenced this pull request Mar 26, 2025
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Wes Medford <[email protected]>
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]>
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Mu Huai <[email protected]>