
Conversation

Contributor

@sfbemerk sfbemerk commented Jul 9, 2025

Purpose / Overview

The chat template examples/tool_chat_template_deepseekr1.jinja for DeepSeek-R1-0528 produces prompts that differ from those rendered with the original chat template in the tokenizer_config.json, even for requests without tool use.

Our expectation is that for requests with no tools and no tool_calls in the messages history, the recommended vllm chat template should be equivalent to the original chat template in the tokenizer_config.json.

The difference in the rendered prompts is purely in whitespace: because examples/tool_chat_template_deepseekr1.jinja is formatted across multiple lines without consistent use of whitespace-stripping jinja blocks like {%- or {{-, stray spaces and newline characters appear in the rendered prompt and result in additional whitespace tokens.
While these additional whitespace tokens are harmless for many use cases, we were surprised to find that for certain requests the response behavior differed between a vllm deployment with and without --chat-template examples/tool_chat_template_deepseekr1.jinja.

Thus, we suggest adding whitespace control to the vllm jinja template so that the rendered prompt for requests without tool use is identical to the one obtained with the original template.
In addition, we suggest cleaning up the excessive whitespace for requests with tools. In our tests, this change yielded a small but significant improvement in BFCL benchmark scores.
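
As a minimal, self-contained sketch of the mechanism (using the jinja2 library directly; the toy template below is invented for illustration and is not the vLLM template): a `-` inside a Jinja block delimiter strips the adjacent whitespace, so multi-line template source stops leaking newlines and indentation into the rendered prompt.

```python
# Standalone demo of Jinja2 whitespace control (pip install jinja2).
from jinja2 import Template

messages = [{"role": "user", "content": "Who are you?"}]

# Multi-line template WITHOUT whitespace control: the newlines and
# indentation between the blocks leak into the rendered prompt.
leaky = Template(
    "{% for m in messages %}\n"
    "    <|User|>{{ m['content'] }}\n"
    "{% endfor %}\n"
    "<|Assistant|>"
)

# The same template WITH `{%-` / `-%}`: adjacent whitespace is stripped,
# so the output is identical to a single-line template.
stripped = Template(
    "{%- for m in messages -%}\n"
    "    <|User|>{{ m['content'] }}\n"
    "{%- endfor -%}\n"
    "<|Assistant|>"
)

print(repr(leaky.render(messages=messages)))
# '\n    <|User|>Who are you?\n\n<|Assistant|>'
print(repr(stripped.render(messages=messages)))
# '<|User|>Who are you?<|Assistant|>'
```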

This PR improves on #18874. Interestingly, there had already been a comment by @wukaixingxp about excessive whitespace, but it had not been addressed in detail.

Tests

Rendered prompt comparison

(1) for requests without tools

a) an example message rendered with the original R1-0528 chat template from its tokenizer_config.json (see https://nebula.packetcoders.io/j2-render/s_4c918b3d/):

<|begin_of_text|>You are a helpful assistant.<|User|>Who are you?<|Assistant|>I am an AI assistant<|end▁of▁sentence|><|User|>What is the weather in Munich?<|Assistant|>

b) an example message rendered with the current tool_chat_template_deepseekr1.jinja (see https://nebula.packetcoders.io/j2-render/s_8354d180/)


<|begin_of_text|>
You are a helpful assistant.<|User|>Who are you?<|Assistant|>            I am an AI assistant<|end▁of▁sentence|><|User|>What is the weather in Munich?<|Assistant|>

c) an example message rendered with the suggested corrected tool_chat_template_deepseekr1.jinja from this PR (see https://nebula.packetcoders.io/j2-render/s_2f2ff276/)

<|begin_of_text|>You are a helpful assistant.<|User|>Who are you?<|Assistant|>I am an AI assistant<|end▁of▁sentence|><|User|>What is the weather in Munich?<|Assistant|>
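
The equivalence in (1) can also be checked mechanically. Here is a sketch of such a comparison (assuming the transformers library, hub access to deepseek-ai/DeepSeek-R1-0528, and a checkout of this repository; the message list is the example conversation above):

```python
from pathlib import Path

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "deepseek-ai/DeepSeek-R1-0528", trust_remote_code=True
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I am an AI assistant"},
    {"role": "user", "content": "What is the weather in Munich?"},
]

# Render with the built-in template from tokenizer_config.json ...
reference = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# ... and with the corrected vLLM example template from this PR.
custom = Path("examples/tool_chat_template_deepseekr1.jinja").read_text()
rendered = tok.apply_chat_template(
    messages, chat_template=custom, tokenize=False, add_generation_prompt=True
)

assert rendered == reference, "templates diverge for a tool-free request"
```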

(2) for requests with tools

a) an example message rendered with the current tool_chat_template_deepseekr1.jinja (see https://nebula.packetcoders.io/j2-render/s_fe7b8fc8/)


<|begin_of_text|>
you are helpful assistant.

You are a helpful assistant with tool calling capabilities. When a tool call is needed, you MUST use the following format to issue the call:
<|tool▁calls▁begin|><|tool▁call▁begin|>function<|tool▁sep|>FUNCTION_NAME
```json
{"param1": "value1", "param2": "value2"}
```<|tool▁call▁end|><|tool▁calls▁end|>

Make sure the JSON is valid.## Tools

### Function

You have the following functions available:


```json
{"function": {"description": "Get the current weather", "name": "get_current_weather", "parameters": {"properties": {"format": {"enum": ["celsius", "fahrenheit"], "type": "string"}, "location": {"description": "The city and country, e.g. San Francisco, USA", "type": "string"}}, "required": ["location", "format"], "type": "object"}}, "type": "function"}
```
<|User|>Who are you?<|Assistant|>            I am AI assistant<|end▁of▁sentence|><|User|>What is the weather in SF and Seattle?<|Assistant|>                    <|tool▁calls▁begin|><|tool▁call▁begin|>function<|tool▁sep|>get_weather
```json
"{\"city\": \"SF\", \"metric\": \"fahrenheit\"}"
```<|tool▁call▁end|>                
<|tool▁call▁begin|>function<|tool▁sep|>get_weather
```json
"{\"city\": \"Seattle\", \"metric\": \"fahrenheit\"}"
```<|tool▁call▁end|>        <|tool▁calls▁end|><|end▁of▁sentence|>            <|tool▁outputs▁begin|><|tool▁output▁begin|>[{"response": "Sunny"},{"response": "Rainy"}]<|tool▁output▁end|>            <|tool▁outputs▁end|>SF is Sunny and Seattle is Rainy<|end▁of▁sentence|><|User|>what about NYC?<|Assistant|>

b) an example message rendered with the suggested corrected tool_chat_template_deepseekr1.jinja from this PR (see https://nebula.packetcoders.io/j2-render/s_59846767/)

<|begin_of_text|>you are helpful assistant.

You are a helpful assistant with tool calling capabilities. When a tool call is needed, you MUST use the following format to issue the call:
<|tool▁calls▁begin|><|tool▁call▁begin|>function<|tool▁sep|>FUNCTION_NAME
```json
{"param1": "value1", "param2": "value2"}
```<|tool▁call▁end|><|tool▁calls▁end|>

Make sure the JSON is valid.## Tools

### Function

You have the following functions available:


```json
{"function": {"description": "Get the current weather", "name": "get_current_weather", "parameters": {"properties": {"format": {"enum": ["celsius", "fahrenheit"], "type": "string"}, "location": {"description": "The city and country, e.g. San Francisco, USA", "type": "string"}}, "required": ["location", "format"], "type": "object"}}, "type": "function"}
```
<|User|>Who are you?<|Assistant|>I am AI assistant<|end▁of▁sentence|><|User|>What is the weather in SF and Seattle?<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>function<|tool▁sep|>get_weather
```json
"{\"city\": \"SF\", \"metric\": \"fahrenheit\"}"
```<|tool▁call▁end|>
<|tool▁call▁begin|>function<|tool▁sep|>get_weather
```json
"{\"city\": \"Seattle\", \"metric\": \"fahrenheit\"}"
```<|tool▁call▁end|><|tool▁calls▁end|><|end▁of▁sentence|><|tool▁outputs▁begin|><|tool▁output▁begin|>[{"response": "Sunny"},{"response": "Rainy"}]<|tool▁output▁end|><|tool▁outputs▁end|>SF is Sunny and Seattle is Rainy<|end▁of▁sentence|><|User|>what about NYC?<|Assistant|>
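
To see what the cleanup saves in tokens, one can encode matching fragments of the old and new renders and compare lengths. A sketch (assuming the transformers library and the DeepSeek-R1-0528 tokenizer; the fragments are taken from the examples above):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "deepseek-ai/DeepSeek-R1-0528", trust_remote_code=True
)

# One turn from render (a) above (excess spaces after <|Assistant|>) ...
old_fragment = "<|User|>Who are you?<|Assistant|>            I am AI assistant"
# ... and the same turn from the corrected render (b).
new_fragment = "<|User|>Who are you?<|Assistant|>I am AI assistant"

old_ids = tok.encode(old_fragment, add_special_tokens=False)
new_ids = tok.encode(new_fragment, add_special_tokens=False)
print(f"{len(old_ids) - len(new_ids)} extra whitespace token(s) in the old render")
```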

Benchmark results

Using the benchmark from the Berkeley Function Calling Leaderboard (BFCL), we measured the tool-calling capabilities of the old and the corrected tool_chat_template_deepseekr1.jinja, in 6 runs each at a temperature of 0.6.

| chat template | BFCL "simple" | BFCL "multiple" | BFCL "parallel" | BFCL "parallel_multiple" |
| --- | --- | --- | --- | --- |
| old template with excess whitespace | 0.884(5) | 0.78(1) | 0.28(3) | 0.36(3) |
| new template with stripped whitespace | 0.93(1) | 0.95(1) | 0.37(2) | 0.39(2) |

This means the PR yields small but significant improvements in tool calling, as measured by these benchmarks at T=0.6.
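
A note on notation: the digit in parentheses gives the uncertainty in the last shown decimal place. The PR does not publish the per-run scores or state which uncertainty statistic was used; assuming the standard error of the mean over the 6 runs, a hypothetical helper for producing this notation could look like:

```python
import math
import statistics

def mean_with_uncertainty(scores: list[float]) -> str:
    """Format run scores as mean(uncertainty in the last shown digit)."""
    mean = statistics.mean(scores)
    # Standard error of the mean; assumes the scores are not all identical.
    sem = statistics.stdev(scores) / math.sqrt(len(scores))
    # Show as many decimals as the uncertainty's first significant digit needs.
    digits = max(1, -math.floor(math.log10(sem)))
    return f"{mean:.{digits}f}({round(sem * 10 ** digits)})"

# Six made-up run scores, NOT the PR's raw data (which is not published).
print(mean_with_uncertainty([0.93, 0.95, 0.94, 0.96, 0.95, 0.97]))  # -> 0.950(6)
```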

Benjamin Merkel added 2 commits July 9, 2025 23:20
For requests without tools, the rendered prompt should be identical for `examples/tool_chat_template_deepseekr1.jinja` as for the original template from tokenizer_config.json.

Signed-off-by: Benjamin Merkel <[email protected]>
For requests with tools, the rendered prompt may strip all whitespace between special tool call and tool output tokens.

Signed-off-by: Benjamin Merkel <[email protected]>
@github-actions

github-actions bot commented Jul 9, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which starts a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @sfbemerk, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue with the DeepSeek-R1 chat template (tool_chat_template_deepseekr1.jinja) where it was generating prompts with excessive whitespace. By introducing Jinja's whitespace control, the PR ensures that prompts rendered by this template are identical to the original tokenizer's output for non-tool use cases and significantly cleans up whitespace for tool-enabled requests. This consistency not only resolves unexpected behavior but also leads to a measurable improvement in tool-calling performance.

Highlights

  • Template Correction: Fixed the tool_chat_template_deepseekr1.jinja to prevent the generation of extraneous whitespace in rendered prompts for the DeepSeek-R1 model.
  • Prompt Consistency: Ensured that prompts generated for non-tool use cases are now identical to those produced by the model's original tokenizer_config.json template, resolving discrepancies in prompt rendering.
  • Performance Improvement: Achieved a small but significant improvement in tool-calling benchmark scores (from 0.884 to 0.93) by cleaning up whitespace in tool-related prompts, leading to better model behavior.
  • Jinja Whitespace Control: Implemented Jinja's whitespace stripping (- in {{- and {%-) across the template to precisely control output formatting and eliminate unwanted newlines and spaces.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify mergify bot added the documentation, deepseek, and tool-calling labels Jul 9, 2025
@sfbemerk sfbemerk changed the title from "Fix DeepSeek-R1 chat template" to "Fix DeepSeek-R1-0528 chat template" Jul 9, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request improves the deepseekr1 chat template by adding whitespace control, which ensures consistent formatting and improves model behavior. The use of {{- and {%- is applied correctly throughout the template, enhancing the overall quality and predictability of the output.

@sfbemerk
Contributor Author

I added more benchmark categories from the Berkeley Function Calling Leaderboard (measured at T=0.6). No degradation occurs; most categories actually show a significant improvement.

| chat template | BFCL "simple" | BFCL "multiple" | BFCL "parallel" | BFCL "parallel_multiple" |
| --- | --- | --- | --- | --- |
| old template with excess whitespace | 0.884(5) | 0.78(1) | 0.28(3) | 0.36(3) |
| new template with stripped whitespace | 0.93(1) | 0.95(1) | 0.37(2) | 0.39(2) |

@github-project-automation github-project-automation bot moved this from Backlog to In progress in DeepSeek V3/R1 Jul 10, 2025
@simon-mo simon-mo enabled auto-merge (squash) July 10, 2025 15:57
@github-actions github-actions bot added the ready label Jul 10, 2025

@mgoin mgoin left a comment


This seems reasonable to me, thanks for sharing the clear eval improvement

@simon-mo simon-mo merged commit 2515953 into vllm-project:main Jul 10, 2025
56 checks passed
@github-project-automation github-project-automation bot moved this from In progress to Done in DeepSeek V3/R1 Jul 10, 2025

hwaking commented Jul 18, 2025

Should DeepSeek-V3 also be updated? [examples/tool_chat_template_deepseekv3.jinja]

Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
Signed-off-by: Benjamin Merkel <[email protected]>
Co-authored-by: Benjamin Merkel <[email protected]>
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Signed-off-by: Benjamin Merkel <[email protected]>
Co-authored-by: Benjamin Merkel <[email protected]>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: Benjamin Merkel <[email protected]>
Co-authored-by: Benjamin Merkel <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
Signed-off-by: Benjamin Merkel <[email protected]>
Co-authored-by: Benjamin Merkel <[email protected]>
Signed-off-by: Paul Pak <[email protected]>
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
Signed-off-by: Benjamin Merkel <[email protected]>
Co-authored-by: Benjamin Merkel <[email protected]>
Signed-off-by: Diego-Castan <[email protected]>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 27, 2025
Signed-off-by: Benjamin Merkel <[email protected]>
Co-authored-by: Benjamin Merkel <[email protected]>
huiqiwa pushed a commit to huiqiwa/vllm-fork that referenced this pull request Oct 21, 2025
Signed-off-by: Benjamin Merkel <[email protected]>
Co-authored-by: Benjamin Merkel <[email protected]>
huiqiwa pushed a commit to huiqiwa/vllm-fork that referenced this pull request Oct 22, 2025
Signed-off-by: Benjamin Merkel <[email protected]>
Co-authored-by: Benjamin Merkel <[email protected]>

Labels

deepseek (Related to DeepSeek models), documentation (Improvements or additions to documentation), ready (ONLY add when PR is ready to merge/full CI is needed), tool-calling

Projects

Status: Done
