Fix DeepSeek-R1-0528 chat template #20717
Conversation
For requests without tools, the rendered prompt should be identical for `examples/tool_chat_template_deepseekr1.jinja` as for the original template from tokenizer_config.json. Signed-off-by: Benjamin Merkel <[email protected]>
For requests with tools, the rendered prompt may strip all whitespace between special tool call and tool output tokens. Signed-off-by: Benjamin Merkel <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Summary of Changes
Hello @sfbemerk, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses an issue with the DeepSeek-R1 chat template (tool_chat_template_deepseekr1.jinja) where it was generating prompts with excessive whitespace. By introducing Jinja's whitespace control, the PR ensures that prompts rendered by this template are identical to the original tokenizer's output for non-tool use cases and significantly cleans up whitespace for tool-enabled requests. This consistency not only resolves unexpected behavior but also leads to a measurable improvement in tool-calling performance.
Highlights
- Template Correction: Fixed `tool_chat_template_deepseekr1.jinja` to prevent the generation of extraneous whitespace in rendered prompts for the DeepSeek-R1 model.
- Prompt Consistency: Ensured that prompts generated for non-tool use cases are now identical to those produced by the model's original `tokenizer_config.json` template, resolving discrepancies in prompt rendering.
- Performance Improvement: Achieved a small but significant improvement in tool-calling benchmark scores (from 0.884 to 0.93) by cleaning up whitespace in tool-related prompts, leading to better model behavior.
- Jinja Whitespace Control: Implemented Jinja's whitespace stripping (`-` in `{{-` and `{%-`) across the template to precisely control output formatting and eliminate unwanted newlines and spaces.
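For readers unfamiliar with Jinja's whitespace control: the hyphen modifier strips whitespace adjacent to a tag at parse time. A minimal sketch using the `jinja2` Python package illustrates the difference; the template strings here are illustrative, not excerpts from the actual file:

```python
from jinja2 import Template

messages = ["a", "b"]

# Without whitespace control: the newlines and indentation around each
# tag survive into the rendered output.
plain = Template("{% for m in msgs %}\n  {{ m }}\n{% endfor %}")

# With the hyphen modifier ({%- and {{-), whitespace immediately before
# each tag is stripped, so only the values themselves are emitted.
stripped = Template("{%- for m in msgs %}\n  {{- m }}\n{%- endfor %}")

print(repr(plain.render(msgs=messages)))     # newlines and indentation preserved
print(repr(stripped.render(msgs=messages)))  # compact output: 'ab'
```

The same mechanism, applied consistently across a multi-line chat template, is what makes the rendered prompt byte-identical to a single-line reference template.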
Code Review
This pull request improves the deepseekr1 chat template by adding whitespace control, which ensures consistent formatting and improves model behavior. The use of {{- and {%- is applied correctly throughout the template, enhancing the overall quality and predictability of the output.
I added more benchmark categories from the Berkeley Function Calling Leaderboard (measured at T=0.6). No degradation occurs; most categories actually show a significant improvement.
mgoin left a comment
This seems reasonable to me, thanks for sharing the clear eval improvement
Should DeepSeek-V3 also be updated? [examples/tool_chat_template_deepseekv3.jinja]
Signed-off-by: Benjamin Merkel <[email protected]> Co-authored-by: Benjamin Merkel <[email protected]>
Signed-off-by: Benjamin Merkel <[email protected]> Co-authored-by: Benjamin Merkel <[email protected]> Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Benjamin Merkel <[email protected]> Co-authored-by: Benjamin Merkel <[email protected]> Signed-off-by: Paul Pak <[email protected]>
Signed-off-by: Benjamin Merkel <[email protected]> Co-authored-by: Benjamin Merkel <[email protected]> Signed-off-by: Diego-Castan <[email protected]>
Purpose / Overview
The chat template `examples/tool_chat_template_deepseekr1.jinja` for DeepSeek-R1-0528 produces prompts that differ from those rendered with the original chat template in the tokenizer_config.json, even for requests without tool use. Our expectation is that for requests with no tools and no tool_calls in the message history, the recommended vLLM chat template should be equivalent to the original chat template in the tokenizer_config.json.

The difference in the rendered prompts is purely in whitespace: because of the multi-line formatting of the Jinja template in `examples/tool_chat_template_deepseekr1.jinja` and the lack of consistent use of whitespace-stripping Jinja blocks like `{%-` or `{{-`, extra spaces and newline characters appear in the rendered prompt and result in additional whitespace tokens.

While these additional whitespace tokens in the prompt are harmless for many use cases, we were surprised to find that for certain requests the response behavior differed between a vLLM deployment with and without `--chat-template examples/tool_chat_template_deepseekr1.jinja`. Thus, we suggest adding whitespace control to the vLLM Jinja template so that the prompt rendered for requests without tool use is identical to the one obtained with the original template.

In addition, we also suggest cleaning up excessive whitespace for requests with tools. In our tests, this change yielded a small but significant improvement in BFCL benchmark scores.
This PR improves on #18874. Interestingly, there had already been a comment by @wukaixingxp about excessive whitespace, but it had not been addressed in detail.
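The equivalence property described above can be checked mechanically: render both templates on the same messages and compare the output strings. The miniature templates below are hypothetical stand-ins for the real tokenizer_config.json template and the multi-line vLLM template, just to show the shape of such a check:

```python
from jinja2 import Template

# Hypothetical miniature templates -- NOT the actual DeepSeek-R1 templates.
# "reference" plays the role of the single-line tokenizer_config.json template;
# "multiline" plays the role of the human-readable vLLM template.
reference = "{% for m in messages %}{{ m.role }}: {{ m.content }}{% endfor %}"
multiline = """
{%- for m in messages -%}
  {{ m.role }}: {{ m.content }}
{%- endfor -%}
"""

msgs = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
]

# With hyphen-based whitespace control, the multi-line template renders
# the exact same string as the single-line reference.
assert Template(reference).render(messages=msgs) == Template(multiline).render(messages=msgs)
```

Without the hyphens in `multiline`, the assertion fails because every newline and indent in the template source leaks into the rendered prompt.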
Tests
Rendered prompt comparison
(1) for requests without tools
a) an example message rendered with the original R1-0528 chat template from its tokenizer_config.json (see https://nebula.packetcoders.io/j2-render/s_4c918b3d/):
b) an example message rendered with the current tool_chat_template_deepseekr1.jinja (see https://nebula.packetcoders.io/j2-render/s_8354d180/)
c) an example message rendered with the suggested corrected tool_chat_template_deepseekr1.jinja from this PR (see https://nebula.packetcoders.io/j2-render/s_2f2ff276/)
(2) for requests with tools
a) an example message rendered with the current tool_chat_template_deepseekr1.jinja (see https://nebula.packetcoders.io/j2-render/s_fe7b8fc8/)
b) an example message rendered with the suggested corrected tool_chat_template_deepseekr1.jinja from this PR (see https://nebula.packetcoders.io/j2-render/s_59846767/)
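For the tool-enabled case, where the two renderings are not byte-identical, one way to confirm that they differ only in whitespace is to compare them after stripping all whitespace. A small helper sketch (the `<tool>` strings below are made-up placeholders, not the actual DeepSeek special tokens):

```python
import re

def differs_only_in_whitespace(a: str, b: str) -> bool:
    """True if two prompts are distinct but identical once all whitespace is removed."""
    return a != b and re.sub(r"\s+", "", a) == re.sub(r"\s+", "", b)

# Placeholder prompts: "<tool>" is a made-up marker, not a real DeepSeek token.
old_prompt = '<tool>\n\n  {"name": "f"}\n\n</tool>'
new_prompt = '<tool>{"name": "f"}</tool>'

print(differs_only_in_whitespace(old_prompt, new_prompt))  # True
```

Note this is a heuristic: whitespace inside JSON string values is also collapsed, so it verifies "whitespace-only" at the character level, not semantic equivalence of the tool payloads.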
Benchmark results
Using the benchmark from the Berkeley Function Calling Leaderboard, we measured the tool-calling capabilities with the old and the corrected tool_chat_template_deepseekr1.jinja, in 6 runs each, at a temperature of 0.6.

That is, this PR yields small but significant improvements in tool calling as measured in these benchmarks at T=0.6.
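With only 6 runs per configuration, a quick significance check helps back up the "small but significant" claim. The sketch below uses made-up per-run scores (the PR reports only aggregate numbers) and computes Welch's t statistic with the standard library:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-run BFCL scores (6 runs each at T=0.6) -- placeholders
# for illustration, NOT the actual per-run numbers from this PR.
old_scores = [0.878, 0.884, 0.881, 0.890, 0.886, 0.885]
new_scores = [0.928, 0.931, 0.926, 0.933, 0.930, 0.932]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(b) - mean(a)) / sqrt(va / len(a) + vb / len(b))

print(f"t = {welch_t(old_scores, new_scores):.1f}")
```

A large positive t (with ~10 degrees of freedom under the Welch approximation) would indicate the score difference is unlikely to be run-to-run noise.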