[None][refactor] Move draft token padding out of Drafter #7134
Conversation
📝 Walkthrough
Draft-token padding was moved from ModelDrafter and NGram components into PyExecutor: drafting now yields variable-length (including empty) draft token lists, and PyExecutor pads each scheduled request's py_draft_tokens to max_draft_len before execution.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant PyExecutor
    participant ModelDrafter
    participant NGramPool
    Client->>PyExecutor: schedule batch
    PyExecutor->>ModelDrafter: prepare_draft_tokens(requests)
    ModelDrafter->>NGramPool: get_draft_tokens(prefix, request_id, max_seq_len)
    NGramPool-->>ModelDrafter: draft_tokens (variable length / maybe empty)
    ModelDrafter-->>PyExecutor: requests with py_draft_tokens
    rect rgb(235,245,255)
        note right of PyExecutor: New — per-request padding to fixed max length for CUDA-graph compatibility
        PyExecutor->>PyExecutor: for each req: extend req.py_draft_tokens to max_draft_len using get_draft_token_length(req)
    end
    PyExecutor->>PyExecutor: continue execution with padded drafts
```
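The padding step highlighted in the diagram reduces to a small loop in PyExecutor. A minimal sketch mirroring the py_executor.py diff reviewed below (get_draft_token_length is the helper imported from llm_request.py; 0 is the pad value used by this PR):

```python
# Sketch of the relocated padding step, fragment mirroring the PR diff.
# Drafters may now attach fewer than max_draft_len tokens (or none at all);
# the executor restores the fixed length that CUDA graphs and the attention
# kernels assume.
def pad_draft_tokens_for_cuda_graph(scheduled_batch, max_draft_len):
    for req in scheduled_batch.generation_requests:
        num_draft_tokens = get_draft_token_length(req)  # 0 when no drafts attached
        req.py_draft_tokens.extend(
            0 for _ in range(max_draft_len - num_draft_tokens))
```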
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
/bot run
Actionable comments posted: 0
🧹 Nitpick comments (6)
tensorrt_llm/_torch/speculative/model_drafter.py (1)

1-1: Missing NVIDIA copyright header
Per repo guidelines, prepend the current-year NVIDIA copyright header to all source files.
tensorrt_llm/_torch/speculative/ngram.py (3)

1-1: Missing NVIDIA copyright header
Add the standard NVIDIA copyright header at the top of the file.
86-96: Returning an empty list when no room is available is consistent with the new padding flow
Switching the fallback from [padding_id] to [] is correct now that PyExecutor pads. Please document this behavior in the method to avoid surprises for future maintainers. Apply this diff to add a short docstring clarifying the new semantics:
```diff
 def get_draft_tokens(
     self,
     prefix: list[int],
     request_id: int,
     max_sequence_length: int,
 ):
+    """
+    Return candidate draft tokens for the given prefix.
+
+    - May return fewer than `self.max_draft_len` tokens.
+    - Returns [] when no draft tokens fit this step (padding is handled upstream).
+    """
     prefix_len = len(prefix)
     max_draft_token_length_this_step = max_sequence_length - 1 - prefix_len
     if max_draft_token_length_this_step <= 0:
         # No draft token is need if the prefix is long enough
         return []
```
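As a quick illustration of the early-return boundary (hypothetical numbers, not taken from the PR):

```python
# Hypothetical numbers illustrating the early return above:
max_sequence_length = 16
prefix = list(range(15))  # a 15-token prefix

budget = max_sequence_length - 1 - len(prefix)  # 16 - 1 - 15 == 0
assert budget <= 0  # get_draft_tokens returns [] and PyExecutor pads
```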
128-136: Minor robustness: ensure length never exceeds max_draft_len (defensive clamp)
The pool construction limits matches to self.max_draft_len, but adding an explicit clamp keeps the guarantee local and future-proof. Apply this diff:
```diff
-        draft_tokens = pool[pattern][0 if self.is_use_oldest else -1]
-        draft_tokens = list(draft_tokens)[:max_draft_token_length_this_step]
+        draft_tokens = pool[pattern][0 if self.is_use_oldest else -1]
+        # Defensive clamp to pool limit and per-step budget
+        draft_tokens = list(draft_tokens)[:min(self.max_draft_len, max_draft_token_length_this_step)]
```

tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
1-1: Missing NVIDIA copyright header
Please add the standard NVIDIA copyright header.
1004-1011: Centralized padding: solid relocation; add pad_id and clamp for defensive correctness
Unconditional padding here is aligned with current kernel assumptions. Two small improvements:
- Use a configurable pad_id when available (fallback to 0).
- Clamp oversize lengths (defensive) and assert the final length in debug builds.
This keeps invariants explicit and avoids relying on distant code to enforce <= max_draft_len. Apply this diff:
```diff
-            # Pad draft tokens to the max draft length. This is for CUDA
-            # graph compatibility.
-            for req in scheduled_batch.generation_requests:
-                max_draft_tokens = self.max_draft_len
-                num_draft_tokens = get_draft_token_length(req)
-                req.py_draft_tokens.extend(
-                    0 for _ in range(max_draft_tokens -
-                                     num_draft_tokens))
+            # Pad draft tokens to the max draft length for CUDA-graph
+            # and attention-kernel assumptions.
+            for req in scheduled_batch.generation_requests:
+                max_draft_tokens = self.max_draft_len
+                num_draft_tokens = get_draft_token_length(req)
+
+                # Clamp oversize defensively (drafters should respect the limit).
+                if num_draft_tokens > max_draft_tokens:
+                    req.py_draft_tokens = req.py_draft_tokens[:max_draft_tokens]
+                    num_draft_tokens = max_draft_tokens
+
+                pad_id = getattr(self.model_engine.spec_config, "pad_id", 0)
+                needed = max_draft_tokens - num_draft_tokens
+                if needed > 0:
+                    # list-multiply is faster than extending a generator
+                    req.py_draft_tokens.extend([pad_id] * needed)
+
+                # Optional debug assertion to catch shape mismatches early
+                if __debug__:
+                    assert len(req.py_draft_tokens) == max_draft_tokens, (
+                        f"draft tokens length {len(req.py_draft_tokens)} != "
+                        f"max_draft_len {max_draft_tokens} for req {req.py_request_id}"
+                    )
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (2 hunks)
- tensorrt_llm/_torch/speculative/model_drafter.py (1 hunks)
- tensorrt_llm/_torch/speculative/ngram.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else
Files:
tensorrt_llm/_torch/speculative/ngram.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
tensorrt_llm/_torch/speculative/model_drafter.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend NVIDIA copyright header (current year) to all source files
Files:
tensorrt_llm/_torch/speculative/ngram.py
tensorrt_llm/_torch/pyexecutor/py_executor.py
tensorrt_llm/_torch/speculative/model_drafter.py
🔇 Additional comments (6)

tensorrt_llm/_torch/speculative/model_drafter.py (1)

12-12: Import cleanup aligns with padding relocation
Dropping get_draft_token_length here is correct now that padding occurs in PyExecutor. No further action needed.

tensorrt_llm/_torch/speculative/ngram.py (2)
176-199: Call-site reflects new signature and no longer pads — good
NGramDrafter.prepare_draft_tokens correctly adapts to the new get_draft_tokens signature and leaves padding to the executor.
86-96: No unsafe indexing on empty draft lists
I've verified that returning an empty list from get_draft_tokens cannot trigger an out-of-bounds access:
- In tensorrt_llm/_torch/pyexecutor/sampler.py (lines 502–510), the code only does new_token = request.py_draft_tokens[i] inside for i in range(num_accepted):, and when get_draft_token_length(request) is 0 (i.e. no draft tokens), the loop never executes.
- In the example examples/ngram/run_dtm_ngram.py (lines 292–294), each d_ids[bi] = … assignment is guarded by if l >= r: continue, so empty spans (l >= r) are skipped before any indexing.
- The greedy decoding path sets req.py_draft_tokens = None when max_draft_len <= 0 (in pyexecutor/py_executor.py), but that also results in req.py_draft_logits is None, causing the code to take the greedy branch instead of the rejection-sampling branch where indexing occurs.

No downstream consumer assumes py_draft_tokens is non-empty, so no further changes are needed.
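A tiny self-contained illustration of the first point (not repo code; the names only mirror the sampler's loop):

```python
# Illustrative only: with an empty draft list, the acceptance loop never
# indexes into it, so no IndexError is possible.
py_draft_tokens = []                 # drafter returned no tokens
num_accepted = len(py_draft_tokens)  # analogous to get_draft_token_length -> 0
for i in range(num_accepted):        # range(0): body never runs
    new_token = py_draft_tokens[i]   # never reached
```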
tensorrt_llm/_torch/pyexecutor/py_executor.py (3)

44-44: Importing get_draft_token_length here is the right place
This keeps length normalization centralized in the executor.
959-967: Drafting supported only in _executor_loop: constraint remains intact
Guardrails against overlap/PP modes when using drafter are preserved; padding runs in the only path where drafting is allowed. Good.
Also applies to: 995-1003
1004-1011: I'm running targeted inspections now; I'll follow up once we've confirmed the function behavior and py_draft_tokens initialization invariants.
PR_Github #16073 [ run ] triggered by Bot
Force-pushed from b961a33 to 82d14cd
/bot run
PR_Github #16073 [ run ] completed with state
PR_Github #16075 [ run ] triggered by Bot
PR_Github #16075 [ run ] completed with state
Signed-off-by: Mike Iovine <[email protected]>
Force-pushed from 82d14cd to 2cecc9f
/bot run
PR_Github #16195 [ run ] triggered by Bot
PR_Github #16195 [ run ] completed with state
/bot run
PR_Github #16451 [ run ] triggered by Bot
PR_Github #16451 [ run ] completed with state
/bot run
PR_Github #16468 [ run ] triggered by Bot
PR_Github #16468 [ run ] completed with state
/bot run
PR_Github #16567 [ run ] triggered by Bot
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures
Files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)
Files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
- tensorrt_llm/_torch/pyexecutor/llm_request.py (1): get_draft_token_length (558-569)
- tensorrt_llm/runtime/generation.py (1): max_draft_tokens (1283-1286)
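For context, a hedged sketch of what the referenced helper plausibly does; the real implementation lives at the cited lines of llm_request.py, and this reconstruction only assumes the None-handling behavior described in the review comments above:

```python
# Hypothetical reconstruction, not the repo's code: the review notes that
# req.py_draft_tokens may be None (greedy path) or an empty/partial list,
# and that get_draft_token_length(req) reports 0 in those cases.
def get_draft_token_length(request) -> int:
    """Number of draft tokens currently attached to a request (0 if none)."""
    if request.py_draft_tokens is None:
        return 0
    return len(request.py_draft_tokens)
```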
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
45-45: Import of get_draft_token_length looks good and consistent with the new ownership.
Importing the helper here aligns with moving padding responsibility into PyExecutor.
PR_Github #16567 [ run ] completed with state
Description
We currently require Drafter implementations to pad requests to the max draft length for CUDA graph compatibility. This is fine, but it makes implementing user-provided drafters annoying. The fact that padding needs to occur is an implementation detail; users should not have to worry about it. Fix this by moving the padding logic to PyExecutor.

In theory, padding only needs to occur when CUDA graphs are on. However, we have to keep padding on for all cases right now: there's a bunch of code that assumes each request has the same number of draft tokens (most notably the attention kernels). Since the vast majority of users will want CUDA graphs anyway, I don't think this is a big deal.
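To illustrate the ergonomic win, here is a minimal sketch of what a user-provided drafter can now look like. The class, its helper method, and the request fields are hypothetical, modeled on the names used in this PR; the Drafter interface is assumed, not quoted from the repo:

```python
# Hypothetical user-provided drafter. Before this PR it would have had to
# pad py_draft_tokens to max_draft_len itself; now it may attach any number
# of tokens (including none) and PyExecutor normalizes the length.
class MyDrafter:
    def __init__(self, max_draft_len: int):
        self.max_draft_len = max_draft_len

    def prepare_draft_tokens(self, requests) -> None:
        for req in requests:
            guesses = self._guess_tokens(req)  # may be short or empty
            req.py_draft_tokens = guesses[:self.max_draft_len]
            # No padding here: the executor pads to max_draft_len for
            # CUDA-graph / attention-kernel shape uniformity.

    def _guess_tokens(self, req):
        return []  # placeholder heuristic
```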
Test Coverage
Existing tests.
GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
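For example, an illustrative combination of the documented flags (not a command taken from this PR's thread):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast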
kill
Kill all running builds associated with the pull request.
skip
skip --comment COMMENT
Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.