
Conversation

@tongyuantongyu (Member) commented Aug 26, 2025

Summary by CodeRabbit

  • New Features

    • KV cache configuration now supports a token-based limit (max_tokens) replacing the previous GPU memory-fraction setting.
  • Tests

    • Added early worker exit on CUDA errors to speed feedback in parallel test runs.
    • Improved Linux test cleanup with explicit reporting of detected inductor worker processes.
    • Replaced some test prints with warnings for missing parallel configuration entries.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast behavior on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
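
As an illustration only (the GPU and backend values are taken from the examples above, not from this PR's pipelines), a pre-merge run restricted to specific GPU types and the PyTorch backend could be requested with:

/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe" --test-backend "pytorch"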

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping without careful validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without careful validation can break the top of tree.

@coderabbitai bot (Contributor) commented Aug 26, 2025

📝 Walkthrough

Replace/extend KvCacheConfig usage across unit tests to include a new max_tokens parameter, add an autouse pytest fixture that early-exits xdist workers on CUDA errors, improve Linux cleanup reporting to summarize PyTorch inductor worker PIDs, and change a test print to use warnings.warn.

Changes

  • KvCacheConfig token-based updates (tests/unittest/_torch/speculative/test_draft_target.py, tests/unittest/_torch/speculative/test_dynamic_spec_decode.py, tests/unittest/_torch/speculative/test_eagle3.py, tests/unittest/_torch/speculative/test_kv_cache_reuse.py, tests/unittest/_torch/speculative/test_ngram.py, tests/unittest/_torch/speculative/test_user_provided.py): Update KvCacheConfig construction to include max_tokens=8192 (replacing free_gpu_memory_fraction, or adding the argument where it was previously missing), reflecting a constructor signature change to accept max_tokens.
  • PyTest xdist CUDA early-quit fixture (tests/unittest/conftest.py): Add an autouse pytest fixture, cuda_error_early_quit, that runs in xdist workers: it calls torch.cuda.synchronize() after each test, prints the traceback when a RuntimeError containing "CUDA error:" is raised, and calls os._exit(1) to force a worker restart.
  • Unittest warnings change (tests/integration/defs/test_unittests.py): Import warnings and replace an else-branch print with warnings.warn to flag a missing parallel config entry and the fallback to serial execution.
  • TRT test cleanup reporting (tests/integration/defs/trt_test_alternative.py): Linux-only: detect PyTorch inductor worker processes by cmdline during cleanup, collect their PIDs into torch_inductors, and append a summary line listing them; no change to kill behavior.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant PyTest_Master as PyTest Master
  participant Worker as xdist Worker
  participant Test as Test Function
  participant CUDA as torch.cuda

  PyTest_Master->>Worker: dispatch test
  rect rgba(230,240,255,0.6)
    note right of Worker: autouse fixture: cuda_error_early_quit
    Worker->>Test: execute
    Test-->>Worker: return / raise
    Worker->>CUDA: torch.cuda.synchronize()
    alt RuntimeError with "CUDA error:"
      Worker->>Worker: write traceback to stderr
      Worker->>PyTest_Master: os._exit(1) (terminate)
      note right of PyTest_Master: master restarts worker
    else no CUDA error
      Worker-->>PyTest_Master: report result
    end
  end
```
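
For readers who want a concrete picture of the flow above, a minimal sketch of such a fixture follows; apart from cuda_error_early_quit and PYTEST_XDIST_WORKER, the names and exact structure are assumptions, not the PR's actual code:

```python
import os
import sys
import traceback

import pytest
import torch


@pytest.fixture(autouse=True)
def cuda_error_early_quit():
    """Stop this xdist worker immediately if a test leaves the CUDA context in an error state."""
    yield  # run the test first
    if not (torch.cuda.is_available() and os.environ.get("PYTEST_XDIST_WORKER")):
        return  # only relevant in CUDA-capable xdist workers
    try:
        torch.cuda.synchronize()  # surface asynchronous CUDA failures raised by the test
    except RuntimeError as e:
        if "CUDA error:" in str(e):
            traceback.print_exception(type(e), e, e.__traceback__, file=sys.stderr)
            os._exit(1)  # the xdist master will restart the worker
        raise
```

os._exit(1) deliberately bypasses normal teardown: a poisoned CUDA context cannot be recovered in-process, so the fastest safe path is to let the master spawn a fresh worker.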

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Suggested reviewers

  • QiJune
  • chzblych
  • yilin-void
  • syuoni


@tongyuantongyu requested a review from QiJune on August 26, 2025 09:12
@tongyuantongyu (Member, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #16544 [ run ] triggered by Bot

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tests/integration/defs/trt_test_alternative.py (1)

1-1: Add required NVIDIA SPDX header

Per repository guidelines, prepend the NVIDIA copyright header to all Python sources.

Apply this diff:

+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
tests/unittest/_torch/speculative/test_eagle3.py (1)

1-1: Add required NVIDIA SPDX header

This file is missing the NVIDIA header required by coding guidelines.

Apply this diff:

+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
tests/unittest/_torch/speculative/test_ngram.py (1)

1-1: Add required NVIDIA SPDX header

Please prepend the standard NVIDIA header to comply with project policy.

Apply this diff:

+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
🧹 Nitpick comments (12)
tests/integration/defs/test_unittests.py (1)

104-106: Fix long line (E501) and make warning more explicit

Wrap the message to satisfy the 120-col limit and set an explicit category/stacklevel so the warning is easier to triage.

Apply this diff:

-        warnings.warn(
-            f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". Fallback to serial test. Please add config entry to agg_unit_mem_df.csv.'
-        )
+        warnings.warn(
+            (
+                f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". '
+                "Fallback to serial test. "
+                "Please add config entry to agg_unit_mem_df.csv."
+            ),
+            category=UserWarning,
+            stacklevel=1,
+        )
tests/integration/defs/trt_test_alternative.py (1)

104-111: Make inductor worker detection more robust

The pickler argument may appear as a separate “--pickler” arg or combined “--pickler=…”. Matching on an exact position and string is brittle. Recommend detecting with any() across the cmdline.

Apply this diff:

-                        # Detect repetitive torch inductor worker processes
-                        if len(cmdline) > 3 and \
-                            'python' in cmdline[0] and \
-                            'torch/_inductor/compile_worker/__main__.py' in cmdline[1] and \
-                            '--pickler=torch._inductor.compile_worker.subproc_pool.SubprocPickler' == cmdline[2]:
-                            torch_inductors.append(pid)
-                            continue
+                        # Detect repetitive torch inductor worker processes
+                        if (
+                            any('torch/_inductor/compile_worker/__main__.py' in a for a in cmdline)
+                            and any(
+                                (a == '--pickler' or a.startswith('--pickler=')) and
+                                'torch._inductor.compile_worker.subproc_pool.SubprocPickler' in a
+                                for a in cmdline
+                            )
+                        ):
+                            torch_inductors.append(pid)
+                            continue
tests/unittest/_torch/speculative/test_eagle3.py (1)

192-206: Optional: align DeepSeek test with token-based KV sizing for consistency

The second test still uses free_gpu_memory_fraction=0.5. Since KvCacheConfig supports both knobs, it’s fine, but consider migrating to max_tokens as done above for consistency and easier cross-machine repros.

If you want, I can propose a max_tokens value derived from the model’s head count, dtype, and max_seq_len to approximate the current memory target.
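
As a back-of-the-envelope sketch of that derivation (a hypothetical helper, not code in this PR; the fp16 default and example numbers are illustrative, and block granularity is ignored):

```python
def approx_max_tokens(target_bytes: int, num_layers: int, num_kv_heads: int,
                      head_dim: int, dtype_bytes: int = 2) -> int:
    """Token budget that fits roughly the memory a fraction-based setting used to target."""
    # K and V each store num_layers * num_kv_heads * head_dim values per token.
    bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes
    return target_bytes // bytes_per_token

# Example: half of an 80 GiB GPU for a 32-layer model with 8 KV heads of dim 128 in fp16.
print(approx_max_tokens(40 * 1024**3, num_layers=32, num_kv_heads=8, head_dim=128))
```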

tests/unittest/_torch/speculative/test_dynamic_spec_decode.py (3)

64-72: Remove duplicate sampling_params assignment to avoid confusion.

You set sampling_params twice; the first (max_tokens=128) is immediately overwritten (max_tokens=10). Drop the earlier assignment.

-        sampling_params = SamplingParams(max_tokens=128, temperature=0)
         ...
         sampling_params = SamplingParams(max_tokens=10, temperature=0)

29-44: Optionally derive max_tokens from test constants to keep them in sync.

To avoid future drift, consider tying KvCacheConfig(max_tokens=...) to the same constant used for max_seq_len.

-    kv_cache_config = KvCacheConfig(enable_block_reuse=True, max_tokens=8192)
+    TEST_MAX_SEQ_LEN = 8192
+    kv_cache_config = KvCacheConfig(enable_block_reuse=True, max_tokens=TEST_MAX_SEQ_LEN)
     ...
-        max_seq_len=8192,
+        max_seq_len=TEST_MAX_SEQ_LEN,

1-3: Add NVIDIA copyright header (2025).

Tests also require the header per repo guidelines.

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 import os
 import sys
 import unittest
tests/unittest/_torch/speculative/test_kv_cache_reuse.py (2)

39-51: Optionally compute max_tokens from max_seq_len to avoid mismatches.

Minor readability/maintainability win: reuse a shared constant for both max_tokens and max_seq_len.

-    kv_cache_config = KvCacheConfig(enable_block_reuse=True, max_tokens=8192)
+    TEST_MAX_SEQ_LEN = 8192
+    kv_cache_config = KvCacheConfig(enable_block_reuse=True, max_tokens=TEST_MAX_SEQ_LEN)
     ...
-        max_seq_len=8192,
+        max_seq_len=TEST_MAX_SEQ_LEN,

1-3: Add NVIDIA copyright header (2025).

Required by coding guidelines.

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 import os
 import sys
 import unittest
tests/unittest/_torch/speculative/test_draft_target.py (2)

35-44: Optional: unify token-related constants.

Consider introducing a single TEST_MAX_TOKENS for kv cache and generation limits when appropriate, to make the relationship explicit.

-    kv_cache_config = KvCacheConfig(enable_block_reuse=False, max_tokens=8192)
+    TEST_MAX_TOKENS = 8192
+    kv_cache_config = KvCacheConfig(enable_block_reuse=False, max_tokens=TEST_MAX_TOKENS)
     ...
-        max_num_tokens=2048,
+        max_num_tokens=2048,  # subset of TEST_MAX_TOKENS

1-3: Add NVIDIA copyright header (2025).

Applies to tests as well.

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 import os
 import sys
 import unittest
tests/unittest/_torch/speculative/test_user_provided.py (2)

35-44: Nit: avoid the backslash for line continuation in dict literal.

PEP 8 prefers implicit continuation inside parentheses; the backslash is unnecessary here.

-    llm_common_config = dict( \
+    llm_common_config = dict(
         model=llm_models_root() / "llama-3.1-model" /"Meta-Llama-3.1-8B",
         backend='pytorch',
         attn_backend=attn_backend,
         disable_overlap_scheduler=disable_overlap_scheduler,
         cuda_graph_config=cuda_graph_config,
         max_batch_size=max_batch_size,
         kv_cache_config=kv_cache_config,
         max_num_tokens=2048,

1-3: Add NVIDIA copyright header (2025).

Per coding guidelines.

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 import os
 import sys
 import unittest
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between a142c0c and 9de2020.

⛔ Files ignored due to path filters (1)
  • tests/integration/defs/agg_unit_mem_df.csv is excluded by !**/*.csv
📒 Files selected for processing (9)
  • tests/integration/defs/test_unittests.py (2 hunks)
  • tests/integration/defs/trt_test_alternative.py (1 hunks)
  • tests/unittest/_torch/speculative/test_draft_target.py (1 hunks)
  • tests/unittest/_torch/speculative/test_dynamic_spec_decode.py (1 hunks)
  • tests/unittest/_torch/speculative/test_eagle3.py (1 hunks)
  • tests/unittest/_torch/speculative/test_kv_cache_reuse.py (1 hunks)
  • tests/unittest/_torch/speculative/test_ngram.py (1 hunks)
  • tests/unittest/_torch/speculative/test_user_provided.py (1 hunks)
  • tests/unittest/conftest.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures

Files:

  • tests/integration/defs/test_unittests.py
  • tests/unittest/conftest.py
  • tests/unittest/_torch/speculative/test_kv_cache_reuse.py
  • tests/unittest/_torch/speculative/test_user_provided.py
  • tests/unittest/_torch/speculative/test_dynamic_spec_decode.py
  • tests/unittest/_torch/speculative/test_eagle3.py
  • tests/unittest/_torch/speculative/test_draft_target.py
  • tests/unittest/_torch/speculative/test_ngram.py
  • tests/integration/defs/trt_test_alternative.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tests/integration/defs/test_unittests.py
  • tests/unittest/conftest.py
  • tests/unittest/_torch/speculative/test_kv_cache_reuse.py
  • tests/unittest/_torch/speculative/test_user_provided.py
  • tests/unittest/_torch/speculative/test_dynamic_spec_decode.py
  • tests/unittest/_torch/speculative/test_eagle3.py
  • tests/unittest/_torch/speculative/test_draft_target.py
  • tests/unittest/_torch/speculative/test_ngram.py
  • tests/integration/defs/trt_test_alternative.py
🧠 Learnings (1)
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
PR: NVIDIA/TensorRT-LLM#6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • tests/unittest/_torch/speculative/test_kv_cache_reuse.py
  • tests/unittest/_torch/speculative/test_user_provided.py
  • tests/unittest/_torch/speculative/test_dynamic_spec_decode.py
  • tests/unittest/_torch/speculative/test_draft_target.py
  • tests/unittest/_torch/speculative/test_ngram.py
🧬 Code graph analysis (3)
tests/unittest/_torch/speculative/test_user_provided.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
  • KvCacheConfig (946-1077)
tests/unittest/_torch/speculative/test_draft_target.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
  • KvCacheConfig (946-1077)
tests/unittest/_torch/speculative/test_ngram.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
  • KvCacheConfig (946-1077)
🪛 Ruff (0.12.2)
tests/integration/defs/test_unittests.py

105-105: Line too long (158 > 120)

(E501)

tests/unittest/conftest.py

106-108: 1 blank line required between summary line and description

(D205)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tests/integration/defs/test_unittests.py (1)

16-16: Good switch to warnings module for non-fatal notices

Importing warnings aligns with the new warn-based flow instead of print. No issues.

tests/integration/defs/trt_test_alternative.py (1)

117-121: Nice: summarize detected inductor workers separately

Clearer diagnostics without changing cleanup semantics. LGTM.

tests/unittest/_torch/speculative/test_eagle3.py (1)

49-49: KvCacheConfig: switch to explicit max_tokens looks good

Moving from implicit memory-fraction control to a token-based limit improves reproducibility across hosts. No functional issues seen.

tests/unittest/_torch/speculative/test_ngram.py (1)

30-30: KvCacheConfig token-based sizing LGTM

Explicit max_tokens=8192 is consistent with the rest of this PR and with the Llama3.1 test parameters. No concerns.

tests/unittest/_torch/speculative/test_dynamic_spec_decode.py (1)

29-29: KvCacheConfig(max_tokens) switch looks correct and aligns with llmapi.

Using max_tokens=8192 is consistent with max_seq_len=8192 specified later in llm_common_config. No functional concerns here.

tests/unittest/_torch/speculative/test_kv_cache_reuse.py (1)

35-35: KvCacheConfig(max_tokens) update is appropriate for this test.

Setting max_tokens=8192 aligns with the configured max_seq_len=8192 below, ensuring reuse capacity is sufficient for a single-request run.

tests/unittest/_torch/speculative/test_draft_target.py (1)

31-31: KvCacheConfig now takes max_tokens: change looks good.

enable_block_reuse=False is intentional here and the 8192 token budget comfortably covers max_num_tokens=2048 used later.

tests/unittest/_torch/speculative/test_user_provided.py (1)

31-31: KvCacheConfig(max_tokens) update is consistent across the suite.

Looks good with enable_block_reuse=False for this user-provided drafter scenario.

@tongyuantongyu force-pushed the ytong/unittest_parallel_update branch from 9de2020 to 124ed81 on August 26, 2025 09:32
@QiJune requested reviews from chzblych and joyang-nv on August 26, 2025 09:38
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/unittest/_torch/speculative/test_kv_cache_reuse.py (1)

1-3: Missing SPDX/NVIDIA header

This file is missing the required NVIDIA SPDX header with the current year. Please add it to comply with repository policy.

Proposed header to prepend:

# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
♻️ Duplicate comments (1)
tests/unittest/conftest.py (1)

121-125: Fix traceback.print_exception for Python 3.8 compatibility

traceback.print_exception(e, ...) isn’t valid on Python 3.8; it requires (etype, value, tb). This will raise a TypeError and mask the original CUDA error in 3.8-based runners.

Apply this diff:

-                    traceback.print_exception(e, file=sys.stderr)
+                    traceback.print_exception(type(e), e, e.__traceback__, file=sys.stderr)

Also consider applying the same change in pytest_pyfunc_call for consistency (see note below).

🧹 Nitpick comments (5)
tests/integration/defs/test_unittests.py (2)

104-106: Break overly long warning line (E501) and add category/stacklevel

The warning message currently exceeds the 120-char limit and lacks an explicit category/stacklevel. Adding these improves lint compliance and call-site attribution in pytest output.

Apply this diff:

-        warnings.warn(
-            f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". Fallback to serial test. Please add config entry to agg_unit_mem_df.csv.'
-        )
+        warnings.warn(
+            (
+                f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". '
+                'Fallback to serial test. Please add config entry to agg_unit_mem_df.csv.'
+            ),
+            category=UserWarning,
+            stacklevel=1,
+        )

1-2: Update SPDX year range to include 2025

Per the repo guideline “Prepend the NVIDIA copyright header (current year)”, update the header to 2022–2025.

tests/unittest/conftest.py (3)

106-112: Tighten and polish fixture docstring (grammar/clarity)

Minor grammar fixes and clearer phrasing.

Apply this diff:

-    """
-    Fixture to handle CUDA error.
-
-    CUDA error are usually persistent that requires restart process to recover.
-    Immediately stop the current worker when CUDA error occurred.
-    It will then be restarted by the master process.
-    """
+    """
+    Handle persistent CUDA errors in xdist workers.
+
+    CUDA errors are usually persistent and require a process restart to recover.
+    Immediately stop the current worker when a CUDA error occurs; the master
+    process will then restart the worker.
+    """

60-70: Mirror the traceback fix in pytest_pyfunc_call

For consistency and Python 3.8 support, update traceback.print_exception(e) here as well.

Example change:

# Replace:
traceback.print_exception(e)

# With:
traceback.print_exception(type(e), e, e.__traceback__)

1-2: Update SPDX year range to include 2025

Please update 2022–2024 to 2022–2025 to match the current year.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9de2020 and 124ed81.

⛔ Files ignored due to path filters (1)
  • tests/integration/defs/agg_unit_mem_df.csv is excluded by !**/*.csv
📒 Files selected for processing (9)
  • tests/integration/defs/test_unittests.py (2 hunks)
  • tests/integration/defs/trt_test_alternative.py (1 hunks)
  • tests/unittest/_torch/speculative/test_draft_target.py (1 hunks)
  • tests/unittest/_torch/speculative/test_dynamic_spec_decode.py (1 hunks)
  • tests/unittest/_torch/speculative/test_eagle3.py (2 hunks)
  • tests/unittest/_torch/speculative/test_kv_cache_reuse.py (1 hunks)
  • tests/unittest/_torch/speculative/test_ngram.py (1 hunks)
  • tests/unittest/_torch/speculative/test_user_provided.py (1 hunks)
  • tests/unittest/conftest.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • tests/unittest/_torch/speculative/test_user_provided.py
  • tests/unittest/_torch/speculative/test_draft_target.py
  • tests/unittest/_torch/speculative/test_ngram.py
  • tests/unittest/_torch/speculative/test_eagle3.py
  • tests/integration/defs/trt_test_alternative.py
  • tests/unittest/_torch/speculative/test_dynamic_spec_decode.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures

Files:

  • tests/unittest/_torch/speculative/test_kv_cache_reuse.py
  • tests/unittest/conftest.py
  • tests/integration/defs/test_unittests.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)

Files:

  • tests/unittest/_torch/speculative/test_kv_cache_reuse.py
  • tests/unittest/conftest.py
  • tests/integration/defs/test_unittests.py
🧠 Learnings (1)
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
PR: NVIDIA/TensorRT-LLM#6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • tests/unittest/_torch/speculative/test_kv_cache_reuse.py
🧬 Code graph analysis (1)
tests/unittest/conftest.py (2)
cpp/tensorrt_llm/runtime/torchUtils.h (1)
  • cuda (129-132)
cpp/include/tensorrt_llm/runtime/cudaStream.h (1)
  • synchronize (81-96)
🪛 Ruff (0.12.2)
tests/integration/defs/test_unittests.py

105-105: Line too long (158 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tests/integration/defs/test_unittests.py (1)

16-16: Import of warnings for surfaced fallback messaging — LGTM

Using warnings instead of print makes the fallback visible in CI while remaining non-fatal. No concerns.

tests/unittest/conftest.py (1)

104-129: Good guardrail: only active under CUDA + xdist

The conditional guard torch.cuda.is_available() and PYTEST_XDIST_WORKER is appropriate, and synchronizing post-test helps surface CUDA errors promptly. Nice.

@QiJune requested a review from mikeiovine on August 26, 2025 10:07
@tensorrt-cicd (Collaborator) commented:

PR_Github #16544 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12423 completed with status: 'SUCCESS'

@QiJune (Collaborator) commented Aug 27, 2025

Before this PR:

  • 1497.52s call B200_PCIe-PyTorch-1/test_unittests.py::test_unittests_v2[unittest/_torch/speculative]
  • 2729.29s call B200_PCIe-PyTorch-2/test_unittests.py::test_unittests_v2[unittest/_torch/thop]

After parallelizing unit tests:

  • 721.99s call B200_PCIe-PyTorch-1/test_unittests.py::test_unittests_v2[unittest/_torch/speculative]
  • 434.44s call B200_PCIe-PyTorch-2/test_unittests.py::test_unittests_v2[unittest/_torch/thop]

@QiJune (Collaborator) commented Aug 27, 2025

/bot run --gpu-type "B200_PCIe"

@QiJune (Collaborator) left a comment

LGTM

@tensorrt-cicd (Collaborator) commented:

PR_Github #16609 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #16609 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #12470 (Partly Tested) completed with status: 'SUCCESS'

@QiJune (Collaborator) commented Aug 27, 2025

After this PR:

  • B200_PCIe-PyTorch-1: 1h 14m 10s
  • B200_PCIe-PyTorch-2: 1h 20m 24s

Slowest tests:

  • 910.61s call B200_PCIe-PyTorch-1/test_unittests.py::test_unittests_v2[unittest/_torch/speculative]
  • 551.59s call B200_PCIe-PyTorch-2/accuracy/test_llm_api_autodeploy.py::TestLlama3_1_8B::test_auto_dtype
  • 550.72s call B200_PCIe-PyTorch-2/test_unittests.py::test_unittests_v2[unittest/_torch/thop]
  • 544.27s call B200_PCIe-PyTorch-2/test_unittests.py::test_unittests_v2[unittest/_torch/misc]
  • 534.19s call B200_PCIe-PyTorch-2/test_unittests.py::test_unittests_v2[unittest/_torch/auto_deploy/unit/singlegpu -k "not test_trtllm_bench_backend_comparison"]
  • 415.74s call B200_PCIe-PyTorch-1/test_unittests.py::test_unittests_v2[unittest/_torch/modeling -k "modeling_deepseek"]

@QiJune (Collaborator) commented Aug 27, 2025

/bot skip --comment "B200 pipeline passed"

@QiJune enabled auto-merge (squash) on August 27, 2025 04:17
@tensorrt-cicd (Collaborator) commented:

PR_Github #16633 [ skip ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #16633 [ skip ] completed with state SUCCESS
Skipping testing for commit 21a2638

@QiJune merged commit 6c7813e into NVIDIA:main on Aug 27, 2025
4 checks passed
@tongyuantongyu deleted the ytong/unittest_parallel_update branch on August 27, 2025 10:29