[TRTLLM-7457][ci] Update & cleanup unittest parallel config #7254
Conversation
📝 Walkthrough
Replace/extend KvCacheConfig usage across unit tests to include a new max_tokens parameter, add an autouse pytest fixture that early-exits xdist workers on CUDA errors, improve Linux cleanup reporting to summarize PyTorch inductor worker PIDs, and change a test print to use warnings.warn.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
participant PyTest_Master as PyTest Master
participant Worker as xdist Worker
participant Test as Test Function
participant CUDA as torch.cuda
PyTest_Master->>Worker: dispatch test
rect rgba(230,240,255,0.6)
note right of Worker: autouse fixture: cuda_error_early_quit
Worker->>Test: execute
Test-->>Worker: return / raise
Worker->>CUDA: torch.cuda.synchronize()
alt RuntimeError with "CUDA error:"
Worker->>Worker: write traceback to stderr
Worker->>PyTest_Master: os._exit(1) (terminate)
note right of PyTest_Master: master restarts worker
else no CUDA error
Worker-->>PyTest_Master: report result
end
end
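For readers who want a concrete picture of the flow above, a minimal sketch of such an autouse fixture follows. The fixture name, the CUDA/xdist guard, and the os._exit(1) behavior come from this PR's walkthrough and review comments; the surrounding structure is an assumption, not the actual tests/unittest/conftest.py code.

```python
# A sketch only; the real fixture lives in tests/unittest/conftest.py in this PR.
import os
import sys
import traceback

import pytest
import torch


@pytest.fixture(autouse=True)
def cuda_error_early_quit():
    """Terminate the xdist worker on a persistent CUDA error so the master can restart it."""
    yield  # run the test body first
    # Only meaningful when CUDA is present and we are running under pytest-xdist.
    if not (torch.cuda.is_available() and os.environ.get("PYTEST_XDIST_WORKER")):
        return
    try:
        # Surface asynchronous CUDA errors raised by the test that just executed.
        torch.cuda.synchronize()
    except RuntimeError as e:
        if "CUDA error:" in str(e):
            # Three-argument form keeps this compatible with Python 3.8.
            traceback.print_exception(type(e), e, e.__traceback__, file=sys.stderr)
            os._exit(1)  # hard-exit the worker; pytest-xdist will restart it
        raise
```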
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Suggested reviewers
/bot run
PR_Github #16544 [ run ] triggered by Bot
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tests/integration/defs/trt_test_alternative.py (1)
1-1: Add required NVIDIA SPDX header
Per repository guidelines, prepend the NVIDIA copyright header to all Python sources.
Apply this diff:
+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
tests/unittest/_torch/speculative/test_eagle3.py (1)
1-1: Add required NVIDIA SPDX header
This file is missing the NVIDIA header required by coding guidelines.
Apply this diff:
+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
tests/unittest/_torch/speculative/test_ngram.py (1)
1-1: Add required NVIDIA SPDX header
Please prepend the standard NVIDIA header to comply with project policy.
Apply this diff:
+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
🧹 Nitpick comments (12)
tests/integration/defs/test_unittests.py (1)
104-106: Fix long line (E501) and make warning more explicit
Wrap the message to satisfy the 120-col limit and set an explicit category/stacklevel so the warning is easier to triage.
Apply this diff:
-        warnings.warn(
-            f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". Fallback to serial test. Please add config entry to agg_unit_mem_df.csv.'
-        )
+        warnings.warn(
+            (
+                f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". '
+                "Fallback to serial test. "
+                "Please add config entry to agg_unit_mem_df.csv."
+            ),
+            category=UserWarning,
+            stacklevel=1,
+        )
tests/integration/defs/trt_test_alternative.py (1)
104-111: Make inductor worker detection more robust
The pickler argument may appear as a separate "--pickler" arg or combined "--pickler=…". Matching on an exact position and string is brittle. Recommend detecting with any() across the cmdline.
Apply this diff:
-    # Detect repetitive torch inductor worker processes
-    if len(cmdline) > 3 and \
-            'python' in cmdline[0] and \
-            'torch/_inductor/compile_worker/__main__.py' in cmdline[1] and \
-            '--pickler=torch._inductor.compile_worker.subproc_pool.SubprocPickler' == cmdline[2]:
-        torch_inductors.append(pid)
-        continue
+    # Detect repetitive torch inductor worker processes
+    if (
+        any('torch/_inductor/compile_worker/__main__.py' in a for a in cmdline)
+        and any(
+            (a == '--pickler' or a.startswith('--pickler=')) and
+            'torch._inductor.compile_worker.subproc_pool.SubprocPickler' in a
+            for a in cmdline
+        )
+    ):
+        torch_inductors.append(pid)
+        continue
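As a standalone illustration of the any()-based matching suggested above, here is a small self-contained snippet that can be run outside the test harness. The helper name and the sample command lines are made up for demonstration; only the combined --pickler=… form is exercised.

```python
# Hypothetical helper mirroring the suggested detection; sample cmdlines are fabricated.
def is_inductor_worker(cmdline):
    return (
        any('torch/_inductor/compile_worker/__main__.py' in a for a in cmdline)
        and any(
            (a == '--pickler' or a.startswith('--pickler='))
            and 'torch._inductor.compile_worker.subproc_pool.SubprocPickler' in a
            for a in cmdline
        )
    )


worker_cmdline = [
    'python3',
    '/usr/lib/python3/dist-packages/torch/_inductor/compile_worker/__main__.py',
    '--pickler=torch._inductor.compile_worker.subproc_pool.SubprocPickler',
]
unrelated_cmdline = ['python3', '-c', 'print("hello")']

assert is_inductor_worker(worker_cmdline)
assert not is_inductor_worker(unrelated_cmdline)
```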
tests/unittest/_torch/speculative/test_eagle3.py (1)
192-206: Optional: align DeepSeek test with token-based KV sizing for consistency
The second test still uses free_gpu_memory_fraction=0.5. Since KvCacheConfig supports both knobs, it’s fine, but consider migrating to max_tokens as done above for consistency and easier cross-machine repros.
If you want, I can propose a max_tokens value derived from the model’s head count, dtype, and max_seq_len to approximate the current memory target.
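For reference, such a derivation usually amounts to dividing a target byte budget by the per-token KV-cache footprint. The sketch below is a back-of-the-envelope estimate for a standard multi-head/GQA attention cache (architectures such as MLA size their caches differently); every model number here is a placeholder, not the actual DeepSeek configuration.

```python
# Rough KV-cache sizing helper; all constants below are illustrative placeholders.
def estimate_max_tokens(target_bytes, num_layers, num_kv_heads, head_dim, dtype_bytes=2):
    # Per-token cost: K and V tensors, per layer, per KV head, per head dimension.
    bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes
    return target_bytes // bytes_per_token


# Aim for ~8 GiB of KV cache on a hypothetical 32-layer model with
# 8 KV heads of dimension 128 stored in fp16/bf16 (2 bytes per element).
print(estimate_max_tokens(8 * 1024**3, num_layers=32, num_kv_heads=8, head_dim=128))  # -> 65536
```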
tests/unittest/_torch/speculative/test_dynamic_spec_decode.py (3)
64-72: Remove duplicate sampling_params assignment to avoid confusion.
You set sampling_params twice; the first (max_tokens=128) is immediately overwritten (max_tokens=10). Drop the earlier assignment.
-    sampling_params = SamplingParams(max_tokens=128, temperature=0)
     ...
     sampling_params = SamplingParams(max_tokens=10, temperature=0)
29-44: Optionally derive max_tokens from test constants to keep them in sync.
To avoid future drift, consider tying KvCacheConfig(max_tokens=...) to the same constant used for max_seq_len.
-    kv_cache_config = KvCacheConfig(enable_block_reuse=True, max_tokens=8192)
+    TEST_MAX_SEQ_LEN = 8192
+    kv_cache_config = KvCacheConfig(enable_block_reuse=True, max_tokens=TEST_MAX_SEQ_LEN)
     ...
-    max_seq_len=8192,
+    max_seq_len=TEST_MAX_SEQ_LEN,
1-3: Add NVIDIA copyright header (2025).
Tests also require the header per repo guidelines.
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 import os
 import sys
 import unittest
tests/unittest/_torch/speculative/test_kv_cache_reuse.py (2)
39-51: Optionally compute max_tokens from max_seq_len to avoid mismatches.
Minor readability/maintainability win: reuse a shared constant for both max_tokens and max_seq_len.
-    kv_cache_config = KvCacheConfig(enable_block_reuse=True, max_tokens=8192)
+    TEST_MAX_SEQ_LEN = 8192
+    kv_cache_config = KvCacheConfig(enable_block_reuse=True, max_tokens=TEST_MAX_SEQ_LEN)
     ...
-    max_seq_len=8192,
+    max_seq_len=TEST_MAX_SEQ_LEN,
1-3: Add NVIDIA copyright header (2025).
Required by coding guidelines.
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 import os
 import sys
 import unittest
tests/unittest/_torch/speculative/test_draft_target.py (2)
35-44: Optional: unify token-related constants.
Consider introducing a single TEST_MAX_TOKENS for kv cache and generation limits when appropriate, to make the relationship explicit.
-    kv_cache_config = KvCacheConfig(enable_block_reuse=False, max_tokens=8192)
+    TEST_MAX_TOKENS = 8192
+    kv_cache_config = KvCacheConfig(enable_block_reuse=False, max_tokens=TEST_MAX_TOKENS)
     ...
-    max_num_tokens=2048,
+    max_num_tokens=2048,  # subset of TEST_MAX_TOKENS
1-3: Add NVIDIA copyright header (2025).
Applies to tests as well.
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 import os
 import sys
 import unittest
tests/unittest/_torch/speculative/test_user_provided.py (2)
35-44: Nit: avoid the backslash for line continuation in dict literal.
PEP 8 prefers implicit continuation inside parentheses; the backslash is unnecessary here.
-    llm_common_config = dict( \
+    llm_common_config = dict(
         model=llm_models_root() / "llama-3.1-model" / "Meta-Llama-3.1-8B",
         backend='pytorch',
         attn_backend=attn_backend,
         disable_overlap_scheduler=disable_overlap_scheduler,
         cuda_graph_config=cuda_graph_config,
         max_batch_size=max_batch_size,
         kv_cache_config=kv_cache_config,
         max_num_tokens=2048,
1-3: Add NVIDIA copyright header (2025).
Per coding guidelines.
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
 import os
 import sys
 import unittest
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
tests/integration/defs/agg_unit_mem_df.csv is excluded by !**/*.csv
📒 Files selected for processing (9)
- tests/integration/defs/test_unittests.py (2 hunks)
- tests/integration/defs/trt_test_alternative.py (1 hunks)
- tests/unittest/_torch/speculative/test_draft_target.py (1 hunks)
- tests/unittest/_torch/speculative/test_dynamic_spec_decode.py (1 hunks)
- tests/unittest/_torch/speculative/test_eagle3.py (1 hunks)
- tests/unittest/_torch/speculative/test_kv_cache_reuse.py (1 hunks)
- tests/unittest/_torch/speculative/test_ngram.py (1 hunks)
- tests/unittest/_torch/speculative/test_user_provided.py (1 hunks)
- tests/unittest/conftest.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures
Files:
- tests/integration/defs/test_unittests.py
- tests/unittest/conftest.py
- tests/unittest/_torch/speculative/test_kv_cache_reuse.py
- tests/unittest/_torch/speculative/test_user_provided.py
- tests/unittest/_torch/speculative/test_dynamic_spec_decode.py
- tests/unittest/_torch/speculative/test_eagle3.py
- tests/unittest/_torch/speculative/test_draft_target.py
- tests/unittest/_torch/speculative/test_ngram.py
- tests/integration/defs/trt_test_alternative.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)
Files:
- tests/integration/defs/test_unittests.py
- tests/unittest/conftest.py
- tests/unittest/_torch/speculative/test_kv_cache_reuse.py
- tests/unittest/_torch/speculative/test_user_provided.py
- tests/unittest/_torch/speculative/test_dynamic_spec_decode.py
- tests/unittest/_torch/speculative/test_eagle3.py
- tests/unittest/_torch/speculative/test_draft_target.py
- tests/unittest/_torch/speculative/test_ngram.py
- tests/integration/defs/trt_test_alternative.py
🧠 Learnings (1)
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
PR: NVIDIA/TensorRT-LLM#6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Applied to files:
- tests/unittest/_torch/speculative/test_kv_cache_reuse.py
- tests/unittest/_torch/speculative/test_user_provided.py
- tests/unittest/_torch/speculative/test_dynamic_spec_decode.py
- tests/unittest/_torch/speculative/test_draft_target.py
- tests/unittest/_torch/speculative/test_ngram.py
🧬 Code graph analysis (3)
tests/unittest/_torch/speculative/test_user_provided.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
KvCacheConfig(946-1077)
tests/unittest/_torch/speculative/test_draft_target.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
KvCacheConfig(946-1077)
tests/unittest/_torch/speculative/test_ngram.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
KvCacheConfig(946-1077)
🪛 Ruff (0.12.2)
tests/integration/defs/test_unittests.py
105-105: Line too long (158 > 120)
(E501)
tests/unittest/conftest.py
106-108: 1 blank line required between summary line and description
(D205)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tests/integration/defs/test_unittests.py (1)
16-16: Good switch to warnings module for non-fatal notices
Importing warnings aligns with the new warn-based flow instead of print. No issues.
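If it helps when auditing agg_unit_mem_df.csv coverage locally, a warning emitted this way can also be asserted on in a test, or escalated globally with pytest -W "error::UserWarning". A hedged sketch, with the helper and message paraphrased from this PR rather than copied:

```python
# Illustrative only: a stand-in for the fallback warning added in test_unittests.py.
import warnings

import pytest


def emit_fallback_warning(case, gpu_name):
    # Message paraphrased from this PR; the real text also points at agg_unit_mem_df.csv.
    warnings.warn(
        f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". '
        'Fallback to serial test.',
        category=UserWarning,
        stacklevel=2,
    )


def test_fallback_warning_is_emitted():
    with pytest.warns(UserWarning, match='parallel config entry'):
        emit_fallback_warning('some_case', 'A30')
```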
tests/integration/defs/trt_test_alternative.py (1)
117-121: Nice: summarize detected inductor workers separately
Clearer diagnostics without changing cleanup semantics. LGTM.
tests/unittest/_torch/speculative/test_eagle3.py (1)
49-49: KvCacheConfig: switch to explicit max_tokens looks good
Moving from implicit memory-fraction control to a token-based limit improves reproducibility across hosts. No functional issues seen.
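For context, the shape of this change across the speculative-decoding tests is roughly the following. The snippet is illustrative rather than copied from the tests, and the import path is an assumption based on the code-graph reference to tensorrt_llm/llmapi/llm_args.py.

```python
# Sketch of the before/after configuration; values other than max_tokens=8192 are illustrative.
from tensorrt_llm.llmapi import KvCacheConfig  # assumed public import path

# Before: KV cache sized as a fraction of whatever GPU memory happens to be free.
kv_cache_config_before = KvCacheConfig(enable_block_reuse=True,
                                       free_gpu_memory_fraction=0.5)

# After: an explicit token budget, reproducible across hosts with different memory headroom.
kv_cache_config_after = KvCacheConfig(enable_block_reuse=True, max_tokens=8192)
```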
tests/unittest/_torch/speculative/test_ngram.py (1)
30-30: KvCacheConfig token-based sizing LGTM
Explicit max_tokens=8192 is consistent with the rest of this PR and with the Llama3.1 test parameters. No concerns.
tests/unittest/_torch/speculative/test_dynamic_spec_decode.py (1)
29-29: KvCacheConfig(max_tokens) switch looks correct and aligns with llmapi.
Using max_tokens=8192 is consistent with max_seq_len=8192 specified later in llm_common_config. No functional concerns here.
tests/unittest/_torch/speculative/test_kv_cache_reuse.py (1)
35-35: KvCacheConfig(max_tokens) update is appropriate for this test.
Setting max_tokens=8192 aligns with the configured max_seq_len=8192 below, ensuring reuse capacity is sufficient for a single-request run.
tests/unittest/_torch/speculative/test_draft_target.py (1)
31-31: KvCacheConfig now takes max_tokens: change looks good.
enable_block_reuse=False is intentional here and the 8192 token budget comfortably covers max_num_tokens=2048 used later.
tests/unittest/_torch/speculative/test_user_provided.py (1)
31-31: KvCacheConfig(max_tokens) update is consistent across the suite.
Looks good with enable_block_reuse=False for this user-provided drafter scenario.
Signed-off-by: Yuan Tong <[email protected]>
9de2020 to 124ed81 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/unittest/_torch/speculative/test_kv_cache_reuse.py (1)
1-3: Missing SPDX/NVIDIA header
This file is missing the required NVIDIA SPDX header with the current year. Please add it to comply with repository policy.
Proposed header to prepend:
# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
♻️ Duplicate comments (1)
tests/unittest/conftest.py (1)
121-125: Fix traceback.print_exception for Python 3.8 compatibility
traceback.print_exception(e, ...) isn’t valid on Python 3.8; it requires (etype, value, tb). This will raise a TypeError and mask the original CUDA error in 3.8-based runners.
Apply this diff:
-    traceback.print_exception(e, file=sys.stderr)
+    traceback.print_exception(type(e), e, e.__traceback__, file=sys.stderr)
Also consider applying the same change in pytest_pyfunc_call for consistency (see note below).
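A quick self-contained check of the three-argument form, which behaves the same on Python 3.8 and on newer versions; the error text below is a stand-in, not a real CUDA failure.

```python
# Demonstrates the (etype, value, tb) form that works on Python 3.8+.
import sys
import traceback

try:
    raise RuntimeError("CUDA error: an illegal memory access was encountered")  # stand-in error
except RuntimeError as e:
    # traceback.print_exception(e) only works on Python 3.10+; 3.8/3.9 need the three arguments.
    traceback.print_exception(type(e), e, e.__traceback__, file=sys.stderr)
```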
🧹 Nitpick comments (5)
tests/integration/defs/test_unittests.py (2)
104-106: Break overly long warning line (E501) and add category/stacklevel
The warning message currently exceeds the 120-char limit and lacks an explicit category/stacklevel. Adding these improves lint compliance and call-site attribution in pytest output.
Apply this diff:
-        warnings.warn(
-            f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". Fallback to serial test. Please add config entry to agg_unit_mem_df.csv.'
-        )
+        warnings.warn(
+            (
+                f'Cannot find parallel config entry for unittest {case} on "{gpu_name}". '
+                'Fallback to serial test. Please add config entry to agg_unit_mem_df.csv.'
+            ),
+            category=UserWarning,
+            stacklevel=1,
+        )
1-2: Update SPDX year range to include 2025
Per the repo guideline “Prepend the NVIDIA copyright header (current year)”, update the header to 2022–2025.
tests/unittest/conftest.py (3)
106-112: Tighten and polish fixture docstring (grammar/clarity)
Minor grammar fixes and clearer phrasing.
Apply this diff:
- """ - Fixture to handle CUDA error. - - CUDA error are usually persistent that requires restart process to recover. - Immediately stop the current worker when CUDA error occurred. - It will then be restarted by the master process. - """ + """ + Handle persistent CUDA errors in xdist workers. + + CUDA errors are usually persistent and require a process restart to recover. + Immediately stop the current worker when a CUDA error occurs; the master + process will then restart the worker. + """
60-70: Mirror the traceback fix in pytest_pyfunc_call
For consistency and Python 3.8 support, update traceback.print_exception(e) here as well.
Example change:
# Replace:
traceback.print_exception(e)
# With:
traceback.print_exception(type(e), e, e.__traceback__)
1-2: Update SPDX year range to include 2025
Please update 2022–2024 to 2022–2025 to match the current year.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
tests/integration/defs/agg_unit_mem_df.csv is excluded by !**/*.csv
📒 Files selected for processing (9)
- tests/integration/defs/test_unittests.py (2 hunks)
- tests/integration/defs/trt_test_alternative.py (1 hunks)
- tests/unittest/_torch/speculative/test_draft_target.py (1 hunks)
- tests/unittest/_torch/speculative/test_dynamic_spec_decode.py (1 hunks)
- tests/unittest/_torch/speculative/test_eagle3.py (2 hunks)
- tests/unittest/_torch/speculative/test_kv_cache_reuse.py (1 hunks)
- tests/unittest/_torch/speculative/test_ngram.py (1 hunks)
- tests/unittest/_torch/speculative/test_user_provided.py (1 hunks)
- tests/unittest/conftest.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- tests/unittest/_torch/speculative/test_user_provided.py
- tests/unittest/_torch/speculative/test_draft_target.py
- tests/unittest/_torch/speculative/test_ngram.py
- tests/unittest/_torch/speculative/test_eagle3.py
- tests/integration/defs/trt_test_alternative.py
- tests/unittest/_torch/speculative/test_dynamic_spec_decode.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Code must target Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Preserve module namespaces when importing; import modules/packages and access members via the module (e.g., from package.subpackage import foo; foo.SomeClass())
Python file names should be snake_case
Python class names should be PascalCase
Python functions/methods and local variables should be snake_case; variables beginning with a number should be prefixed with k_ (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE prefixed with G_ (e.g., G_MY_GLOBAL); constants should be UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes; initialize all externally visible members in __init__
Prefer docstrings for interfaces used outside a file; comments should be reserved for in-function or file-local interfaces
Use Google-style docstrings for classes and functions; attributes and variables may be documented inline with trailing string literals
Avoid reflection when simpler, explicit code suffices (e.g., avoid dict(**locals()) patterns)
In try/except, catch the narrowest exceptions possible
For duck-typing patterns, keep the try body minimal and move logic to else to avoid masking unrelated failures
Files:
- tests/unittest/_torch/speculative/test_kv_cache_reuse.py
- tests/unittest/conftest.py
- tests/integration/defs/test_unittests.py
**/*.{c,cc,cpp,cxx,h,hh,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA copyright header (current year) to all source files (.cpp, .h, .cu, .py, etc.)
Files:
- tests/unittest/_torch/speculative/test_kv_cache_reuse.py
- tests/unittest/conftest.py
- tests/integration/defs/test_unittests.py
🧠 Learnings (1)
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
PR: NVIDIA/TensorRT-LLM#6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Applied to files:
tests/unittest/_torch/speculative/test_kv_cache_reuse.py
🧬 Code graph analysis (1)
tests/unittest/conftest.py (2)
cpp/tensorrt_llm/runtime/torchUtils.h (1)
cuda (129-132)
cpp/include/tensorrt_llm/runtime/cudaStream.h (1)
synchronize(81-96)
🪛 Ruff (0.12.2)
tests/integration/defs/test_unittests.py
105-105: Line too long (158 > 120)
(E501)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tests/integration/defs/test_unittests.py (1)
16-16: Import of warnings for surfaced fallback messaging — LGTM
Using warnings instead of print makes the fallback visible in CI while remaining non-fatal. No concerns.
tests/unittest/conftest.py (1)
104-129: Good guardrail: only active under CUDA + xdist
The conditional guard torch.cuda.is_available() and PYTEST_XDIST_WORKER is appropriate, and synchronizing post-test helps surface CUDA errors promptly. Nice.
PR_Github #16544 [ run ] completed with state
Before this PR:
After parallelizing unit tests:
/bot run --gpu-type "B200_PCIe"
QiJune left a comment
LGTM
PR_Github #16609 [ run ] triggered by Bot
PR_Github #16609 [ run ] completed with state
After this PR:
Slowest tests:
/bot skip --comment "B200 pipeline passed"
PR_Github #16633 [ skip ] triggered by Bot
PR_Github #16633 [ skip ] completed with state
Summary by CodeRabbit
New Features
Tests
Description
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- --post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see
docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
kill
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.