
Conversation

brb-nv
Collaborator

@brb-nv brb-nv commented Aug 14, 2025

Description

Gemma3 27B tests are failing on GB200 due to missing kernel support for the vision encoder.
https://nvbugspro.nvidia.com/bug/5451391

This PR waives these tests. The long-term fix is tracked in https://jirasw.nvidia.com/browse/TRTLLM-7237.
It also updates a few Gemma3 tests to make them less memory-demanding.

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to the ordinary L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
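
For example, a typical invocation from this thread combines several of the flags above in a single PR comment (shown purely to illustrate the syntax):

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1" --disable-fail-fast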

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without care and validation can break the top of tree.

Summary by CodeRabbit

  • Tests

    • Updated test parameterization to skip specific multimodal cases in certain CI contexts, added CLI test parameters to exercise larger-input and memory corner cases, and removed some automatic waivers so a previously skipped accuracy variant now runs; test behavior and outcomes are otherwise unchanged.
  • Chores

    • Extended public model initialization options to accept a GPU memory-fraction setting and larger batch/sequence size parameters.

@brb-nv brb-nv requested a review from a team as a code owner August 14, 2025 22:53
@brb-nv brb-nv requested a review from xinhe-nv August 14, 2025 22:53
Contributor

coderabbitai bot commented Aug 14, 2025

📝 Walkthrough


Adds skip markers to gemma-3-27b-it parameterizations in multimodal end-to-end tests, appends CLI args kv_cache_fraction=0.5 and max_seq_len=1024 to several multimodal quickstart commands, adds free_gpu_memory_fraction to KvCacheConfig usage and max_batch_size/max_seq_len to LLM usage in an accuracy test, and removes two waiver lines.

Changes

Cohort / File(s) Summary of Changes
E2E multimodal test parameterization
tests/integration/defs/test_e2e.py
Converted gemma-3-27b-it tuple entries to pytest.param(..., marks=...) adding skip_post_blackwell (and in one case skip_less_device_memory(80000)). Appended CLI args kv_cache_fraction=0.5 and max_seq_len=1024 to several quickstart multimodal commands.
Accuracy test API usage & signatures
tests/integration/defs/accuracy/test_llm_api_pytorch.py
Updated test to construct KvCacheConfig(..., free_gpu_memory_fraction=0.5) and to call LLM(..., max_batch_size=128, max_seq_len=4096). These changes reflect added/accepted kwargs on KvCacheConfig.__init__ and LLM.__init__.
Test waivers list
tests/integration/test_lists/waives.txt
Removed two waiver lines targeting accuracy/test_llm_api_pytorch.py::TestMistralSmall24B::test_auto_dtype (bracketed scratch-path variants); no additions.
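
To make the parameterization change concrete, here is a minimal sketch of the pytest.param conversion described above (the list name and the other entries are placeholders; skip_post_blackwell and skip_less_device_memory are the marks named in the walkthrough):

import pytest
from .conftest import skip_post_blackwell  # mark provided by the suite's conftest

example_models = [
    # ...other entries unchanged...
    # Tuple converted to pytest.param so the Gemma3 27B case is skipped on
    # post-Blackwell devices and on GPUs with less than 80 GB of memory.
    pytest.param("gemma-3-27b-it",
                 "gemma/gemma-3-27b-it",
                 marks=(skip_post_blackwell,
                        pytest.mark.skip_less_device_memory(80000))),
]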

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • EmmaQiaoCh
  • venkywonka
  • liji-nv
  • yiqingy0

Tip

🔌 Remote MCP (Model Context Protocol) integration is now available!

Pro plan users can now connect to remote MCP servers from the Integrations page. Connect with popular remote MCPs such as Notion and Linear to add more context to your reviews and chats.


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 789a00d and f6be337.

📒 Files selected for processing (3)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/defs/test_e2e.py (6 hunks)
  • tests/integration/test_lists/waives.txt (0 hunks)
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/waives.txt
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/defs/test_e2e.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR/Issue comments)

Type @coderabbitai help to get the list of available commands.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai or @coderabbitai title anywhere in the PR title to generate the title automatically.

Status, Documentation and Community

  • Visit our Status Page to check the current availability of CodeRabbit.
  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/integration/test_lists/waives.txt (1)

288-290: Append Jira tracking ID for traceability.

Add TRTLLM-7237 to the waiver reason so it’s easy to discover and remove once fixed.

-full:GB200/test_e2e.py::test_ptp_quickstart_multimodal[gemma-3-27b-it-gemma/gemma-3-27b-it-image-True] SKIP (https://nvbugs/5451391)
-full:GB200/test_e2e.py::test_ptp_quickstart_multimodal_2gpu[gemma-3-27b-it-gemma/gemma-3-27b-it] SKIP (https://nvbugs/5451391)
-full:GB200/test_e2e.py::test_ptp_quickstart_multimodal_multiturn[gemma-3-27b-it-gemma/gemma-3-27b-it] SKIP (https://nvbugs/5451391)
+full:GB200/test_e2e.py::test_ptp_quickstart_multimodal[gemma-3-27b-it-gemma/gemma-3-27b-it-image-True] SKIP (https://nvbugs/5451391; tracked via TRTLLM-7237)
+full:GB200/test_e2e.py::test_ptp_quickstart_multimodal_2gpu[gemma-3-27b-it-gemma/gemma-3-27b-it] SKIP (https://nvbugs/5451391; tracked via TRTLLM-7237)
+full:GB200/test_e2e.py::test_ptp_quickstart_multimodal_multiturn[gemma-3-27b-it-gemma/gemma-3-27b-it] SKIP (https://nvbugs/5451391; tracked via TRTLLM-7237)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between a8618b2 and 1837c4c.

📒 Files selected for processing (1)
  • tests/integration/test_lists/waives.txt (1 hunks)
🔇 Additional comments (1)
tests/integration/test_lists/waives.txt (1)

288-290: Waiver scope and rationale look correct; change aligns with PR intent.

Entries are GB200-scoped only, target the Gemma-3 27B multimodal quickstart variants, and reference the NV bug. No API/test logic changes introduced.

@brb-nv brb-nv force-pushed the user/brb/waive-gb200-tests branch 2 times, most recently from 03b7254 to 5083f8c on August 15, 2025 at 15:49
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/defs/test_e2e.py (1)

2129-2131: Optional: Add inline reference to NVBug/Jira to ease future un-waive

Add a short comment pointing to NVBug 5451391 and TRTLLM-7237 near the mark so the waiver’s reason and cleanup trigger are obvious to future maintainers.

-    pytest.param("gemma-3-27b-it",
-                 "gemma/gemma-3-27b-it",
-                 marks=(skip_post_blackwell,
-                        pytest.mark.skip_less_device_memory(80000))),
+    pytest.param("gemma-3-27b-it",
+                 "gemma/gemma-3-27b-it",
+                 marks=(skip_post_blackwell,
+                        pytest.mark.skip_less_device_memory(80000))),  # TEMP: waive on GB200 due to missing vision encoder kernels (NVBug 5451391); track removal via TRTLLM-7237
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 03b7254 and 5083f8c.

📒 Files selected for processing (1)
  • tests/integration/defs/test_e2e.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/integration/defs/test_e2e.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/integration/defs/test_e2e.py
🧠 Learnings (1)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/defs/test_e2e.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tests/integration/defs/test_e2e.py (3)

2129-2131: LGTM: Waiver correctly scoped to GB200 (Blackwell) for Gemma3 27B

Using pytest.param with marks=(skip_post_blackwell, pytest.mark.skip_less_device_memory(80000)) precisely targets GB200 while keeping existing memory gating. Matches PR intent without touching logic.


2429-2431: LGTM: 2-GPU multimodal case correctly waived on GB200

Param-level skip_post_blackwell is appropriate here; function-level device/memory marks remain intact. No behavioral changes beyond the intended waiver.


2532-2534: LGTM: Multiturn multimodal case correctly waived on GB200

The targeted skip for gemma-3-27b-it aligns with the temporary waiver; keeps the rest of the suite unaffected.

@brb-nv
Collaborator Author

brb-nv commented Aug 15, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15454 [ run ] triggered by Bot

@brb-nv brb-nv force-pushed the user/brb/waive-gb200-tests branch from 5083f8c to 46c82ab on August 15, 2025 at 17:10
@brb-nv
Collaborator Author

brb-nv commented Aug 15, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15465 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15454 [ run ] completed with state ABORTED

@tensorrt-cicd
Collaborator

PR_Github #15465 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #148 completed with status: 'FAILURE'

@brb-nv
Collaborator Author

brb-nv commented Aug 15, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #15485 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15485 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #152 completed with status: 'FAILURE'

@brb-nv brb-nv force-pushed the user/brb/waive-gb200-tests branch from 46c82ab to c8c294a on August 15, 2025 at 23:46
@brb-nv
Collaborator Author

brb-nv commented Aug 15, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 46c82ab and c8c294a.

📒 Files selected for processing (2)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/defs/test_e2e.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/defs/test_e2e.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/defs/test_e2e.py
🧠 Learnings (1)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/defs/test_e2e.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (5)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (2)

760-761: LGTM: memory-aware KV cache config for Gemma3

Using free_gpu_memory_fraction=0.5 together with reuse disabled is a sensible WAR given current kernel gaps for Gemma3. No issues.


767-769: LGTM: explicit CUDA graph opt-out and capacity limits

Explicitly disabling CUDA graphs and bounding max_batch_size/max_seq_len for Gemma3 is aligned with the multimodal path constraints (custom image masks via FLASHINFER). Looks good.
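
For reference, a minimal sketch of the reviewed construction using the public LLM API (the model path is a placeholder, and the CUDA graph opt-out kwarg is omitted since only its effect is described above):

from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

# Block reuse disabled plus a 0.5 free-GPU-memory fraction, as noted in the review.
kv_cache_config = KvCacheConfig(enable_block_reuse=False,
                                free_gpu_memory_fraction=0.5)

llm = LLM(model="<path-to-gemma-3-27b-it>",  # placeholder; the test resolves its own model dir
          kv_cache_config=kv_cache_config,
          max_batch_size=128,
          max_seq_len=4096)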

tests/integration/defs/test_e2e.py (3)

2372-2374: LGTM: Gemma3-specific CLI WARs (kv_cache_fraction, max_seq_len)

Injecting kv_cache_fraction=0.5 and max_seq_len=1024 for Gemma3 in the quickstart multimodal path is consistent with the FLASHINFER/custom-mask constraints and helps avoid OOM. Good change.


2609-2611: LGTM: Gemma3-specific CLI WARs for 2-GPU path

Same rationale as single-GPU: kv_cache_fraction=0.5 and max_seq_len=1024 are appropriate for Gemma3 with FLASHINFER in 2-GPU runs.


2709-2710: LGTM: Gemma3 multiturn WARs mirrored

Consistent use of kv_cache_fraction=0.5 and max_seq_len=1024 for the multiturn multimodal scenario. Good.
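
Taken together, the quickstart-side changes amount to appending two extra CLI flags when the model under test is Gemma3. A rough sketch (the guard condition and variable names are illustrative, not the exact test code):

# After the quickstart multimodal command is assembled:
if "gemma-3" in model_name:
    # Reduce memory pressure for Gemma3 on the FLASHINFER path.
    cmd.append("--kv_cache_fraction=0.5")
    cmd.append("--max_seq_len=1024")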

@tensorrt-cicd
Collaborator

PR_Github #15493 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15493 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #156 completed with status: 'FAILURE'

@brb-nv
Collaborator Author

brb-nv commented Aug 16, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #15499 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15499 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #157 completed with status: 'FAILURE'

@brb-nv brb-nv force-pushed the user/brb/waive-gb200-tests branch from c8c294a to 7e5471a on August 17, 2025 at 21:59
@brb-nv
Collaborator Author

brb-nv commented Aug 17, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (3)
tests/integration/defs/test_e2e.py (3)

2235-2236: GB200 waiver won’t trigger with skip_post_blackwell; use a GB200-specific skip mark

This won’t skip on GB200 (Blackwell). Replace with the GB200-targeted mark to actually waive these tests per PR intent.

Apply this diff:

-                 marks=(skip_post_blackwell,
+                 marks=(skip_device_contain_gb200,
                    pytest.mark.skip_less_device_memory(80000))),

Add the import (outside this hunk) so the mark resolves:

# in: from .conftest import (...)
from .conftest import (..., skip_device_contain_gb200, ...)

Run this to confirm the mark exists and is registered in conftest:

#!/bin/bash
# Locate conftest and verify the GB200-specific mark
fd -t f conftest.py | xargs -I{} rg -n -C3 'skip_device_contain_gb200|skip_device_contain\s*\(\s*\["GB200"\s*\]' {}

2536-2537: Same issue: this won’t skip on GB200

Replace skip_post_blackwell with the GB200-specific mark so GB200 is actually waived.

Apply this diff:

-    pytest.param(
-        "gemma-3-27b-it", "gemma/gemma-3-27b-it", marks=skip_post_blackwell),
+    pytest.param(
+        "gemma-3-27b-it", "gemma/gemma-3-27b-it", marks=skip_device_contain_gb200),

If skip_device_contain_gb200 is not available, fallback to the generic mark if present:

-    marks=skip_post_blackwell
+    marks=pytest.mark.skip_device_contain(["GB200"])

2642-2643: Same issue: GB200 still not waived here

Replace skip_post_blackwell with a GB200-targeted skip.

Apply this diff:

-    pytest.param(
-        "gemma-3-27b-it", "gemma/gemma-3-27b-it", marks=skip_post_blackwell),
+    pytest.param(
+        "gemma-3-27b-it", "gemma/gemma-3-27b-it", marks=skip_device_contain_gb200),
🧹 Nitpick comments (3)
tests/integration/defs/test_e2e.py (3)

2372-2373: Use shared constant for kv_cache_fraction for consistency

Leverage the existing _MEM_FRACTION_50 constant instead of hardcoding 0.5.

Apply this diff:

-        cmd.append("--kv_cache_fraction=0.5")
+        cmd.append(f"--kv_cache_fraction={_MEM_FRACTION_50}")

2610-2611: Minor consistency: reuse _MEM_FRACTION_50 instead of literal 0.5

Keeps memory tuning centralized.

Apply this diff:

-        cmd.append("--kv_cache_fraction=0.5")
+        cmd.append(f"--kv_cache_fraction={_MEM_FRACTION_50}")

2709-2710: Minor consistency: reuse _MEM_FRACTION_50 instead of literal 0.5

Same rationale as above; aligns with other tests in this file.

Apply this diff:

-        cmd.append("--kv_cache_fraction=0.5")
+        cmd.append(f"--kv_cache_fraction={_MEM_FRACTION_50}")
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c8c294a and 7e5471a.

📒 Files selected for processing (2)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/defs/test_e2e.py (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/integration/defs/test_e2e.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/integration/defs/test_e2e.py
🧠 Learnings (1)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/defs/test_e2e.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd
Collaborator

PR_Github #15545 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15636 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15636 [ run ] completed with state FAILURE
/LLM/release-1.0/L0_MergeRequest_PR pipeline #179 completed with status: 'FAILURE'

@brb-nv
Collaborator Author

brb-nv commented Aug 18, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #15643 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15643 [ run ] completed with state FAILURE
/LLM/release-1.0/L0_MergeRequest_PR pipeline #181 completed with status: 'FAILURE'

@brb-nv brb-nv force-pushed the user/brb/waive-gb200-tests branch from c248fbf to 789a00d on August 18, 2025 at 17:54
@brb-nv
Collaborator Author

brb-nv commented Aug 18, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #15650 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15650 [ run ] completed with state FAILURE
/LLM/release-1.0/L0_MergeRequest_PR pipeline #183 completed with status: 'FAILURE'

@brb-nv brb-nv force-pushed the user/brb/waive-gb200-tests branch from 789a00d to f6be337 on August 18, 2025 at 18:59
@brb-nv
Collaborator Author

brb-nv commented Aug 18, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #15655 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15655 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #185 completed with status: 'FAILURE'

@brb-nv
Collaborator Author

brb-nv commented Aug 18, 2025

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #15666 [ run ] triggered by Bot

@brb-nv brb-nv enabled auto-merge (squash) August 19, 2025 02:58
@chzblych
Collaborator

/bot run --extra-stage "H100_PCIe-PyTorch-Post-Merge-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #15710 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #15666 [ run ] completed with state ABORTED

@tensorrt-cicd
Collaborator

PR_Github #15710 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #202 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@brb-nv brb-nv merged commit da91256 into NVIDIA:release/1.0 Aug 19, 2025
5 checks passed
yuanjingx87 pushed a commit that referenced this pull request Aug 20, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 5, 2025
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 5, 2025
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 6, 2025
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 6, 2025
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 7, 2025
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>