
@Superjomn (Collaborator) commented Aug 12, 2025

Summary by CodeRabbit

  • Tests
    • Added coverage for very-long prompts in completions to validate handling of extreme inputs.
    • Introduced a regression test enforcing maximum token limits, raising clear errors when prompts exceed configured thresholds.
  • Bug Fixes
    • Forward existing error responses unchanged across process boundaries to avoid duplicate or malformed error results.

These changes improve robustness and reliability for edge-case and error scenarios.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
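
For illustration, a combined invocation of the flags above (the usage line indicates they can be combined; the stage name here is an example only) might look like:

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"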

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@coderabbitai bot (Contributor) commented Aug 12, 2025

📝 Walkthrough

Walkthrough

Adds two unit tests: an OpenAI-style long-prompt invocation (no assertions) and a PyTorch regression asserting a ValueError when prompt tokens exceed max_num_tokens. Also updates the TensorRT-LLM IPC worker to forward existing ErrorResponse objects unchanged.

Changes

Cohort / File(s) Summary of changes
OpenAI Completions Long Prompt Test
tests/unittest/llmapi/apps/_test_openai_completions.py
Adds test_single_completion_with_too_long_prompt that builds a long prompt (repeats "Hello, my name is" 100×), calls completions.create with max_tokens=5 and temperature=0.0, and prints the completion; contains no assertions.
PyTorch LLM max_num_tokens Enforcement
tests/unittest/llmapi/test_llm_pytorch.py
Adds TestLlmError.test_max_num_token_check which configures an LLM with max_num_tokens=100, constructs a 101-token prompt (uses random.randint to build token ids) and asserts a ValueError with message matching "should not exceed max_num_tokens".
TensorRT LLM IPC worker error passthrough
tensorrt_llm/executor/worker.py
In handle_for_ipc_batched, short-circuits items that are already ErrorResponse by forwarding them unchanged; non-ErrorResponse items retain existing background-error and has_error handling and may be converted/wrapped as before.

Sequence Diagram(s)

sequenceDiagram
    participant Creator
    participant Worker
    participant IPC

    Creator->>Worker: produce response item (ErrorResponse or normal)
    Worker->>Worker: is item an ErrorResponse?
    alt ErrorResponse
        Worker->>IPC: forward ErrorResponse unchanged
    else Not ErrorResponse
        Worker->>Worker: if background_error -> _create_error_response
        Worker->>Worker: else if response.has_error -> wrap into ErrorResponse
        Worker->>Worker: else -> _maybe_wrap_response
        Worker->>IPC: forward wrapped/normal response
    end
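To make the forwarding rules concrete, here is a small self-contained Python sketch of the behavior described above. Only the names ErrorResponse, handle_for_ipc_batched, has_error, and _maybe_wrap_response come from this PR's summary; the stand-in dataclasses and the list-returning shape are simplifications for illustration, not the actual tensorrt_llm/executor/worker.py implementation.

from dataclasses import dataclass
from typing import List, Optional, Union


@dataclass
class ErrorResponse:  # name taken from the walkthrough above
    client_id: int
    error_msg: str


@dataclass
class PlainResponse:  # simplified stand-in for tllm.Response
    client_id: int
    error_msg: Optional[str] = None

    def has_error(self) -> bool:
        return self.error_msg is not None


def handle_for_ipc_batched(
        responses: List[Union[PlainResponse, ErrorResponse]],
        background_error: Optional[str] = None
) -> List[Union[PlainResponse, ErrorResponse]]:
    """Sketch of the forwarding rules; the real method sends over IPC instead of returning."""
    forwarded = []
    for rsp in responses:
        if isinstance(rsp, ErrorResponse):
            # The fix in this PR: an item that is already an ErrorResponse
            # (e.g. produced by an earlier submit failure) is forwarded
            # unchanged rather than being re-inspected via has_error().
            forwarded.append(rsp)
        elif background_error is not None:
            # A background error takes precedence for normal responses.
            forwarded.append(ErrorResponse(rsp.client_id, background_error))
        elif rsp.has_error():
            # A per-response error gets wrapped into an ErrorResponse.
            forwarded.append(ErrorResponse(rsp.client_id, rsp.error_msg))
        else:
            # The real code may further wrap this (_maybe_wrap_response).
            forwarded.append(rsp)
    return forwarded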

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes


Suggested reviewers

  • kaiyux
  • chzblych
  • syuoni


@Superjomn (Collaborator, Author) commented:

/bot run

@Superjomn Superjomn force-pushed the _trtllm.fix-trtllm-serve-bad-state branch from 2c22d7b to a844ef7 Compare August 12, 2025 09:43
@Superjomn Superjomn requested a review from a team as a code owner August 12, 2025 09:43
@Superjomn Superjomn requested a review from syuoni August 12, 2025 09:43
@Superjomn (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #14943 [ run ] triggered by Bot

@Superjomn (Collaborator, Author) commented:

/bot kill

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
tests/unittest/llmapi/test_llm_pytorch.py (1)

814-824: Fix method signature (missing self), ensure deterministic inputs, and manage LLM lifetime

  • Pytest collects instance methods; missing self will raise a TypeError.
  • Avoid randomness in tests; use deterministic token IDs.
  • Prefer using the LLM context manager to ensure cleanup.

Apply this diff:

-class TestLlmError:
-
-    def test_max_num_token_check():
-        """ LLM should raise error when got prompt length exceed the valid range. """
-        llm = LLM(llama_model_path,
-                  kv_cache_config=global_kvcache_config,
-                  max_num_tokens=100)
-
-        with pytest.raises(ValueError,
-                           match="should not exceed max_num_tokens"):
-            ids = [random.randint(10, 100) for _ in range(101)]
-            llm.generate([ids])
+class TestLlmError:
+
+    def test_max_num_token_check(self):
+        """LLM should raise error when prompt length exceeds max_num_tokens."""
+        llm = LLM(model=llama_model_path,
+                  kv_cache_config=global_kvcache_config,
+                  max_num_tokens=100)
+        ids = [42] * 101  # deterministic, valid token IDs
+        with llm:
+            with pytest.raises(ValueError, match="should not exceed max_num_tokens"):
+                llm.generate([ids])
🧹 Nitpick comments (1)
tests/unittest/llmapi/test_llm_pytorch.py (1)

1-1: Remove unnecessary randomness import after making the test deterministic

Once the test uses fixed token IDs, this import becomes unused.

Apply this diff:

-import random
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8845e0f and 2c22d7b.

📒 Files selected for processing (2)
  • tests/unittest/llmapi/apps/_test_openai_completions.py (1 hunks)
  • tests/unittest/llmapi/test_llm_pytorch.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL).
Python constants should use upper snake_case (e.g., MY_CONSTANT).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a Python file, prefer docstrings over comments.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the class docstring.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.

Files:

  • tests/unittest/llmapi/apps/_test_openai_completions.py
  • tests/unittest/llmapi/test_llm_pytorch.py
**/*.{cpp,h,hpp,cc,cxx,cu,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.

Files:

  • tests/unittest/llmapi/apps/_test_openai_completions.py
  • tests/unittest/llmapi/test_llm_pytorch.py
🔇 Additional comments (1)
tests/unittest/llmapi/apps/_test_openai_completions.py (1)

83-94: Convert to deterministic assertion-based test for over-limit prompts
This test currently just prints the response and makes no assertions, so it will pass even if the prompt never exceeds the actual context limit. Instead, we should:

  • Remove the print call.
  • Force a small context window via extra_body={"max_num_tokens": 100}.
  • Use a list of token IDs (e.g. [0] * 101) to deterministically exceed that limit.
  • Assert that the request is rejected with a 400 BadRequestError (matching an “exceed”/“max_num_tokens”/“context” message).
  • If the API actually returns a truncated completion (e.g. finish_reason == "length"), adjust the assertion accordingly.

Proposed diff:

 def test_single_completion_with_too_long_prompt(client: openai.OpenAI,
                                                 model_name):
-    completion = client.completions.create(
-        model=model_name,
-        prompt="Hello, my name is" * 100,
-        max_tokens=5,
-        temperature=0.0,
-    )
-
-    print(completion)
+    # Exceed a small configured context window to force a rejection
+    too_long_prompt = [0] * 101
+    with pytest.raises(openai.BadRequestError, match="(exceed|max_num_tokens|context)"):
+        client.completions.create(
+            model=model_name,
+            prompt=too_long_prompt,
+            max_tokens=5,
+            temperature=0.0,
+            extra_body={"max_num_tokens": 100},
+        )

Please verify whether the OpenAI completions endpoint returns a 400 BadRequest for over-limit prompts. If it instead returns a truncated response (finish_reason == "length"), update the assertion to match that behavior.
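
If verification shows the endpoint truncates rather than rejects, a possible variant of the assertion (reusing the same fixtures as above and the standard OpenAI client fields) could be:

    completion = client.completions.create(
        model=model_name,
        prompt=[0] * 101,
        max_tokens=5,
        temperature=0.0,
        extra_body={"max_num_tokens": 100},
    )
    assert completion.choices[0].finish_reason == "length"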

@tensorrt-cicd (Collaborator) commented:

PR_Github #14944 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #14943 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator) commented:

PR_Github #14945 [ kill ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #14944 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator) commented:

PR_Github #14945 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit a844ef7

@Superjomn Superjomn force-pushed the _trtllm.fix-trtllm-serve-bad-state branch 2 times, most recently from 96e52ed to a00e69d Compare August 12, 2025 12:13
@Superjomn (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #14968 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #14968 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11300 completed with status: 'FAILURE'

Signed-off-by: Superjomn <[email protected]>
@Superjomn Superjomn force-pushed the _trtllm.fix-trtllm-serve-bad-state branch from a00e69d to 9aa753b Compare August 13, 2025 04:31
@Superjomn (Collaborator, Author) commented:

/bot run

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
tensorrt_llm/executor/worker.py (4)

839-845: Broaden the type hint to reflect actual payload (responses may include ErrorResponse)

responses can include ErrorResponse (from temp_error_responses). Update the signature to avoid type-checker false positives and improve readability.

Apply this diff:

-def responses_handler(self, responses: List[tllm.Response]):
+def responses_handler(self, responses: List[Union[tllm.Response, ErrorResponse]]):

916-918: Same type-hint fix for IPC handler

This handler also receives ErrorResponse objects now. Align the signature accordingly.

Apply this diff:

-def handle_for_ipc_batched(self, responses: List[tllm.Response]) -> None:
+def handle_for_ipc_batched(self, responses: List[Union[tllm.Response, ErrorResponse]]) -> None:

1000-1005: Fix rsp_batch typing to match actual content (ResponseWrapper and ErrorResponse are appended)

rsp_batch may contain ResponseWrapper and ErrorResponse in addition to tllm.Response. Update the annotation to prevent confusion and static analysis warnings.

Apply this diff:

-def _send_rsp(
+def _send_rsp(
         worker,
-        response: Union[tllm.Response, ResponseWrapper, ErrorResponse],
-        postproc_batches: Optional[List[List["PostprocWorker.Input"]]] = None,
-        rsp_batch: Optional[List[tllm.Response]] = None):
+        response: Union[tllm.Response, ResponseWrapper, ErrorResponse],
+        postproc_batches: Optional[List[List["PostprocWorker.Input"]]] = None,
+        rsp_batch: Optional[List[Union[tllm.Response, ResponseWrapper, ErrorResponse]]] = None):

916-923: Nit: Clarify docstring to reflect mixed response types

The docstring for handle_for_ipc_batched currently doesn't mention that responses can include ErrorResponse and ResponseWrapper. A short note helps future readers.

Would you like me to draft a concise docstring update that explicitly calls out accepted response types and the no-wrap behavior for ErrorResponse?
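
For example, a Google-style docstring along the lines this comment suggests (wording is illustrative, not taken from the repository) could read:

    def handle_for_ipc_batched(self, responses) -> None:
        """Forward a batch of responses across the IPC boundary.

        Args:
            responses: Items may be ``tllm.Response``, ``ResponseWrapper``, or
                ``ErrorResponse``. ``ErrorResponse`` items are forwarded
                unchanged (no re-wrapping); other items may be converted to
                ``ErrorResponse`` on background or per-response errors, or
                wrapped before being sent.
        """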

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a00e69d and 9aa753b.

📒 Files selected for processing (3)
  • tensorrt_llm/executor/worker.py (1 hunks)
  • tests/unittest/llmapi/apps/_test_openai_completions.py (1 hunks)
  • tests/unittest/llmapi/test_llm_pytorch.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/unittest/llmapi/test_llm_pytorch.py
  • tests/unittest/llmapi/apps/_test_openai_completions.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tensorrt_llm/executor/worker.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/executor/worker.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/executor/worker.py (1)

926-937: Robust handling for prebuilt ErrorResponse (prevents AttributeError in IPC path) — LGTM

Adding the early isinstance(ErrorResponse) guard ensures we don't call Response.has_error() on an ErrorResponse coming from temp_error_responses (e.g., submit failures). This fixes a real crash in the IPC-batched path and preserves the original error payload. Looks good.

@tensorrt-cicd (Collaborator) commented:

PR_Github #15069 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #15069 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11379 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@Superjomn Superjomn enabled auto-merge (squash) August 14, 2025 05:30
@Superjomn Superjomn merged commit 0132c1d into NVIDIA:main Aug 14, 2025
4 checks passed
@Superjomn Superjomn deleted the _trtllm.fix-trtllm-serve-bad-state branch September 15, 2025 08:00