
Conversation

@lucaslie (Member) commented May 15, 2025

This PR introduces several improvements to our HF model factory:

  • Renames the factory to AutoModelForCausalLM to reflect that there are separate factories for different auto model types
  • Improves weight loading by reusing HF utilities
  • Adds the ability to correctly customize the tokenizer via native HF if desired

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

@lucaslie lucaslie self-assigned this May 15, 2025
@lucaslie lucaslie moved this from Backlog to In review in AutoDeploy Board May 15, 2025
@Copilot Copilot AI (Contributor) left a comment

Pull Request Overview

This PR improves the HF model factory by renaming the model factory string, refining weight loading, and enhancing tokenizer customization. Key changes include updating method signatures to use a device parameter instead of keyword arguments, renaming the factory to "AutoModelForCausalLM", and patching configuration updates to recursively apply model_kwargs.

Reviewed Changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 1 comment.

Summary of changes per file:

  • tensorrt_llm/_torch/auto_deploy/transformations/transform.py: Updates the load_or_random_init call to pass device instead of kwargs.
  • tensorrt_llm/_torch/auto_deploy/shim/interface.py: Updates the model_factory default from "hf" to "AutoModelForCausalLM".
  • tensorrt_llm/_torch/auto_deploy/shim/demollm.py: Passes None instead of model to create_input_processor; behavior should be verified.
  • tensorrt_llm/_torch/auto_deploy/models/hf.py: Adds a recursive config update and adjusts checkpoint loading to use the device parameter.
  • tensorrt_llm/_torch/auto_deploy/models/factory.py: Modifies the load_or_random_init and _load_random_init signatures to require a device.
  • examples/auto_deploy/simple_config.py: Switches the model_factory type to a Literal for clarity.
  • examples/auto_deploy/README.md: Reflects the updated model factory naming in the supported models table.
  • examples/auto_deploy/.vscode/settings.json & launch.json: Updates the testing configuration to align with the new factory naming and environment paths.
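The recursive config update called out for hf.py can be pictured with a small sketch. The function and key names below are assumptions for illustration, not the actual implementation:

```python
def update_config_recursively(config: dict, model_kwargs: dict) -> dict:
    """Apply model_kwargs to a config dict, descending into nested
    sub-configs (e.g. a text or vision sub-config in composite models)
    instead of overwriting them wholesale."""
    for key, value in model_kwargs.items():
        if isinstance(value, dict) and isinstance(config.get(key), dict):
            # Merge into the nested sub-config rather than replacing it.
            update_config_recursively(config[key], value)
        else:
            config[key] = value
    return config
```

The point of recursing is that a user override such as `{"text_config": {"num_layers": 2}}` adjusts only that one nested key while leaving the rest of the sub-config intact.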
Comments suppressed due to low confidence (2)

tensorrt_llm/_torch/auto_deploy/shim/demollm.py:375

  • Passing None in place of a model instance to create_input_processor may lead to unintended behavior if the processor expects a valid model. Please verify that this change is intentional and that create_input_processor can handle a None value.
self.input_processor = create_input_processor(None, self.tokenizer)
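If the None is intentional, one defensive pattern for such a call site is an explicit fallback. The sketch below uses hypothetical class names and is not the actual create_input_processor implementation:

```python
class TokenizerOnlyProcessor:
    """Hypothetical processor that prepares inputs from the tokenizer alone."""
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

class ModelAwareProcessor(TokenizerOnlyProcessor):
    """Hypothetical processor that additionally consults the model."""
    def __init__(self, model, tokenizer):
        super().__init__(tokenizer)
        self.model = model

def create_input_processor(model, tokenizer):
    """Tolerate model=None by falling back to the tokenizer-only path."""
    if model is None:
        return TokenizerOnlyProcessor(tokenizer)
    return ModelAwareProcessor(model, tokenizer)
```

With a guard like this, passing None is a supported, documented mode rather than an accident waiting to surface at runtime.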

tensorrt_llm/_torch/auto_deploy/models/factory.py:87

  • Ensure that the updated signature using the device parameter is propagated to all calling contexts and that the corresponding documentation is updated for clarity.
def load_or_random_init(self, model: nn.Module, device: DeviceLikeType):
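To illustrate the comment, here is a stand-alone sketch of a function that threads an explicit device parameter through initialization (assuming PyTorch; the body is a stand-in for illustration, not the PR's implementation):

```python
import torch
import torch.nn as nn

def load_or_random_init(model: nn.Module, device: str) -> nn.Module:
    """Stand-in: move the model to the requested device and randomly
    initialize its parameters when no checkpoint is available."""
    model.to(device)
    with torch.no_grad():
        for param in model.parameters():
            param.normal_(mean=0.0, std=0.02)
    return model
```

Requiring device in the signature (instead of burying it in kwargs) makes every call site state explicitly where weights are materialized, e.g. `load_or_random_init(nn.Linear(4, 4), "cpu")`.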

@lucaslie (Member Author):

/bot run

1 similar comment
@lucaslie (Member Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #5413 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5415 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5413 [ run ] completed with state ABORTED

@lucaslie lucaslie added the AutoDeploy <NV> AutoDeploy Backend label May 15, 2025
@tensorrt-cicd (Collaborator):

PR_Github #5415 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #3952 completed with status: 'FAILURE'

@lucaslie lucaslie force-pushed the ll/hf_factory_v2 branch from 26e57e8 to 1270cc3 on May 16, 2025 15:10
@lucaslie (Member Author):

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-[Post-Merge]"

@tensorrt-cicd (Collaborator):

PR_Github #5522 [ run ] triggered by Bot

@lucaslie lucaslie enabled auto-merge (squash) May 16, 2025 15:38
@lucaslie lucaslie force-pushed the ll/hf_factory_v2 branch from 1270cc3 to 6ecb957 on May 16, 2025 19:43
@lucaslie (Member Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #5535 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5522 [ run ] completed with state ABORTED

@lucaslie (Member Author):

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-[Post-Merge]"

@tensorrt-cicd (Collaborator):

PR_Github #5542 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator):

PR_Github #5535 [ run ] completed with state ABORTED
/LLM/main/L0_MergeRequest_PR pipeline #4036 completed with status: 'FAILURE'

@tensorrt-cicd (Collaborator):

PR_Github #5542 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #4043 completed with status: 'FAILURE'

@lucaslie lucaslie force-pushed the ll/hf_factory_v2 branch from 6ecb957 to e742242 on May 19, 2025 17:47
@lucaslie (Member Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #5760 [ run ] triggered by Bot

@lucaslie lucaslie disabled auto-merge May 19, 2025 23:17
@tensorrt-cicd (Collaborator):

PR_Github #5760 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #4214 completed with status: 'SUCCESS'

@lucaslie lucaslie merged commit de409e8 into NVIDIA:main May 20, 2025
3 checks passed
@github-project-automation github-project-automation bot moved this from In review to Done in AutoDeploy Board May 20, 2025
lucaslie added a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request May 20, 2025
* [AutoDeploy] HF factory improvements

Signed-off-by: Lucas Liebenwein <[email protected]>

* improve monkey-patches and add unit tests

Signed-off-by: Lucas Liebenwein <[email protected]>

---------

Signed-off-by: Lucas Liebenwein <[email protected]>