[AutoDeploy] HF factory improvements #4371
Conversation
Pull Request Overview
This PR improves the HF model factory by renaming the model factory string, refining weight loading, and enhancing tokenizer customization. Key changes include updating method signatures to use a device parameter instead of keyword arguments, renaming the factory to "AutoModelForCausalLM", and patching configuration updates to recursively apply model_kwargs.
Reviewed Changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| tensorrt_llm/_torch/auto_deploy/transformations/transform.py | Updates the `load_or_random_init` call to pass `device` instead of kwargs. |
| tensorrt_llm/_torch/auto_deploy/shim/interface.py | Updates the `model_factory` default from `"hf"` to `"AutoModelForCausalLM"`. |
| tensorrt_llm/_torch/auto_deploy/shim/demollm.py | Passes `None` instead of `model` to `create_input_processor`; behavior should be verified. |
| tensorrt_llm/_torch/auto_deploy/models/hf.py | Adds a recursive config update and adjusts checkpoint loading to use the `device` parameter. |
| tensorrt_llm/_torch/auto_deploy/models/factory.py | Modifies the `load_or_random_init` and `_load_random_init` signatures to require a `device`. |
| examples/auto_deploy/simple_config.py | Switches the `model_factory` type to a `Literal` for clarity. |
| examples/auto_deploy/README.md | Reflects the updated model factory naming in the supported models table. |
| examples/auto_deploy/.vscode/settings.json & launch.json | Updates testing configuration to align with the new factory naming and environment paths. |
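To illustrate the recursive config update mentioned for `hf.py`, here is a minimal sketch. The function name and dict-based semantics are assumptions for illustration, not the actual TensorRT-LLM implementation; it shows how `model_kwargs` might be applied to a top-level config and to all nested sub-configs (e.g. a `text_config` in a multimodal model).

```python
# Hypothetical sketch: apply model_kwargs to a config and, recursively, to any
# nested dict-valued sub-configs. Only keys already present are overridden.

def recursive_update(config: dict, kwargs: dict) -> dict:
    # First recurse into nested sub-configs.
    for value in config.values():
        if isinstance(value, dict):
            recursive_update(value, kwargs)
    # Then override matching keys at this level.
    for key, value in kwargs.items():
        if key in config:
            config[key] = value
    return config

config = {
    "num_hidden_layers": 32,
    "text_config": {"num_hidden_layers": 24, "hidden_size": 2048},
}
recursive_update(config, {"num_hidden_layers": 2})
# Both the top-level and the nested num_hidden_layers are now 2;
# hidden_size is untouched because it was not in the kwargs.
```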
Comments suppressed due to low confidence (2)
tensorrt_llm/_torch/auto_deploy/shim/demollm.py:375

Passing `None` in place of a model instance to `create_input_processor` may lead to unintended behavior if the processor expects a valid model. Please verify that this change is intentional and that `create_input_processor` can handle a `None` value.

```python
self.input_processor = create_input_processor(None, self.tokenizer)
```
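For illustration of the reviewer's concern, here is a hedged sketch of an input processor that tolerates a `None` model by falling back to tokenizer-only preprocessing. All names here are hypothetical, not the actual TensorRT-LLM API.

```python
# Illustrative sketch: an input processor that degrades gracefully when no
# model instance is supplied.

class InputProcessor:
    def __init__(self, model, tokenizer):
        self.model = model  # may legitimately be None in tokenizer-only mode
        self.tokenizer = tokenizer

    def __call__(self, prompt: str):
        ids = self.tokenizer(prompt)
        if self.model is None:
            # No model-specific preprocessing (e.g. multimodal features);
            # return plain token ids.
            return ids
        return self.model.preprocess(ids)

def create_input_processor(model, tokenizer):
    return InputProcessor(model, tokenizer)

# Usage mirroring the reviewed call site, with a toy byte-level tokenizer:
proc = create_input_processor(None, lambda s: list(s.encode()))
```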
tensorrt_llm/_torch/auto_deploy/models/factory.py:87

Ensure that the updated signature using the `device` parameter is propagated to all calling contexts and that the corresponding documentation is updated for clarity.

```python
def load_or_random_init(self, model: nn.Module, device: DeviceLikeType):
```
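As a hypothetical illustration of the signature change (not the actual factory code), an explicit `device` parameter makes the target device part of the method's contract instead of an optional keyword that can silently default:

```python
# Sketch contrasting the old kwargs-based call with the new explicit parameter.

class ModelFactoryOld:
    def load_or_random_init(self, model, **kwargs):
        # Easy to miss: device silently defaults if the caller forgets it.
        model["device"] = kwargs.get("device", "cpu")
        return model

class ModelFactoryNew:
    def load_or_random_init(self, model, device):
        # Caller must decide where weights live; no hidden default.
        model["device"] = device
        return model

old = ModelFactoryOld().load_or_random_init({})           # falls back to "cpu"
new = ModelFactoryNew().load_or_random_init({}, "cuda:0")  # device is explicit
```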
/bot run
1 similar comment
/bot run
PR_Github #5413 [ run ] triggered by Bot |
PR_Github #5415 [ run ] triggered by Bot |
PR_Github #5413 [ run ] completed with state |
PR_Github #5415 [ run ] completed with state |
Force-pushed from 26e57e8 to 1270cc3
/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-[Post-Merge]"
PR_Github #5522 [ run ] triggered by Bot |
Force-pushed from 1270cc3 to 6ecb957
/bot run
PR_Github #5535 [ run ] triggered by Bot |
PR_Github #5522 [ run ] completed with state |
/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-[Post-Merge]"
PR_Github #5542 [ run ] triggered by Bot |
PR_Github #5535 [ run ] completed with state |
PR_Github #5542 [ run ] completed with state |
Signed-off-by: Lucas Liebenwein <[email protected]>
Force-pushed from 6ecb957 to e742242
/bot run
PR_Github #5760 [ run ] triggered by Bot |
PR_Github #5760 [ run ] completed with state |
* [AutoDeploy] HF factory improvements

  Signed-off-by: Lucas Liebenwein <[email protected]>

* improve monkey-patches and add unit tests

  Signed-off-by: Lucas Liebenwein <[email protected]>

---------

Signed-off-by: Lucas Liebenwein <[email protected]>
This PR introduces several improvements to our HF model factory:

- Renames the model factory to `AutoModelForCausalLM` to reflect that there are separate factories for different auto model types.
- Refines weight loading to take an explicit `device` parameter.
- Enhances tokenizer customization.

GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run
/bot [-h|--help]
to print this help message. See details below for each supported subcommand.
run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]
Launch build/test pipelines. All previously running jobs will be killed.
--disable-fail-fast
(OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test
(OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx"
(OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe"
(OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test
(OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test
(OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test
(OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge
(OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx"
(OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

kill
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
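The help text above describes an argparse-style CLI. As an illustration (not the actual Jenkins bot implementation), the subcommand structure can be sketched with Python's `argparse`:

```python
# Illustrative sketch of the /bot subcommand structure described above.
import argparse

parser = argparse.ArgumentParser(prog="/bot")
sub = parser.add_subparsers(dest="command", required=True)

run = sub.add_parser("run", help="Launch build/test pipelines.")
run.add_argument("--disable-fail-fast", action="store_true")
run.add_argument("--skip-test", action="store_true")
run.add_argument("--stage-list")
run.add_argument("--gpu-type")
run.add_argument("--add-multi-gpu-test", action="store_true")
run.add_argument("--only-multi-gpu-test", action="store_true")
run.add_argument("--disable-multi-gpu-test", action="store_true")
run.add_argument("--post-merge", action="store_true")
run.add_argument("--extra-stage")

sub.add_parser("kill", help="Kill all running builds for the pull request.")

skip = sub.add_parser("skip", help="Skip testing for the latest commit.")
skip.add_argument("--comment", required=True)

sub.add_parser("reuse-pipeline", help="Reuse a previous pipeline.")

# Parse one of the commands actually used in this PR's conversation:
args = parser.parse_args(
    ["run", "--disable-fail-fast",
     "--extra-stage", "DGX_H100-4_GPUs-PyTorch-[Post-Merge]"]
)
```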