
Conversation

@brb-nv brb-nv commented May 28, 2025

Description

This addresses https://nvbugspro.nvidia.com/bug/5301221.

Context
We use a trainer (transformers.Trainer) when creating dummy Medusa and Eagle heads. During the trainer's init, torch distributed training appears to be initialized, and it expects environment variables such as WORLD_SIZE, MASTER_PORT, MASTER_ADDR, and RANK.

To work around a peculiar issue in the transformers library, we unset WORLD_SIZE when running tests on specific cluster nodes, which causes this test to fail.

Proposed solution
Temporarily set the WORLD_SIZE environment variable during trainer init if it is missing.
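The proposed solution can be sketched with a small context manager; the helper name and the commented trainer call below are illustrative, not the code actually changed in this PR:

```python
import os
from contextlib import contextmanager


@contextmanager
def temporary_env(name, value):
    """Set `name` only if it is absent, and remove it again on exit.

    If the variable is already present (e.g. set by the launcher),
    leave it untouched so real multi-process runs are unaffected.
    """
    was_missing = name not in os.environ
    if was_missing:
        os.environ[name] = value
    try:
        yield
    finally:
        if was_missing:
            os.environ.pop(name, None)


# Hypothetical usage around the trainer construction:
# with temporary_env("WORLD_SIZE", "1"):
#     trainer = transformers.Trainer(...)
```

Scoping the variable to the `with` block keeps the single-process default from leaking into later tests that may depend on WORLD_SIZE being unset.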

Alternative
Skip the test when WORLD_SIZE is not set.
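The alternative could be expressed as a pytest guard; the marker and the test stub below are illustrative sketches, not the actual test code:

```python
import os

import pytest

# Hypothetical guard mirroring the alternative: skip the test when the
# cluster environment has WORLD_SIZE unset, instead of patching it in.
requires_world_size = pytest.mark.skipif(
    "WORLD_SIZE" not in os.environ,
    reason="WORLD_SIZE is unset on this node; trainer init would fail",
)


@requires_world_size
def test_mistral_nemo_eagle_1gpu():
    ...  # placeholder for the actual Eagle test body
```

The downside of this alternative is coverage: the test would silently stop running on exactly the nodes where WORLD_SIZE is unset.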

Test Coverage

$ pytest tests/integration/defs/examples/test_eagle.py::test_mistral_nemo_eagle_1gpu[Mistral-Nemo-12b-Base-eagle1] -s -v

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@brb-nv brb-nv requested a review from a team as a code owner May 28, 2025 21:27
@brb-nv brb-nv self-assigned this May 28, 2025
@brb-nv brb-nv requested a review from xinhe-nv May 28, 2025 21:27
@brb-nv brb-nv force-pushed the user/brb/skip-torch-dist-init branch from 4b36b7f to ea34d29 May 28, 2025 21:28
@brb-nv brb-nv commented May 28, 2025

/bot run

@tensorrt-cicd
PR_Github #6808 [ run ] triggered by Bot

@tensorrt-cicd
PR_Github #6808 [ run ] completed with state FAILURE
/LLM/release-0.20/L0_MergeRequest_PR pipeline #109 completed with status: 'FAILURE'

@brb-nv brb-nv force-pushed the user/brb/skip-torch-dist-init branch from ea34d29 to b1e51af May 29, 2025 02:51
@brb-nv brb-nv commented May 29, 2025

/bot run

@tensorrt-cicd
PR_Github #6849 [ run ] triggered by Bot

@tensorrt-cicd
PR_Github #6849 [ run ] completed with state SUCCESS
/LLM/release-0.20/L0_MergeRequest_PR pipeline #114 completed with status: 'FAILURE'

@brb-nv brb-nv force-pushed the user/brb/skip-torch-dist-init branch 2 times, most recently from 7c8651e to 205529b May 29, 2025 06:27
@brb-nv brb-nv commented May 29, 2025

/bot run

@tensorrt-cicd
PR_Github #6894 [ run ] triggered by Bot

@brb-nv brb-nv commented May 29, 2025

/bot run

@tensorrt-cicd
PR_Github #6897 [ run ] triggered by Bot

@tensorrt-cicd
PR_Github #6894 [ run ] completed with state ABORTED

@tensorrt-cicd
PR_Github #6897 [ run ] completed with state SUCCESS
/LLM/release-0.20/L0_MergeRequest_PR pipeline #123 completed with status: 'FAILURE'

@brb-nv brb-nv force-pushed the user/brb/skip-torch-dist-init branch from 3af0ecf to 8fadfec May 30, 2025 02:34
@brb-nv brb-nv changed the title from "fix: Skip torch distributed training for dummy heads creation" to "fix: Set missing env variable WORLD_SIZE temporarily for trainer init" May 30, 2025
@brb-nv brb-nv force-pushed the user/brb/skip-torch-dist-init branch from 8fadfec to a463b3d May 30, 2025 02:46
@brb-nv brb-nv commented May 30, 2025

/bot run

@tensorrt-cicd
PR_Github #7001 [ run ] triggered by Bot

@tensorrt-cicd
PR_Github #7001 [ run ] completed with state SUCCESS
/LLM/release-0.20/L0_MergeRequest_PR pipeline #131 completed with status: 'FAILURE'

@brb-nv brb-nv closed this May 30, 2025
@brb-nv brb-nv deleted the user/brb/skip-torch-dist-init branch July 11, 2025 23:26