
Conversation

namgyu-youn
Contributor

@namgyu-youn namgyu-youn commented Jul 31, 2025

Summary:
Converts the TorchAO model (Transformer) inference test from pytest to unittest. Annotations are added for better readability.

cc @osbm


pytorch-bot bot commented Jul 31, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2644

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit cba6d5d with merge base 7b0671c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jul 31, 2025
@namgyu-youn namgyu-youn changed the title from "convert ao inference test from pytest to unittest" to "Convert model inference test from pytest to unittest" Jul 31, 2025
Comment on lines 33 to 35
for device in devices:
for batch_size in batch_sizes:
for is_training in training_modes:
Contributor


nit: can change these to common_utils.parametrize like:

ao/torchao/testing/utils.py

Lines 186 to 187 in 0935f66

@common_utils.parametrize("device", COMMON_DEVICES)
@common_utils.parametrize("dtype", COMMON_DTYPES)
and add an instantiation at the end:
common_utils.instantiate_parametrized_tests(TorchAOBasicTestCase)

Contributor Author


Love these decorators, thanks!

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 2, 2025 02:23
Comment on lines 13 to 15
_AVAILABLE_DEVICES = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
_BATCH_SIZES = [1, 4]
_TRAINING_MODES = [True, False]
Contributor


nit: please feel free to inline these

Comment on lines 17 to 19
# Define test parameters
COMMON_DEVICES = common_utils.parametrize("device", _AVAILABLE_DEVICES)
COMMON_DTYPES = common_utils.parametrize("dtype", [torch.float32, torch.bfloat16])
Contributor

@jerryzh168 jerryzh168 Aug 2, 2025


we don't need to define these here, can just put these in the function itself

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 2, 2025 03:03
Contributor

@jerryzh168 jerryzh168 left a comment


looks good, thanks!

from torchao._models.llama.model import Transformer

_AVAILABLE_DEVICES = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
from torchao.testing import common_utils
Contributor


the import is not correct, please run the tests locally first to make sure the test runs

Contributor Author


Sorry for my carelessness. I fixed it and tested it locally.

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 2, 2025 10:00
@namgyu-youn
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 7 jobs have failed, first few of them are: Run TorchAO Experimental Tests, Code Analysis with Ruff, PR Label Check, Run Regression Tests, Run 1xH100 Tests

Details for Dev Infra team (raised by workflow job)

@namgyu-youn
Contributor Author

@pytorchbot merge --ignore-current


pytorch-bot bot commented Aug 6, 2025

-i flag is only allowed for users with write permissions

@namgyu-youn
Contributor Author

@pytorchbot merge -f "ci appears to be stuck"


pytorch-bot bot commented Aug 6, 2025

You are not authorized to force merges to this repository. Please use the regular @pytorchmergebot merge command instead

@jerryzh168
Contributor

we will need to use the merge button to merge in torchao

@jerryzh168 jerryzh168 added the topic: not user facing Use this tag if you don't want this PR to show up in release notes label Aug 6, 2025
@namgyu-youn
Contributor Author

namgyu-youn commented Aug 7, 2025

we will need to use the merge button to merge in torchao

Like this, because there is no "merged" label, thanks. Also, could you check #2660? It replaces torch.norm with torch.linalg.vector_norm ahead of a future PyTorch update.
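For context, the replacement mentioned above is a one-for-one swap for vector inputs; a quick sketch (not code from #2660):

```python
import torch

x = torch.tensor([3.0, 4.0])

# torch.norm is slated for deprecation; torch.linalg.vector_norm is the
# recommended replacement and agrees with it for the default 2-norm.
old = torch.norm(x)
new = torch.linalg.vector_norm(x)

assert torch.allclose(old, new)
assert new.item() == 5.0  # 3-4-5 triangle
```

Note that `torch.linalg.vector_norm` always treats its input as a flattened vector (unless `dim` is given), whereas `torch.norm` on a 2-D input defaults to the Frobenius matrix norm, so matrix call sites may need `torch.linalg.matrix_norm` instead.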

@namgyu-youn
Contributor Author

@jerryzh168 gentle ping for the reminder :)

@jerryzh168 jerryzh168 merged commit 0347f35 into pytorch:main Aug 15, 2025
20 of 23 checks passed
@namgyu-youn namgyu-youn deleted the pytest_to_unittest_ao_inference branch August 15, 2025 19:14
liangel-02 pushed a commit that referenced this pull request Aug 25, 2025
* convert ao inference test from pytest to unittest

* refactor: `common_utils` for common parameters

* incline common params

* fix uncorrect library import