
Conversation

@chengzeyi (Contributor) commented Aug 12, 2024

What does this PR do?

Optimize guidance creation in the Flux pipeline by moving it outside the denoising loop and using torch.full() instead of torch.tensor().
By doing so, we reduce the number of unnecessary implicit CUDA synchronizations caused by creating a device tensor from a Python list.
I observe a small performance gain (1%-2%) from this fix.
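
To make the change concrete, here is a minimal sketch of the before/after pattern, not the actual pipeline code; guidance_scale, batch_size, num_steps, and device are hypothetical placeholder values, and the synchronization behaviour is as described above.

import torch

# Placeholder setup; falls back to CPU when CUDA is unavailable.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
guidance_scale, batch_size, num_steps = 3.5, 2, 4  # hypothetical values

# Before: a device tensor is built from a Python list on every step.
# torch.tensor(list, device=...) stages the data on the host and copies it
# to the device, which (per the description above) can trigger an implicit
# CUDA synchronization on each iteration.
for _ in range(num_steps):
    guidance = torch.tensor([guidance_scale], device=device, dtype=torch.float32)
    guidance = guidance.expand(batch_size)

# After: create the scalar tensor once, before the loop, with torch.full(),
# which writes the value directly on the target device; expand() returns a
# view, so no per-step allocation or host-to-device copy is needed.
guidance = torch.full([1], guidance_scale, device=device, dtype=torch.float32)
guidance = guidance.expand(batch_size)
for _ in range(num_steps):
    ...  # the same `guidance` tensor is reused in every denoising step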

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@chengzeyi chengzeyi changed the title [Flux] optimize guidance creation in flux pipeline by moving it outside the loop [Flux] Optimize guidance creation in flux pipeline by moving it outside the loop Aug 12, 2024
@a-r-r-o-w a-r-r-o-w requested a review from sayakpaul August 13, 2024 14:46
Comment on lines +680 to +685
# handle guidance
if self.transformer.config.guidance_embeds:
guidance = torch.full([1], guidance_scale, device=device, dtype=torch.float32)
guidance = guidance.expand(latents.shape[0])
else:
guidance = None
Member


I like this!

@sayakpaul (Member) left a comment


Nice, thank you!

@sayakpaul sayakpaul requested a review from DN6 August 13, 2024 14:56
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@Gothos Gothos mentioned this pull request Aug 14, 2024
@DN6 DN6 merged commit e649678 into huggingface:main Aug 16, 2024
sayakpaul added a commit that referenced this pull request Dec 23, 2024
[Flux] Optimize guidance creation in flux pipeline by moving it outside the loop (#9153)

* optimize guidance creation in flux pipeline by moving it outside the loop

* use torch.full instead of torch.tensor to create a tensor with a single value

---------

Co-authored-by: Sayak Paul <[email protected]>
