fix scale_shift_factor being on cpu for wan and ltx #12347
Conversation
@vladmandic Do you have a snippet to reproduce the error?
I don't have an easy reproduction, as it happens relatively randomly when offloading is enabled and on low-memory systems.
Update: the same issue occurs for some of my users with LTX-Video, so I've extended the PR with the same fix in that pipeline.
Gentle nudge: is there any particular reason why this PR hasn't been merged?
Hi @vladmandic, sorry for the delay. Since the introduction of hooks for offloading and casting, we're trying to avoid direct assignment of device/dtype in the models in case of unexpected behaviour, which is why I asked for a repro to check whether something else is going on. Could you share the type of offloading you're using? Is it group offloading?
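For reference, here is a minimal sketch of what group offloading looks like in diffusers; the `apply_group_offloading` call, the `WanPipeline` loading, and the model id are taken from current diffusers documentation and may differ between versions, so treat this as illustrative rather than exact.

```python
# Illustrative sketch: keep the Wan transformer weights on CPU and move
# blocks to CUDA only while they execute. Exact argument names and the
# model id are assumptions based on recent diffusers documentation.
import torch
from diffusers import WanPipeline
from diffusers.hooks import apply_group_offloading

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
)
apply_group_offloading(
    pipe.transformer,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=1,
)
```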
@DN6 I understand the general desire; however, if you don't want to merge this PR, then please propose an alternative, as right now my users are blocked.
@bot /style |
Style bot fixed some files and pushed the changes. |
@vladmandic Parameters in the module are subject to offloading hooks (see `diffusers/src/diffusers/hooks/group_offloading.py`, lines 628 to 629 at commit 941ac9c).
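For readers unfamiliar with the mechanism: conceptually, an offloading hook is a forward pre-hook that moves a block's parameters and buffers to the execution device right before the block runs. The toy hook below only illustrates that idea and is not the actual diffusers implementation.

```python
# Toy illustration (not the real diffusers hook): move a block's parameters
# and buffers, e.g. scale_shift_table, to the execution device right before
# forward runs. With such a hook attached, direct .to() casts inside
# forward are normally unnecessary.
import torch
import torch.nn as nn

def onload_pre_hook(module: nn.Module, args):
    device = torch.device("cuda")
    for param in module.parameters(recurse=False):
        if param.device != device:
            param.data = param.data.to(device, non_blocking=True)
    for name, buf in module.named_buffers(recurse=False):
        if buf.device != device:
            module._buffers[name] = buf.to(device, non_blocking=True)
    return args

# Hypothetical usage:
# block.register_forward_pre_hook(onload_pre_hook)
```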
The direct casts in PixArt and SANA were added before offloading hooks were introduced, and no other instances use this type of casting (it shouldn't be necessary). Based on your description, this seems to be an issue with custom offloading logic applied in non-CUDA environments. It really feels like an edge case to me, and the issue doesn't seem related to diffusers code. However, I've run the offloading tests on the models and they're passing, so I'm okay with merging this. FYI, there is a chance we remove direct casts from the models in the future if we see any issues; I'll give you a heads-up if that's the case to avoid breaking anything downstream.
Thanks @DN6. The same custom offloading logic is applied everywhere in sdnext; it's not specific to rocm/zluda, it's just that this problem only occurs with those torch versions.
The Wan transformer block creates `scale_shift_table` on CPU and then adds it regardless of where the `temb` tensor actually resides, which causes a typical cpu-vs-cuda device mismatch.
cc @sayakpaul @yiyixuxu @a-r-r-o-w @DN6
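The fix itself amounts to casting `scale_shift_table` to `temb`'s device (and dtype) at the point of use rather than assuming both already live on the same device. Below is a simplified, self-contained sketch of that pattern; it is not the literal diff from this PR, and the toy shapes are assumptions.

```python
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    """Toy block showing the pattern: the table is created on CPU at init
    time, so it is cast to temb's device/dtype at the point of use instead
    of assuming both tensors are already on the same device."""

    def __init__(self, dim: int = 8):
        super().__init__()
        # Created on CPU, like the real scale_shift_table.
        self.scale_shift_table = nn.Parameter(torch.randn(1, 6, dim) / dim**0.5)

    def forward(self, temb: torch.Tensor) -> torch.Tensor:
        # The fix: move the table to temb's device/dtype before the addition
        # so it succeeds even when offloading left the parameter on CPU.
        table = self.scale_shift_table.to(device=temb.device, dtype=temb.dtype)
        shift, scale, gate, c_shift, c_scale, c_gate = (table + temb).chunk(6, dim=1)
        return shift  # placeholder; the real block uses all six chunks

# Works regardless of where temb lives, e.g.:
# out = TinyBlock()(torch.randn(2, 6, 8, device="cuda"))
```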