How do I freeze all the generator weights during training, while still training the discriminator?
I tried setting do_Dmain, do_Gmain, do_Gpl, do_Dr1 = True, False, False, True within accumulate_gradients() in loss.py, but this caused the generator to diverge.
https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/training/loss.py#L59
Apart from accumulate_gradients() in loss.py, what else do I need to change to prevent the generator from continuing to train?
These are the faces produced after resuming training from FFHQ for 40 epochs with the above flags set.
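For reference, here is a minimal sketch of freezing the generator at the parameter level while keeping the discriminator trainable, independent of the repository's phase machinery. The helper name and the optimizer settings are illustrative assumptions, not the project's actual API:

```python
import torch

def freeze_generator(G, D):
    """Illustrative sketch: freeze all generator weights and return an
    optimizer that only updates the discriminator."""
    # Disable gradients for every generator parameter (mapping + synthesis),
    # so no backward pass or optimizer step can modify G.
    for p in G.parameters():
        p.requires_grad_(False)

    # Keep the discriminator trainable.
    for p in D.parameters():
        p.requires_grad_(True)

    # Build an optimizer over the discriminator parameters only;
    # lr/betas here are placeholder values, not the repo's defaults.
    d_opt = torch.optim.Adam(D.parameters(), lr=0.002, betas=(0.0, 0.99))
    return d_opt
```

The idea is that disabling the generator's loss terms alone may not be sufficient if the training loop still constructs an optimizer over G's parameters; explicitly setting requires_grad to False and stepping only a discriminator optimizer guarantees G stays fixed.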