
Conversation


@dependabot dependabot bot commented on behalf of github Feb 15, 2021

Bumps pytorch-lightning from 1.0.3 to 1.1.8.
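
The change itself is a one-line pin bump in the project's pip requirements file. As a sketch only (the exact path and the == pin style are assumptions inferred from the branch name dependabot/pip/python/requirements/pytorch-lightning-1.1.8), the diff would look roughly like:

    # python/requirements (path and pin style are assumptions)
    - pytorch-lightning==1.0.3
    + pytorch-lightning==1.1.8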

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.1.8] - 2021-02-08

Fixed

  • Separate epoch validation from step validation (#5208)
  • Fixed toggle_optimizers not handling all optimizer parameters (#5775); a conceptual sketch of optimizer toggling follows the quoted release notes below

Contributors

@ananthsub, @rohitgr7

If we forgot someone because a commit email didn't match a GitHub account, let us know :]

Standard weekly patch release

[1.1.7] - 2021-02-03

Fixed

  • Fixed TensorBoardLogger not closing SummaryWriter on finalize (#5696)
  • Fixed filtering of PyTorch "unsqueeze" warning when using DP (#5622)
  • Fixed num_classes argument in F1 metric (#5663)
  • Fixed log_dir property (#5537)
  • Fixed a race condition in ModelCheckpoint when checking if a checkpoint file exists (#5144)
  • Remove unnecessary intermediate layers in Dockerfiles (#5697)
  • Fixed auto learning rate ordering (#5638)

Contributors

@awaelchli @guillochon @noamzilo @rohitgr7 @SkafteNicki @sumanthratna

If we forgot someone because a commit email didn't match a GitHub account, let us know :]

Standard weekly patch release

[1.1.6] - 2021-01-26

Changed

  • Increased TPU check timeout from 20s to 100s (#5598)
  • Ignored step param in Neptune logger's log_metric method (#5510)
  • Pass batch outputs to on_train_batch_end instead of epoch_end outputs (#4369)

Fixed

  • Fixed toggle_optimizer to reset requires_grad state (#5574)
  • Fixed FileNotFoundError for best checkpoint when using DDP with Hydra (#5629)
  • Fixed an error when logging a progress bar metric with a reserved name (#5620)
  • Fixed Metric's state_dict not being included when the metric is a child module (#5614)
  • Fixed Neptune logger creating multiple experiments when GPUs > 1 (#3256)
  • Fixed duplicate logs appearing in console when using the python logging module (#5509)
  • Fixed tensor printing in trainer.test() (#5138)
  • Fixed the dataloader not being used when hparams are present (#4559)

... (truncated)
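
Several of the fixes quoted above touch toggle_optimizer (#5574, #5775). As a conceptual sketch only, not Lightning's actual implementation: while one optimizer steps, the parameters owned by every other optimizer are frozen, and each parameter's previous requires_grad flag is remembered so untoggling can restore it. The helper names below are hypothetical.

    import torch

    def toggle_optimizer(active_opt, all_opts):
        """Freeze parameters owned only by non-active optimizers.

        Returns the saved requires_grad flags so untoggle_optimizer
        can restore them afterwards. Conceptual sketch only.
        """
        active = {p for g in active_opt.param_groups for p in g["params"]}
        saved = {}
        for opt in all_opts:
            if opt is active_opt:
                continue
            # Walk every param group, not just the first (cf. #5775).
            for g in opt.param_groups:
                for p in g["params"]:
                    if p not in active and p not in saved:
                        saved[p] = p.requires_grad
                        p.requires_grad = False
        return saved

    def untoggle_optimizer(saved):
        # Restore the original requires_grad state (cf. #5574).
        for p, flag in saved.items():
            p.requires_grad = flag

    # Usage: two optimizers over disjoint parameter sets.
    a, b = torch.nn.Linear(2, 2), torch.nn.Linear(2, 2)
    opt_a = torch.optim.SGD(a.parameters(), lr=0.1)
    opt_b = torch.optim.SGD(b.parameters(), lr=0.1)
    saved = toggle_optimizer(opt_a, [opt_a, opt_b])
    assert not any(p.requires_grad for p in b.parameters())
    untoggle_optimizer(saved)
    assert all(p.requires_grad for p in b.parameters())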

Changelog

Sourced from pytorch-lightning's changelog.

[1.1.8] - 2021-02-08

Fixed

  • Separate epoch validation from step validation (#5208)
  • Fixed toggle_optimizers not handling all optimizer parameters (#5775)

[1.1.7] - 2021-02-03

Fixed

  • Fixed TensorBoardLogger not closing SummaryWriter on finalize (#5696)
  • Fixed filtering of PyTorch "unsqueeze" warning when using DP (#5622)
  • Fixed num_classes argument in F1 metric (#5663)
  • Fixed log_dir property (#5537)
  • Fixed a race condition in ModelCheckpoint when checking if a checkpoint file exists (#5144)
  • Remove unnecessary intermediate layers in Dockerfiles (#5697)
  • Fixed auto learning rate ordering (#5638)

[1.1.6] - 2021-01-26

Changed

  • Increased TPU check timeout from 20s to 100s (#5598)
  • Ignored step param in Neptune logger's log_metric method (#5510)
  • Pass batch outputs to on_train_batch_end instead of epoch_end outputs (#4369)

Fixed

  • Fixed toggle_optimizer to reset requires_grad state (#5574)
  • Fixed FileNotFoundError for best checkpoint when using DDP with Hydra (#5629)
  • Fixed an error when logging a progress bar metric with a reserved name (#5620)
  • Fixed Metric's state_dict not being included when the metric is a child module (#5614)
  • Fixed Neptune logger creating multiple experiments when GPUs > 1 (#3256)
  • Fixed duplicate logs appearing in console when using the python logging module (#5509)
  • Fixed tensor printing in trainer.test() (#5138)
  • Fixed the dataloader not being used when hparams are present (#4559)

[1.1.5] - 2021-01-19

Fixed

  • Fixed a visual bug in the progress bar display initialization (#4579)
  • Fixed logging on_train_batch_end in a callback with multiple optimizers (#5521)
  • Fixed reinit_scheduler_properties with correct optimizer (#5519)
  • Fixed val_check_interval with fast_dev_run (#5540)

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
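
For example, to stay on the current version and stop further 1.1.x patch bumps for this dependency, a maintainer would comment on the PR:

    @dependabot ignore this minor version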

@dependabot dependabot bot added the dependencies label on Feb 15, 2021
@dependabot dependabot bot force-pushed the dependabot/pip/python/requirements/pytorch-lightning-1.1.8 branch from f29f2d1 to 1b58b1b on February 18, 2021 at 19:05

@dependabot dependabot bot commented on behalf of github Feb 20, 2021

Superseded by #20.

@dependabot dependabot bot closed this Feb 20, 2021
@dependabot dependabot bot deleted the dependabot/pip/python/requirements/pytorch-lightning-1.1.8 branch on February 20, 2021 at 08:02