
Conversation

@orionarcher (Collaborator) commented Apr 14, 2025

Summary

Update changelog for new release.

Checklist

Work-in-progress pull requests are encouraged, but please enable the draft status on your PR.

Before a pull request can be merged, the following items must be checked:

  • Doc strings have been added in the Google docstring format (a minimal example follows the note below). Run ruff on your code.
  • Tests have been added for any new functionality or bug fixes.
  • All linting and tests pass.

Note that the CI system will run all of the above checks, but it is much more
efficient if you fix most errors before submitting the PR. It is highly
recommended that you use the pre-commit hook provided in the repository: simply run
pre-commit install, and the checks will run automatically before each commit is allowed.
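
For reference, here is a minimal example of the Google docstring format required by the checklist above. The function, its parameters, and its behavior are purely illustrative and are not code from this repository:

```python
import math


def scale_positions(positions: list[float], factor: float = 1.0) -> list[float]:
    """Scale a list of Cartesian positions by a constant factor.

    Args:
        positions: Coordinates to rescale.
        factor: Multiplicative scaling factor. Defaults to 1.0.

    Returns:
        The rescaled coordinates, in the same order as the input.

    Raises:
        ValueError: If ``factor`` is not finite.
    """
    if not math.isfinite(factor):
        raise ValueError("factor must be finite")
    return [factor * x for x in positions]
```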

@cla-bot added the cla-signed (Contributor license agreement signed) label on Apr 14, 2025
@orionarcher merged commit f957668 into main on Apr 14, 2025
15 of 52 checks passed
@orionarcher deleted the changelog_v0.2.0 branch on April 14, 2025 at 19:03
orionarcher added a commit that referenced this pull request Apr 14, 2025
* update changelog for v0.2.0

* minor modification for PR template

* formatting fixes

* formatting and typos

* remove contributors bc they aren't linked
abhijeetgangan added a commit that referenced this pull request Apr 15, 2025
* Add per atom energies and stresses for batched LJ

* update changelog for v0.2.0 (#147)

* update changelog for v0.2.0

* minor modification for PR template

* formatting fixes

* formatting and typos

* remove contributors bc they aren't linked

* Add tests

* simplify results

---------

Signed-off-by: Abhijeet Gangan <[email protected]>
Co-authored-by: Orion Cohen <[email protected]>
CompRhys pushed a commit that referenced this pull request May 2, 2025
* update changelog for v0.2.0

* minor modification for PR template

* formatting fixes

* formatting and typos

* remove contributors bc they aren't linked
janosh added a commit that referenced this pull request May 14, 2025
* adds comparative test for fire optimizer with ase

* rm a few comments

* updates CI for the test

* update changelog for v0.2.0 (#147)

* update changelog for v0.2.0

* minor modification for PR template

* formatting fixes

* formatting and typos

* remove contributors bc they aren't linked

* update test with a harder system

* fix torch.device not iterable error in test_torchsim_vs_ase_fire_mace.py

* fix: should compare the row_vector cell, clean: fix changelog typo

* clean: delete .coverage and newline for pytest command

* Introduce ASE-style `FIRE` optimizer (departing from velocity Verlet in orig FIRE paper) and improve coverage in `test_optimizers.py` (#174)

* feat(fire-optimizer-changes) Update fire_step in optimizers.py based on feature/neb-workflow

* reset optimizers.py to main version prior to adding updated changes

* (feat:fire-optimizer-changes) - Added ase_fire_step and renamed fire_step to vv_fire_step. Allowed for selection of md_flavor

* (feat:fire-optimizer-changes) - lint check on optimizers.py with ruff

* (feat:fire-optimizer-changes) - added test cases and example script in examples/scripts/7_Others/7.6_Compare_ASE_to_VV_FIRE.py

* (feat:fire-optimizer-changes) - updated FireState, UnitCellFireState, and FrechetCellFireState to have md_flavor to select vv or ase (see the md_flavor sketch after this commit log). ASE currently converges in about 1/3 the time. test cases for all three FIRE schemes added to test_optimizers.py with both md_flavors

* ruff auto format

* minor refactor of 7.6_Compare_ASE_to_VV_FIRE.py

* refactor optimizers.py: define MdFlavor type alias for SSoT on MD flavors

* new optimizer tests: FIRE and UnitCellFIRE initialization with dictionary states, md_flavor validation, non-positive volume warnings

brings optimizers.py test coverage up to 96%

* cleanup test_optimizers.py: parameterize tests for FIRE and UnitCellFIRE initialization and batch consistency checks

maintains same 96% coverage

* refactor optimizers.py: consolidate vv_fire_step logic into a single _vv_fire_step function specialized via functools.partial for different unit cell optimizations (unit/frechet/bare fire = no cell relax); a rough sketch of this pattern follows the commit log below

- more concise and maintainable code

* same as prev commit but for _ase_fire_step instead of _vv_fire_step

* (feat:fire-optimizer-changes) - added references to ASE implementation of FIRE and a link to the original FIRE paper.

* (feat:fire-optimizer-changes) switched md_flavor type from str to MdFlavor and set default to ase_fire_step

* pytest.mark.xfail frechet_cell_fire with ase_fire flavor, reason: shows asymmetry in batched mode, batch 0 stalls

* rename maxstep to max_step for consistent snake_case

fix RuntimeError: a leaf Variable that requires grad is being used in an in-place operation, raised by the position/cell update `state.positions += dr_atom` (a minimal reproduction and fix are sketched after this commit log)

* unskip frechet_cell_fire in test_optimizer_batch_consistency, can no longer repro error locally

* code cleanup

* bump setup action to v6, more descriptive CI test names

* pin to fairchem_core-1.10.0 in CI

* explain differences between vv_fire and ase_fire and link references in fire|unit_cell_fire|frechet_cell_fire doc strings

* merge test_torchsim_frechet_cell_fire_vs_ase_mace.py with comparative ASE vs torch-sim test for Frechet Cell FIRE optimizer into test_optimizers.py

- move `ase_mace_mpa` and `torchsim_mace_mpa` fixtures into `conftest.py` for wider reuse

* redirect MACE_CHECKPOINT_URL to mace_agnesi_small for faster tests

* on 2nd thought, keep test_torchsim_frechet_cell_fire_vs_ase_mace in a separate file (thanks @CompRhys)

* define MaceUrls StrEnum to avoid breaking tests when "small" checkpoints get redirected in mace-torch

---------

Co-authored-by: Orion Cohen <[email protected]>
Co-authored-by: Janosh Riebesell <[email protected]>
Co-authored-by: Rhys Goodall <[email protected]>
Co-authored-by: Myles Stapelberg <[email protected]>
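
The md_flavor selection and the MdFlavor type alias mentioned in the squashed commits above can be illustrated with a small sketch. This is an assumption-laden illustration only: the actual function signatures, state types, and default flavor in torch-sim's optimizers.py may differ.

```python
from typing import Callable, Literal, get_args

# Assumed shape of the MdFlavor alias described in the commit log above;
# the real alias in optimizers.py may use different member names.
MdFlavor = Literal["vv_fire", "ase_fire"]


def _vv_fire_step(state: dict) -> dict:
    """Stand-in for the velocity-Verlet-style FIRE step."""
    return state


def _ase_fire_step(state: dict) -> dict:
    """Stand-in for the ASE-style FIRE step."""
    return state


def fire_step(state: dict, md_flavor: MdFlavor = "ase_fire") -> dict:
    """Dispatch to the requested FIRE flavor after validating md_flavor."""
    steps: dict[str, Callable[[dict], dict]] = {
        "vv_fire": _vv_fire_step,
        "ase_fire": _ase_fire_step,
    }
    if md_flavor not in steps:
        raise ValueError(f"md_flavor must be one of {get_args(MdFlavor)}")
    return steps[md_flavor](state)
```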
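
The functools.partial consolidation described in the commit log is a standard pattern. A rough sketch under assumed names (the generic step and the _*_update hooks below are hypothetical stand-ins, not the actual torch-sim code) might look like:

```python
from functools import partial
from typing import Callable, Optional


def _generic_vv_fire_step(
    state: dict, *, cell_update: Optional[Callable[[dict], dict]] = None
) -> dict:
    """One FIRE step; the cell handling is injected by the caller."""
    # ... position/velocity update shared by all variants would go here ...
    if cell_update is not None:
        state = cell_update(state)
    return state


def _unit_cell_update(state: dict) -> dict:
    """Hypothetical unit-cell relaxation hook."""
    return state


def _frechet_cell_update(state: dict) -> dict:
    """Hypothetical Frechet-cell relaxation hook."""
    return state


# Specialize the shared step once per optimizer, as the refactor describes.
fire_step = partial(_generic_vv_fire_step, cell_update=None)  # bare FIRE: no cell relax
unit_cell_fire_step = partial(_generic_vv_fire_step, cell_update=_unit_cell_update)
frechet_cell_fire_step = partial(_generic_vv_fire_step, cell_update=_frechet_cell_update)
```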
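
The RuntimeError quoted above ("a leaf Variable that requires grad is being used in an in-place operation") is generic PyTorch behavior rather than anything torch-sim specific. A minimal reproduction and one common way to avoid it are sketched below; the tensor names are illustrative stand-ins for the optimizer state, and this is not necessarily the exact fix used in the PR.

```python
import torch

positions = torch.zeros(4, 3, requires_grad=True)  # leaf tensor tracked by autograd
dr_atom = torch.full((4, 3), 0.01)

# This raises: RuntimeError: a leaf Variable that requires grad
# is being used in an in-place operation.
try:
    positions += dr_atom
except RuntimeError as err:
    print(err)

# One common remedy: perform the update outside the autograd graph,
# since the position update itself should not be differentiated through.
with torch.no_grad():
    positions += dr_atom
```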