Conversation

@dalonsoa (Collaborator) commented Nov 11, 2024

Description

To make everyone's life easier when running and developing MUSE. Currently, the tests also produce log files; this might or might not be desirable.

Fixes #498

Type of change

  • New feature (non-breaking change which adds functionality)
  • Optimization (non-breaking, back-end change that speeds up the code)
  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (whatever its nature)

Key checklist

  • All tests pass: $ python -m pytest
  • The documentation builds and looks OK: $ python -m sphinx -b html docs docs/build

Further checks

  • Code is commented, particularly in hard-to-understand areas
  • Tests added that prove fix is effective or that feature works

@dalonsoa dalonsoa requested a review from tsmbland November 11, 2024 14:12
@tsmbland (Collaborator)
Thanks for this!

A few comments/ideas (happy to open these as separate issues if you don't want to do this here):

  • Is it possible to have two separate files: one for INFO statements and one for WARNING statements? I especially want users to pay more attention to warnings, so I think having these in a separate file would help.
  • I would just call these muse_info.log and muse_warning.log by default (i.e. no date). Otherwise, if you run the model many times, you're going to get an accumulation of log files (maybe this is what you'll want in some circumstances, but I'd say not by default).
  • I think it probably makes more sense to put these files in the results folder by default (i.e. default_output_dir). This way it's clear which log files match up with which results files, which could otherwise get confusing if you're running multiple models from the same working directory
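The two-file setup suggested above maps naturally onto Python's standard logging module: one FileHandler capped below WARNING for the info file, and one thresholded at WARNING for the warning file. A minimal sketch (the file names come from this discussion, but the function and logger names are hypothetical, not MUSE's actual implementation):

```python
import logging
from pathlib import Path


def setup_file_logging(output_dir: Path) -> logging.Logger:
    """Attach two file handlers: muse_info.log (INFO only) and
    muse_warning.log (WARNING and above)."""
    output_dir.mkdir(parents=True, exist_ok=True)

    logger = logging.getLogger("muse_demo")  # hypothetical logger name
    logger.setLevel(logging.INFO)

    info_handler = logging.FileHandler(output_dir / "muse_info.log")
    info_handler.setLevel(logging.INFO)
    # Keep warnings out of the info file so the two files don't overlap
    info_handler.addFilter(lambda record: record.levelno < logging.WARNING)

    warning_handler = logging.FileHandler(output_dir / "muse_warning.log")
    warning_handler.setLevel(logging.WARNING)

    logger.addHandler(info_handler)
    logger.addHandler(warning_handler)
    return logger
```

Routing WARNING records to their own file is what makes them hard to miss: a user can glance at muse_warning.log (or notice it is empty) without scrolling through INFO chatter.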

@dalonsoa (Collaborator, Author)

No problem to do so now, and also to fix the tests; I had forgotten that we support Python 3.9.

So, just to confirm: we don't give the option to select the log file name(s), just muse_info and muse_warning. I prefer that (simpler and cleaner), but just to be sure.

@tsmbland (Collaborator)

> No problem to do so now, and also to fix the tests; I had forgotten that we support Python 3.9.
>
> So, just to confirm: we don't give the option to select the log file name(s), just muse_info and muse_warning. I prefer that (simpler and cleaner), but just to be sure.

Cool. Yeah, let's just keep it simple and hardcode the file names.

@dalonsoa (Collaborator, Author)

Done. I had to do some tweaking since tests were failing when running in parallel: they were all trying to access the same log file. I've disabled logging to file during tests, since I could not find a way of indicating that via fixtures (all my attempts were ignored).
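One common way to switch off file logging under pytest is to check for the PYTEST_CURRENT_TEST environment variable, which pytest sets for the duration of each test. A sketch under that assumption (the helper name is hypothetical; the PR may well use a different mechanism):

```python
import os


def file_logging_enabled() -> bool:
    """Return False while running under pytest.

    Pytest sets PYTEST_CURRENT_TEST for each test, so this avoids
    parallel test workers all opening the same log file. This is a
    sketch of the idea, not necessarily what the PR does.
    """
    return "PYTEST_CURRENT_TEST" not in os.environ
```

Unlike a fixture, an environment check works no matter how the logger is initialised, which may be why fixture-based attempts were ignored: logging setup can run at import time, before any fixture fires.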

@dalonsoa (Collaborator, Author)

I've no idea why building the docs fails. Certainly, running muse --model default works locally (at least on Windows), so I'm not sure why it might be failing there.

@tsmbland (Collaborator)

> I've no idea why building the docs fails. Certainly, running muse --model default works locally (at least on Windows), so I'm not sure why it might be failing there.

It's because the results folder doesn't exist yet when it tries to create the log file (you'll already have a results folder locally from previous runs).
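The failure mode described here is that logging.FileHandler raises FileNotFoundError if the parent directory is missing, which only bites on a fresh checkout (e.g. CI) where no results folder exists yet. The fix amounts to creating the directory first; a minimal sketch (function name hypothetical):

```python
import logging
from pathlib import Path


def make_log_handler(output_dir: Path, name: str) -> logging.FileHandler:
    """Open a log file inside output_dir, creating the folder if needed.

    Without the mkdir, FileHandler fails on a fresh checkout where
    the results folder has never been created.
    """
    output_dir.mkdir(parents=True, exist_ok=True)
    return logging.FileHandler(output_dir / name)
```

`mkdir(parents=True, exist_ok=True)` is idempotent, so it is safe whether or not previous runs have already created the folder.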

@dalonsoa (Collaborator, Author)

@tsmbland, all good now! Many thanks for the tip.

@tsmbland tsmbland left a comment

Great, all looks good! Thanks very much

@tsmbland tsmbland merged commit e7ad10e into main Nov 12, 2024
14 checks passed
@tsmbland tsmbland deleted the log_file branch November 12, 2024 10:37


Successfully merging this pull request may close these issues.

Output warnings to a file
