
Conversation

@tsmbland (Collaborator) commented Nov 4, 2024

Description

The purpose of this PR is to allow different sectors to have different levels of timeslice granularity. This could be useful if you only require a sector to meet demand on a monthly or daily basis rather than in every timeslice. For example, oil extraction doesn't have to meet oil demand at the hourly level, as oil can be stored, but it does have to balance out demand in the long term. On the other hand, electricity supply has to match demand at a much finer level. It's always been possible to do this (at least in theory), although the previous implementation was clunky for both users and developers, and was removed in #519 when the timeslices module had a big refactor. I think the implementation here is much cleaner and more user-friendly, and more in line with the plans for MUSE2 (where each commodity will be given a timeslice level).

Users can now specify a timeslice level for each sector using the timeslice_level argument in the settings file. This is the level of timeslice granularity over which commodity flows out of the sector are balanced with demand. If no timeslice level is specified for a sector, it defaults to the finest level in the global timeslice scheme (which is equivalent to how things were before). If a sector uses a technodata_timeslices file to specify utilization factors/minimum service factors at the timeslice level, this file must match the timeslice level of the sector.
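A minimal settings-file sketch of the new argument (sector names and exact table paths here are illustrative, not taken from a real model):

```toml
[sectors.gas]
# Balance commodity flows out of this sector at the daily level
timeslice_level = "day"

[sectors.power]
# No timeslice_level given: defaults to the finest level in the
# global timeslice scheme (the pre-existing behaviour)
```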

The timeslice_level argument gets propagated to all the subsectors and agents that the sector owns. On each iteration of the MCA, each sector is passed the market object (commodity supply, consumption and prices) compressed to the appropriate timeslice level. For example, if using the "day" timeslice level, supply/consumption data gets summed over every hour in each day, and prices data gets averaged (weighted according to timeslice length). The optimisation then proceeds as before, with the timeslice_level attribute of each subsector/agent ensuring that any timeslice operations (broadcast_timeslice and distribute_timeslice) are performed at the appropriate level. If any objects have mismatching timeslicing schemes, an error is raised. Once the optimisation is complete for a sector, consumption/supply/costs data from the sector gets converted back to the finest timeslice level and passed back to the market. Consumption/supply data gets distributed evenly over the expanded timeslice level(s) according to timeslice length, whereas costs data gets broadcast. This then loops over each sector, with each sector aiming to balance outgoing supply with incoming demand at the appropriate timeslice level. I think this could potentially lead to some more realistic scenarios compared to what's currently being done.
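The compress/expand round trip described above can be sketched in plain Python (this is an illustration of the semantics, not the MUSE implementation, which operates on xarray objects; all names and values are made up):

```python
def compress_supply(hourly_supply):
    """Extensive quantities (supply/consumption) are summed over the coarser slice."""
    return sum(hourly_supply)


def compress_prices(hourly_prices, hourly_lengths):
    """Intensive quantities (prices) are averaged, weighted by timeslice length."""
    total = sum(hourly_lengths)
    return sum(p * length for p, length in zip(hourly_prices, hourly_lengths)) / total


def expand_supply(daily_supply, hourly_lengths):
    """Going back to the finest level, supply is distributed by timeslice length."""
    total = sum(hourly_lengths)
    return [daily_supply * length / total for length in hourly_lengths]


supply = [2.0, 3.0, 5.0]    # three hourly slices within one day
prices = [10.0, 20.0, 30.0]
lengths = [1.0, 1.0, 2.0]   # relative timeslice lengths

daily_supply = compress_supply(supply)           # 10.0
daily_price = compress_prices(prices, lengths)   # (10 + 20 + 60) / 4 = 22.5
back = expand_supply(daily_supply, lengths)      # [2.5, 2.5, 5.0]
```

Note that the round trip conserves the daily total but flattens any within-day variation, which is exactly the intended behaviour for a storable commodity.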

An added benefit is that the code should run faster for sectors with a coarser timeslice level, as decision variables no longer have to be optimised for every timeslice. This may, in some circumstances, also be a valid solution to fix the memory errors that can be run into with larger models (#389), as this is caused by the constraints matrix which is proportional in size to the number of timeslices squared. Obviously changing the timeslice level will also change the results, so this should only be done if it's reasonable for the model in question.

Changes in detail

Main files to look at:

  • toml.rst: Documentation
  • sector.py: New functions convert_to_sector_timeslicing and convert_to_global_timeslicing convert the market object between sector and global timeslicing schemes
  • timeslices.py: New functions compress_timeslice, expand_timeslice, sort_timeslice and get_level. Adjust broadcast_timeslice and distribute_timeslice to take a timeslice level argument. Move timeslice_max over from investments.py.
  • conftest.py: Applying the xarray patch so it's used in the tests. I thought I did that in Simplify the use of timeslices #519, but it must have got lost somewhere along the way
  • test_timeslices.py: Adding tests for the new/existing functions in timeslices.py

timeslices.py is where most of the new code is. There are now four functions for changing the timeslicing of arrays (replacing the single convert_timeslice function that was there before):

  • broadcast_timeslice and distribute_timeslice are designed to timeslice non-timesliced arrays via two different methods
  • expand_timeslice and compress_timeslice are designed to change the number of timeslice levels in already timesliced arrays.
  • I've also added a sort_timeslice function to ensure that different arrays have timeslices in the same order. Ultimately it doesn't really matter what this order is, as long as it's consistent at each level of timeslice granularity. The simplest thing was for objects at the finest timeslice level to adopt the order specified in the settings file (actually this is important since the presets files use integers to refer to timeslices, according to this order), and objects with coarser timeslicing to sort timeslices in alphabetical order.
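The difference between the two "timeslice a non-timesliced array" methods can be illustrated with plain Python (an illustrative contrast only, not the MUSE API, which works with xarray and a timeslice coordinate):

```python
def broadcast(value, n_slices):
    """Repeat the same value in every timeslice (e.g. a cost per unit)."""
    return [value] * n_slices


def distribute(value, lengths):
    """Split the value across timeslices in proportion to their length
    (e.g. an annual quantity split over the year)."""
    total = sum(lengths)
    return [value * length / total for length in lengths]


lengths = [1.0, 3.0]
broadcast(42.0, len(lengths))   # [42.0, 42.0]
distribute(42.0, lengths)       # [10.5, 31.5]
```

Broadcasting preserves the per-unit value in each slice, whereas distributing preserves the total across slices, which is why costs are broadcast but supply/consumption are distributed when converting back to the finest level.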

Otherwise, most of the remaining changes relate to passing the timeslice_level string around to different functions and classes.

Case study

I've given this a go with the default model, changing the timeslice level of the gas sector. This is what gas supply ends up looking like for two different scenarios: "hour" level, where gas supply is optimised to meet demand at the hourly level, and "day" level, where gas supply only has to meet demand at the daily level.

[figure: supply]

Average supply is the same across the day (black dashed line), but the first scenario has peaks and troughs representing daily variation in demand.

As a result, the capacity looks quite different. In the first scenario, the sector has to invest in more capacity to meet the peak hourly demand:

[figures: capacity_hour, capacity_day]

(I've noticed that the gas price varies by a factor of 6 in the two scenarios. I think this is a bug with the way that commodity prices are calculated, not something I've done wrong here, so I'll look into it separately)

Notes

I've chosen not to have an "annual" timeslice level (i.e. balance demand/supply over the full year). In theory this is possible, but could cause havoc in the code as it would probably involve passing timesliced and non-timesliced objects through the same functions. At least as things stand there's some consistency over which objects have a "timeslice" dimension and which don't (even though the number of timeslice levels may vary). My advice for anyone wanting an "annual" level would be to add a dummy level (called "annual" or similar) to the timeslicing scheme with the same coordinate for all timeslices, then set this as the timeslice level for the sector.
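The dummy-level workaround might look something like this in the settings file (a sketch only; the exact table layout should follow the existing timeslices format of your model, and the level/slice names here are invented):

```toml
[timeslices]
level_names = ["annual", "season", "daynight"]

# Every timeslice sits under the same single "annual" coordinate, so a
# sector given timeslice_level = "annual" balances over the whole year.
[timeslices.annual.winter]
day = 100
night = 80

[timeslices.annual.summer]
day = 95
night = 85
```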

There's one small remaining issue, which is that the check_demand_fulfillment function still checks that demand is fulfilled for each individual timeslice, so it may raise a warning even if supply/demand is balanced at the appropriate timeslice level. I'll fix this in another PR. It probably makes sense to move this check to each individual sector, and also mandate that no end-use commodity be output by more than one sector (which should be avoided anyway). Either that, or this function just checks that all commodities are balanced across the year as a whole. EDIT: Now done; I've gone for the latter approach.

Type of change

  • New feature (non-breaking change which adds functionality)
  • Optimization (non-breaking, back-end change that speeds up the code)
  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (whatever its nature)

Key checklist

  • All tests pass: $ python -m pytest
  • The documentation builds and looks OK: $ python -m sphinx -b html docs docs/build

Further checks

  • Code is commented, particularly in hard-to-understand areas
  • Tests added that prove fix is effective or that feature works

@tsmbland tsmbland changed the title Timeslice level Configurable timeslice level for sectors Nov 18, 2024
@tsmbland tsmbland marked this pull request as ready for review November 18, 2024 20:51
@tsmbland tsmbland requested a review from dalonsoa November 18, 2024 20:53
This was referenced Nov 20, 2024
@dalonsoa (Collaborator)

I believe I've found similar issues in the past. In principle, you can use the X | Y annotation style even in Python 3.9, as long as you include from __future__ import annotations at the top of the relevant files. We do that in, for example, PyCSVY, where we support Python 3.9 and use X | Y for annotations.

@dalonsoa (Collaborator)

What you cannot do is combine the __future__ import with old-style annotations. If you have the import, you must use the new style. For example, in costs.py we don't have that import statement and you can perfectly well use Optional[str], but as you have the future import in constraints.py, that is not allowed there. I think. That has always been very confusing.

@dalonsoa (Collaborator) left a comment

I've left a collection of small comments here and there, but I don't think any of them is a blocker, so I'm approving.

This looks really neat and clear. Great work!

Additional methods can be registered with
:py:func:`muse.production.register_production`

*technodata*
Collaborator:

Nothing to do with the PR, but I've always found the use of cursive headers in the documentation pretty useless: they do not highlight that much the text and you cannot directly link the section from somewhere else. I'd suggest at some point to revamp the docs and use, eg. level 5 headers or something like that. See the Sphinx docs for that

Collaborator Author:

I wish we were using markdown as it's so much nicer to work with, but I guess it's too late for that

Comment on lines +127 to +132
if not get_level(self.technologies) == self.timeslice_level:
raise ValueError(
f"Technodata for {self.name} sector does not match "
"the specified timeslice level for that sector "
f"({self.timeslice_level})"
)
Collaborator:

I agree with your implementation because it is explicit, but just in case: have you considered pulling the timeslice_level directly from the technodata, rather than opening the possibility (which you check for here) of having technodata timeslice information inconsistent with the declared timeslice level?

Collaborator Author:

This would be possible for sectors where timeslice-level utilization factor and/or minimum service factor are specified. However, for sectors where this isn't specified (i.e. using a single UF/MSF for all timeslices), there's no timeslice information in the technodata so nothing to pull from, and these sectors can take any timeslice level.

In the end, it's easier just to be explicit about it for all sectors.

Comment on lines +138 to +140
broadcasted = broadcast_timeslice(data, ts=ts)
timeslice_fractions = ts / broadcast_timeslice(ts.sum(), ts=ts)
return broadcasted * timeslice_fractions
Collaborator:

Just to make sure I get this right:

  • First, we broadcast the input array over the target timeslices (the same value in all timeslices, if it had no timeslice info to start with)
  • Then, we multiply by the timeslice fractions so that the data is proportional to the timeslice length.
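A numeric walk-through of that two-step reading (illustrative values, in plain Python rather than xarray):

```python
# Step 1: broadcast the scalar to every timeslice; step 2: scale each
# entry by its timeslice fraction so the entries sum back to the input.
ts_lengths = [2.0, 6.0]   # hypothetical timeslice lengths
data = 100.0

broadcasted = [data] * len(ts_lengths)                 # [100.0, 100.0]
fractions = [l / sum(ts_lengths) for l in ts_lengths]  # [0.25, 0.75]
distributed = [b * f for b, f in zip(broadcasted, fractions)]

assert distributed == [25.0, 75.0]
assert sum(distributed) == data   # the total is conserved
```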

Collaborator Author:

Correct

@tsmbland tsmbland enabled auto-merge (squash) November 26, 2024 14:03
@tsmbland tsmbland merged commit dd12ee8 into v1.3 Nov 26, 2024
14 checks passed
@tsmbland tsmbland deleted the timeslice_level branch November 26, 2024 14:42

KOSHIMP commented Jan 21, 2025

Dear Tom (@tsmbland),

I tried to run my model with the new version of MUSE (version 1.3.0 and 1.3.1) but I faced a convergence issue. I am using 8 timeslices in the Power sector with the hourly timeslice_level. The other sectors in my model don't have timeslice files.

The Error message says: "ValueError: Broadcasting along the 'timeslice' dimension is required, but automatic broadcasting is disabled. Please handle it explicitly using broadcast_timeslice or distribute_timeslice (see muse.timeslices)."

Can you kindly advise how I can enable the "automatic broadcasting"?

Best regards,

[screenshot: error traceback]

@tsmbland (Collaborator Author)

Hi @KOSHIMP,

The error is being raised whilst generating the emission_costs output. Unfortunately, there are known issues with this output (see #421), so it cannot be used right now. The error message you're getting is different, but you should still avoid using this output until #421 is fixed.

If you try again without the emission_costs output (remove this section from your settings file), does the model now run?

Best,
Tom


KOSHIMP commented Jan 23, 2025

Dear @tsmbland

Thank you for your response,

After removing the emission_costs output and moving the rest of the outputs to their respective sectors (#548), the model ran successfully, but took a very long time compared to MUSE v1.2.3 (from a few minutes to almost 4 hours). I also noticed some differences in the results, and in the warning file I noticed that the maximum number of iterations was reached for each time step.

I'm still trying to figure out whether these convergence issues may be caused by my model...

Best

[screenshot: warning log]
