Conversation

Legend101Zz
Contributor

@Legend101Zz Legend101Zz commented Jul 22, 2025

Closes #1368

Based on the great discussions and instructions in the issue (thanks, they were really helpful!), I've started the implementation for a unified rate-handling mechanism.

So far, I've created the abstract RateMonitor base class and refactored PopulationRateMonitor to inherit from it. I've also added the new binned() method to PopulationRateMonitor to allow for re-binning of the recorded population rate.

EventMonitor is a bit more complex, though, since it handles generic events, not just spikes. I'll need to look more closely at how to pass the correct variable information to the template for that implementation.

Did:

  • Added the abstract RateMonitor class.
  • Refactored PopulationRateMonitor to inherit from RateMonitor.
  • Implemented the binned() method for PopulationRateMonitor.

ToDo:

  • Implement the binned() method for EventMonitor to calculate binned rates from event times.
  • Thoroughly test the new functionality.
  • Write unit tests for the new methods.
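The re-binning idea behind binned() can be sketched in plain numpy. This is an illustrative sketch only: the name rebin_rate and its signature are assumptions, not the actual Brian 2 API.

```python
import numpy as np

def rebin_rate(rate, dt, bin_size):
    """Average a per-time-step rate trace into coarser bins (sketch only)."""
    steps_per_bin = int(round(bin_size / dt))
    if abs(bin_size / dt - steps_per_bin) > 1e-6:
        raise ValueError("bin_size has to be a multiple of dt.")
    num_bins = len(rate) // steps_per_bin  # drop a trailing partial bin
    trimmed = np.asarray(rate)[: num_bins * steps_per_bin]
    return trimmed.reshape(num_bins, steps_per_bin).mean(axis=1)

# 1 ms time steps re-binned into 2 ms bins: adjacent pairs are averaged
print(rebin_rate([10.0, 20.0, 30.0, 50.0], dt=0.001, bin_size=0.002))  # [15. 40.]
```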

@Legend101Zz
Contributor Author

@mstimberg as initially anticipated, adding rate analysis directly to EventMonitor might be a bit more involved. Since EventMonitor supports general events (not just spikes), we’d need to pass in information about which variable should be used for rate calculation to the code generation templates.

That said, I might be overthinking it; perhaps we can restrict it to spikes only :)

@Legend101Zz
Contributor Author

@mstimberg bump on this :)

@mstimberg
Member

Hi @Legend101Zz. Sorry, I did not yet have time to look into the code in detail, but regarding the EventMonitor: it shouldn't actually change anything here with respect to SpikeMonitor. A few quick remarks:

  • the spikemonitor templates have their name for historical reasons only; they are actually implementing the monitor for arbitrary events (they get the name of an eventspace passed to the template, which stores the events that were triggered in the current time step). In the case of a SpikeMonitor, these events are spikes, but it does not make any difference if they are something else – the eventspace is always a list of neuron indices.
  • for the added binning/smoothing functions we are discussing here, the templates do not come into play, since the whole analysis happens on the Python side after the simulation – the monitors will still record everything in the same way as before.
  • For this analysis, it does not matter what the events are, we only need the spike/event trains that give us the times of the spikes/events per neuron. A SpikeMonitor has a spike_trains method because that is the term that is commonly used, but all it does is return self.event_trains()...
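The last point can be pictured with a minimal sketch. The delegation pattern follows the description above, but the data handling here is a simplified stand-in, not Brian 2's real implementation:

```python
class EventMonitor:
    def __init__(self, trains):
        # dict mapping neuron index -> list of event times (simplified stand-in)
        self._trains = trains

    def event_trains(self):
        return self._trains

class SpikeMonitor(EventMonitor):
    def spike_trains(self):
        # "spike_trains" is just the commonly used name; it delegates entirely
        return self.event_trains()

mon = SpikeMonitor({0: [0.001, 0.005], 1: [0.003]})
print(mon.spike_trains() == mon.event_trains())  # True
```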

@Legend101Zz
Contributor Author

Legend101Zz commented Sep 14, 2025

@mstimberg hope you are doing well :)

So I had a very peculiar bug in the code I wrote, and it took a while to catch, phew...

For brief context, here is the relevant part of the binned function for EventMonitor:

        if (bin_size / self.clock.dt) % 1 > 1e-6:
            raise ValueError("bin_size has to be a multiple of dt.")

        # Get the total duration and number of bins
        duration = self.clock.t
        num_bins = int(duration / bin_size)

I had a slip and wrote duration = self.clock.dt, which led to weird behaviour: I got just a single bin every time. That said, this now seems to be working well. Sharing some snippets, what do you think?
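For what it's worth, the effect of that slip is easy to reproduce standalone (plain Python, arbitrary numbers):

```python
dt = 0.0001      # 0.1 ms time step
t_total = 0.1    # 100 ms of simulated time
bin_size = 0.01  # 10 ms bins

num_bins_correct = int(t_total / bin_size)  # duration = self.clock.t
num_bins_buggy = int(dt / bin_size)         # duration = self.clock.dt (the slip)
# The slip makes the bin count independent of the run length
# (here it even drops to zero)
print(num_bins_correct, num_bins_buggy)  # 10 0
```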

[Three screenshots showing the binned rate output]

Next I'll add the tests and also check why the current tests are failing... :)

@Legend101Zz
Contributor Author

also @mstimberg, can you please guide me here: why do we get this error?

    AttributeError: 'Word' object has no attribute 're_match'

It seems unrelated to the rate monitoring implementation, but it is happening in this PR.

smoothed2 = r_mon.smooth_rate(window="flat", width=5.4 * defaultclock.dt)
assert_array_equal(smoothed, smoothed2)

# The window size should round up to 7.
Contributor Author

Needed to fix this test.

@mstimberg
Member

Regarding test failures: at least partly this is explained by a bug in pyparsing 3.2.4 (see pyparsing/pyparsing#618) – consider merging in master or cherry-picking its last commit (which simply changes the requirement to <3.2.4 as a workaround).

@Legend101Zz
Contributor Author

Legend101Zz commented Sep 16, 2025

Regarding test failures: at least partly this is explained by a bug in pyparsing 3.2.4 (see pyparsing/pyparsing#618) – consider merging in master or cherry-picking its last commit (which simply changes the requirement to <3.2.4 as a workaround).

@mstimberg what is strange is that I just made this change:

[Screenshot of the change]

and everything works now 😵‍💫

Maybe it's because the master branch was merged here... or something related to not having assert statements inside classes, except for tests...

But all testsuites pass now 🥳

@mstimberg
Member

Maybe it's because the master branch was merged here... or something related to not having assert statements inside classes, except for tests...

The PR tests actually run on the merged version, i.e. the workaround from master was used, that's why you don't see the failure any more.

@Legend101Zz
Contributor Author

Maybe it's because the master branch was merged here... or something related to not having assert statements inside classes, except for tests...

The PR tests actually run on the merged version, i.e. the workaround from master was used, that's why you don't see the failure any more.

Ahh, understood, thanks, that makes more sense. Also, can you please test and let me know if this is what you expected from the newly added smooth_rate (for SpikeMonitor), or whether we'd have to change things? For me, the challenge was dealing with np.convolve for the EventMonitor/SpikeMonitor case, since they return a binned 2D array (neurons × bins), so we had to convolve each neuron separately. That seemed right to me, but please let me know if it makes sense :)
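For reference, the per-neuron convolution described above can be sketched like this (names and normalization here are illustrative, not the exact PR code):

```python
import numpy as np

def smooth_binned(binned, window):
    """Convolve each neuron's row of a (neurons x bins) array separately."""
    window = np.asarray(window, dtype=float)
    window = window / window.sum()  # normalize so the rate scale is preserved
    smoothed = np.empty(binned.shape, dtype=float)
    for i in range(binned.shape[0]):  # np.convolve is 1-D only
        smoothed[i, :] = np.convolve(binned[i, :], window, mode="same")
    return smoothed

binned = np.array([[0.0, 4.0, 0.0, 0.0],
                   [2.0, 2.0, 2.0, 2.0]])
print(smooth_binned(binned, [1.0, 1.0, 1.0]))
```

np.apply_along_axis with the same np.convolve call is an equivalent loop-free spelling; note that mode="same" zero-pads the edges, which is why the constant row dips at its borders.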

Member

@mstimberg mstimberg left a comment

Thanks, this looks very good to me and appears to work correctly! I've left a few comments, mostly I think there are a few places where we can greatly simplify the code by being less generic and using numpy functions. The final missing piece would be the user documentation, but let's wait until we are sure about everything else before tackling this.



class PopulationRateMonitor(Group, CodeRunner):
class RateMonitor(CodeRunner, Group, Clock, ABC):
Member

A bit confused: why does RateMonitor have to inherit from Clock? (It has a clock, but it isn't a clock.)

Returns
-------
rate : `Quantity`
The smoothed firing rate(s) in Hz. For EventMonitor subclasses,
Member

I think this docstring is slightly incorrect: the smoothing function does not use any binning, the resulting array will always have the size of the original number of time steps.
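This is easy to confirm in isolation: with mode="same", np.convolve returns an array of the original length, so no binning is involved:

```python
import numpy as np

rate = np.random.rand(1000)        # one rate value per simulation time step
window = np.ones(7) / 7.0          # flat smoothing window of width 7
smoothed = np.convolve(rate, window, mode="same")
print(len(smoothed) == len(rate))  # True: the number of time steps is unchanged
```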

np.convolve(self.rate_, window * 1.0 / sum(window), mode="same"),
dim=hertz.dim,
)
num_bins = int(self.clock.t / bin_size)
Member

Suggested change
num_bins = int(self.clock.t / bin_size)
num_bins = int(round(self.clock.t / bin_size))

This is safer for numerical edge cases.
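A standalone illustration of the edge case being guarded against: floating-point division can land just below an integer, and plain int() truncates, silently losing a bin.

```python
t = 0.3          # example simulated time
bin_size = 0.1

print(t / bin_size)              # 2.9999999999999996, not 3.0
print(int(t / bin_size))         # 2 -- one bin silently lost
print(int(round(t / bin_size)))  # 3 -- the suggested fix
```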

np.arange(num_bins) * bin_size + bin_size / 2.0
) # as we want Bin centers (not edges)

t_indices = (self.t / self.clock.dt).astype(int)
Member

Suggested change
t_indices = (self.t / self.clock.dt).astype(int)
t_indices = np.round(self.t / self.clock.dt).astype(int)

Same here

Member

Actually, let's discuss this during our meeting, the issue is slightly more complicated than this.

) # as we want Bin centers (not edges)

t_indices = (self.t / self.clock.dt).astype(int)
bin_indices = (t_indices * self.clock.dt / bin_size).astype(int)
Member

Suggested change
bin_indices = (t_indices * self.clock.dt / bin_size).astype(int)
bin_indices = np.round(t_indices * self.clock.dt / bin_size).astype(int)

binned_values[neuron_idx, bin_idx] += 1

# Convert counts to rates (Hz)
binned_values = binned_values / float(bin_size)
Member

Something I didn't think about earlier: if we don't return spike counts but rates here (which probably makes sense given that we are inheriting from RateMonitor, and that this is the behaviour in the PopulationRateMonitor.binned method), then we should probably call this method binned_rate – this makes it nicely parallel with smooth_rate.

Comment on lines +426 to +447
        binned_values = np.zeros((num_neurons, num_bins))
        if self.record:
            # Get the event times and indices
            event_times = self.t[:]
            event_indices = (
                self.i[:] - neuron_offset
            )  # Adjust for subgroups as stated above

            bin_indices = (event_times / bin_size).astype(int)
            # Count how many events fell into each bin, per neuron.
            # Example result:
            # binned_values = [
            #     [2.0, 0.0, 1.0, 0.0, 0.0],  # Neuron 0: 2 in bin 0, 1 in bin 2
            #     [0.0, 1.0, 0.0, 1.0, 0.0],  # Neuron 1: 1 in bin 1, 1 in bin 3
            #     [0.0, 2.0, 0.0, 0.0, 1.0],  # Neuron 2: 2 in bin 1, 1 in bin 4
            # ]
            for event_idx, neuron_idx in enumerate(event_indices):
                if 0 <= neuron_idx < num_neurons:  # sanity check
                    bin_idx = bin_indices[event_idx]
                    if bin_idx < num_bins:  # handle the edge case at the end
                        binned_values[neuron_idx, bin_idx] += 1
Member

If we do not care about half-filled bins in the end of the array (and I think we should not), then I think all of this can be replaced by a call to numpy's bincount function (https://numpy.org/doc/stable/reference/generated/numpy.bincount.html) 😊
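A sketch of what the np.bincount version could look like, using the example data from the snippet above (variable names are taken from the snippet; flattening each (neuron, bin) pair into a single index lets one bincount call do all the counting):

```python
import numpy as np

num_neurons, num_bins = 3, 5
event_indices = np.array([0, 0, 0, 1, 1, 2, 2, 2])  # neuron index per event
bin_indices = np.array([0, 0, 2, 1, 3, 1, 1, 4])    # bin index per event

# Mask out events past the last full bin, then count in one call
keep = bin_indices < num_bins
flat = event_indices[keep] * num_bins + bin_indices[keep]
binned_values = np.bincount(flat, minlength=num_neurons * num_bins)
binned_values = binned_values.reshape(num_neurons, num_bins).astype(float)
print(binned_values)  # matches the example matrix in the code comment above
```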

@pytest.mark.codegen_independent
def test_population_rate_monitor_binning():
"""Test binning functionality for PopulationRateMonitor."""
# Create a group with regular spiking
Member

The group below has random instead of regular spiking. It would probably be better to use regular spiking, though (e.g. SpikeGeneratorGroup), in that way we could have a precise test instead of only assuring that the values are in a reasonable range.

times = np.arange(40, 61) * ms # Burst from 40-60ms
indices = np.zeros(21, dtype=int)

G = SpikeGeneratorGroup(1, indices, times)
Member

I think it would be important here to test with more than one neuron in the group.

num_neurons, num_bins = binned_array.shape
smoothed = np.zeros((num_neurons, num_bins))
for i in range(num_neurons):
    smoothed[i, :] = np.convolve(binned_array[i, :], window, mode="same")
Member

We previously depended on scipy which has a function convolve2d, but without it this is probably the best approach.

Successfully merging this pull request may close these issues.

Add monitor to estimate rates of each neuron in group