
Conversation

vicmarcag (Collaborator)

I have added two brand-new c-VEP datasets from Martínez-Cagigal (2025a,b), totaling 32 subjects with 13 conditions each:

  • Martínez-Cagigal (2025), Dataset: Non-binary m-sequences for more comfortable brain–computer interfaces based on c-VEPs, DOI: https://doi.org/10.71569/025s-eq10
  • Martínez-Cagigal (2025), Dataset: Influence of spatial frequency in visual stimuli for cVEP-based BCIs: evaluation of performance and user experience, DOI: https://doi.org/10.71569/7c67-v596

I have also updated the summary_cvep.csv file, as well as pyproject.toml, to add a medusa-kernel dependency, which is needed to load the original signals and convert them to MNE format.

@bruAristimunha (Collaborator)

Hey @vicmarcag,

It looks like the conversion is done only once using the Medusa kernel, and there seem to be dependency conflicts between Medusa and moabb.

I see two solutions to the problem:

  1. Run your code once, get the converted dataset, and update the code and the server so that they no longer need the Medusa kernel.

  2. Solve the dependency issues on both sides, moabb and Medusa, so that both libraries can depend on each other. Possibly create a test and make medusa-kernel an optional dependency, like we do with braindecode (see the sketch just below).
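
A minimal sketch of the braindecode-style pattern for option 2, using pytest's importorskip so the test is skipped when the optional dependency is absent (the module path and dataset class name below are placeholders, not the final API):

# test_martinezcagigal.py -- skipped entirely when medusa-kernel is missing.
import pytest

# The PyPI package is medusa-kernel; its import name is assumed to be "medusa".
pytest.importorskip("medusa")


def test_dataset_loads():
    # Hypothetical import path and class name, for illustration only.
    from moabb.datasets import MartinezCagigal2025NonBinary

    dataset = MartinezCagigal2025NonBinary()
    data = dataset.get_data(subjects=[1])
    assert 1 in data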

What do you prefer, @vicmarcag?

@bruAristimunha requested a review from sebVelut (July 1, 2025, 13:44)
@bruAristimunha (Collaborator)

@sebVelut, can you review? Two very big c-VEP datasets for moabb :)

@vicmarcag (Collaborator, Author)

> Hey @vicmarcag,
>
> It looks like the conversion is done only once using the Medusa kernel, and there seem to be dependency conflicts between Medusa and moabb.
>
> I see two solutions to the problem:
>
>   1. Run your code once, get the converted dataset, and update the code and the server so that they no longer need the Medusa kernel.
>   2. Solve the dependency issues on both sides, moabb and Medusa, so that both libraries can depend on each other. Possibly create a test and make medusa-kernel an optional dependency, like we do with braindecode.
>
> What do you prefer, @vicmarcag?

Hi,

The first option is not possible, so I'd go for the second. I have now specified a medusa-kernel>=1.3 dependency instead of >=1.4 so that it can be used with Python 3.8 and 3.9. Should I create another pull request, or is it possible to update this one in the same thread?
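
For the optional-dependency route, one common pattern is to guard the import inside the dataset module and raise an actionable error only when the data is actually loaded. A minimal sketch, assuming the medusa-kernel package is importable as medusa (the helper name is illustrative):

def _check_medusa_installed():
    """Raise a helpful error if the optional medusa-kernel package is missing."""
    try:
        import medusa  # noqa: F401  -- import name assumed to be "medusa"
    except ImportError as error:
        raise ImportError(
            "The Martinez-Cagigal c-VEP datasets need the optional dependency "
            "medusa-kernel (>=1.3). Install it with `pip install medusa-kernel`."
        ) from error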

@bruAristimunha (Collaborator)

OK, it looks like Medusa is not super heavy, and it is working fine now.

If I understand correctly, we need Medusa because some meta-information cannot be loaded with plain MNE, right?

In the future, do you think this new feature from MNE could be helpful?

mne-tools/mne-python#13228
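
For context, the Medusa recordings carry c-VEP metadata (stimulation codes, conditions, onsets) that a generic MNE reader would not pick up, which is why the conversion wraps the loaded signal in an mne.io.RawArray and keeps the event information as annotations. A rough sketch of the MNE side only; the arrays are assumed to have been extracted beforehand with medusa-kernel, whose API is not shown here:

import numpy as np
import mne


def to_mne_raw(eeg, sfreq, ch_names, onsets, labels):
    """Wrap an already-loaded EEG array into an MNE Raw with event annotations.

    eeg is (n_channels, n_samples), onsets are in seconds, labels are strings.
    """
    info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types="eeg")
    raw = mne.io.RawArray(eeg, info, verbose=False)
    # Store the c-VEP cycle onsets as annotations so that events can later be
    # recovered with mne.events_from_annotations.
    annotations = mne.Annotations(
        onset=onsets, duration=np.zeros(len(onsets)), description=labels
    )
    raw.set_annotations(annotations)
    return raw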

sessions_per_subject=len(CONDITIONS),
events={},
code="MartinezCagigal2023Checkercvep",
interval=(0, 1), # Don't use this, it depends on the condition

(Collaborator)

@vicmarcag, maybe we could have one dataset per condition, to simplify things for users. This would be quite easy with subclasses:

MartinezCagigal2023CBase  # Abstract class, defines all the logic
MartinezCagigal2023C1     # Subclass, only contains runs from condition 1
MartinezCagigal2023C2     # Subclass, only contains runs from condition 2
...

What do you think?
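
To make the suggestion concrete, a sketch along these lines could work (the class names, the condition numbering, and the _load_all_runs helper are illustrative, not the final API):

from moabb.datasets.base import BaseDataset


class MartinezCagigal2023CBase(BaseDataset):
    """Abstract base: download, Medusa-to-MNE conversion, shared metadata."""

    def __init__(self, condition, **kwargs):
        self.condition = condition
        super().__init__(**kwargs)

    def _get_single_subject_data(self, subject):
        # Load every run once, then keep only those of the chosen condition.
        # _load_all_runs is a placeholder for the shared loading logic and is
        # assumed to return {condition: {run_name: raw}}.
        all_runs = self._load_all_runs(subject)
        return {"0": all_runs[self.condition]}


class MartinezCagigal2023C1(MartinezCagigal2023CBase):
    """Runs from condition 1 only."""

    def __init__(self, **kwargs):
        super().__init__(condition=1, **kwargs)


class MartinezCagigal2023C2(MartinezCagigal2023CBase):
    """Runs from condition 2 only."""

    def __init__(self, **kwargs):
        super().__init__(condition=2, **kwargs)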

(Collaborator)

I support this idea; it would reduce the number of operations needed after loading the data.

super().__init__(
subjects=list(range(1, len(SUBJECTS) + 1)),
sessions_per_subject=len(CONDITIONS),
events={},

(Collaborator)

The events argument must be provided; otherwise, the paradigms and benchmarks will not work with this dataset.
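
For reference, the existing c-VEP datasets in moabb map the code amplitude levels to integer event ids, so something in this spirit is expected here (the exact labels depend on the stimulus codes; non-binary m-sequences would need one label per amplitude level):

# Illustrative mapping for a binary code; adapt the labels to the actual codes.
events = {"1.0": 2, "0.0": 1}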

@sebVelut (Collaborator) commented on Jul 2, 2025

> @sebVelut, can you review? Two very big c-VEP datasets for moabb :)

Yes! I am looking at it as soon as I can! 😄
