Use cleaned epochs for sensor-space decoding, time-frequency analysis, and covariance calculation #796
Conversation
We accidentally used the non-cleaned epochs for classification. This issue probably snuck in during the recent set of refactorings. @larsoner I've only checked the decoding scripts so far, because this is what I'm currently working with. I'm not sure if other places are affected as well. We should check the source-space scripts too. Could you kindly take over if you find the time?
    @@ -112,6 +111,9 @@ def run_epochs_decoding(
            [epochs[epochs_conds[0]], epochs[epochs_conds[1]]], verbose="error"
        )

        # Crop to the desired analysis interval.
        epochs.crop(cfg.decoding_epochs_tmin, cfg.decoding_epochs_tmax)
I don't see any problem with moving this, but... how did this fix the failures?!
You caught me red-handed there. I believe I'm working around a bug in MNE here: it wouldn't allow me to concatenate epochs where the baseline period was cropped away. So now I first concat, then crop.
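A minimal sketch of that concat-then-crop ordering (the file names here are hypothetical, for illustration only):

```python
import mne

# Hypothetical input files, for illustration only.
epochs_a = mne.read_epochs("sub-01_task-a_epo.fif")
epochs_b = mne.read_epochs("sub-01_task-b_epo.fif")

# Concatenate first: at the time of writing, concatenating epochs whose
# baseline period had already been cropped away raised an error in MNE.
epochs = mne.concatenate_epochs([epochs_a, epochs_b])

# Only then crop to the analysis interval of interest.
epochs.crop(tmin=0.0, tmax=0.5)
```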
I think that's an okay workaround. A comment along those lines would help.
I opened an issue: mne-tools/mne-python#12153
Unfortunately not; I've been trying to download that dataset for the past hour, but it's just hopeless with our firewall here.
@larsoner Btw, what I suspect is happening is that we simply end up with too few clean epochs, hence CV won't work. Maybe we can set the number of splits to 2 or even 1... that should fix the issue.
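A minimal sketch of that failure mode, assuming scikit-learn's `StratifiedKFold` underlies the CV (toy data, made-up numbers):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Toy stand-in: suppose only 3 clean epochs per condition survive rejection.
X = np.random.randn(6, 10)
y = np.array([0, 0, 0, 1, 1, 1])

# StratifiedKFold(n_splits=5) would raise a ValueError here, because each
# class has fewer members than the requested number of splits. Clamping
# the split count to the smallest class size avoids that (scikit-learn
# requires at least 2 splits, so 1 is not actually an option):
n_splits = max(2, min(5, int(np.min(np.bincount(y)))))
scores = cross_val_score(
    LogisticRegression(), X, y, cv=StratifiedKFold(n_splits=n_splits)
)
print(scores)
```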
Ahh, makes sense. So use …
I actually do like the fact that we're currently failing hard. We should adjust the test case for that specific dataset instead of auto-adjusting the number of splits.
@larsoner I just tried; I cannot reproduce this locally :(
@larsoner It seems the issue occurs during the …

    import mne
    epo = mne.read_epochs("sub-01_ses-cathodalpost_task-attentionalblink_proc-clean_epo.fif")
    epo

gives: …
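One quick way to test the too-few-epochs hypothesis on that file (a sketch; the condition names are whatever `event_id` happens to contain):

```python
import mne

epo = mne.read_epochs(
    "sub-01_ses-cathodalpost_task-attentionalblink_proc-clean_epo.fif"
)
print(len(epo))  # total epochs surviving cleaning/rejection
for cond in epo.event_id:
    # Too few epochs in any one condition would break stratified CV.
    print(cond, len(epo[cond]))
```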
@hoechenberger Did you check on a few datasets whether it boosts the decoding scores? Since decoding learns discriminative features, it's usually pretty good at filtering out artifacts.
@agramfort No, I did not systematically check this. Are you saying it may have been a conscious decision to use uncleaned epochs here? Because to me, it looked unintentional (i.e., a bug), especially since we also used the uncleaned epochs for covariance calculation.
Yes, I think it was intentional, although not well motivated or documented. Just my prior experience.
Maybe we should add a setting, …
The thing is that we always use cleaned epochs to create the Evokeds, and source reconstruction also operates on these clean Evokeds. So we'd introduce an inconsistency if decoding were to operate on uncleaned epochs. One could argue that if one doesn't want to decode cleaned epochs, one could just disable SSP/ICA and the PTP rejection step…
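For reference, a hypothetical config sketch of that "just disable cleaning" route (the option names are an assumption about the pipeline's config; check the docs before relying on them):

```python
# Hypothetical mne-bids-pipeline config sketch: with cleaning disabled,
# "cleaned" and "uncleaned" epochs coincide, so the inconsistency vanishes.
spatial_filter = None  # no SSP, no ICA
reject = None          # no PTP-based epoch rejection
```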
I would just check on a couple of datasets to see what it changes.
@agramfort That is to say, I did find the expected effect even when using the uncleaned data, but it's now larger and more obvious.
My sense is that things like SSP and ICA probably won't matter, but that peak-to-peak rejection could help, depending on the nature of the artifacts. That sounds in line with what you're seeing, @hoechenberger. In this case it makes sense for us to use the "clean" epochs for decoding. At some point we could make it an option which epochs to use (but maybe assume YAGNI for now?).
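As a concrete example of the peak-to-peak side of that, using MNE's reject-dict API (the thresholds and file name here are made up):

```python
import mne

# Hypothetical thresholds: PTP rejection drops epochs whose signal range
# exceeds these values on any channel of the given type.
reject = dict(eeg=150e-6, eog=250e-6)

epochs = mne.read_epochs("sub-01_epo.fif")  # hypothetical file
epochs.drop_bad(reject=reject)
print(epochs.drop_log)  # which epochs were dropped, and why
```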
Fine with me! Thanks for the details.