Decoding sensor space data with generalization across time and conditions#

This example runs the analysis described in [1]. It illustrates how one can fit a linear classifier to identify a discriminatory topography at a given time instant and subsequently assess whether this linear model can accurately predict all of the time samples of a second set of conditions.
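Conceptually, temporal generalization means fitting one classifier per training time sample and then evaluating each of those classifiers at every testing time sample, yielding a training-time × testing-time matrix of scores. The naive sketch below illustrates that idea with plain scikit-learn; the arrays X_train, X_test (shape (n_epochs, n_channels, n_times)) and labels y_train, y_test are hypothetical placeholders, and mne.decoding.GeneralizingEstimator implements the same logic more conveniently.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def naive_temporal_generalization(X_train, y_train, X_test, y_test):
    # X_* have shape (n_epochs, n_channels, n_times); y_* have shape (n_epochs,)
    n_train_times = X_train.shape[2]
    n_test_times = X_test.shape[2]
    scores = np.empty((n_train_times, n_test_times))
    for t_train in range(n_train_times):
        clf = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))
        clf.fit(X_train[:, :, t_train], y_train)  # learn a topography at one training time
        for t_test in range(n_test_times):
            y_score = clf.decision_function(X_test[:, :, t_test])  # apply it at another time
            scores[t_train, t_test] = roc_auc_score(y_test, y_score)
    return scores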

# Authors: Jean-Remi King <jeanremi.king@gmail.com>
#          Alexandre Gramfort <alexandre.gramfort@inria.fr>
#          Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD-3-Clause
# Copyright the MNE-Python contributors.
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator

print(__doc__)

# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
meg_path = data_path / "MEG" / "sample"
raw_fname = meg_path / "sample_audvis_filt-0-40_raw.fif"
events_fname = meg_path / "sample_audvis_filt-0-40_raw-eve.fif"
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude="bads")  # Pick MEG channels
raw.filter(1.0, 30.0, fir_design="firwin")  # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {
    "Auditory/Left": 1,
    "Auditory/Right": 2,
    "Visual/Left": 3,
    "Visual/Right": 4,
}
tmin = -0.050
tmax = 0.400
# decimate to make the example faster to run, but then use verbose='error' in
# the Epochs constructor to suppress warning about decimation causing aliasing
decim = 2
epochs = mne.Epochs(
    raw,
    events,
    event_id=event_id,
    tmin=tmin,
    tmax=tmax,
    proj=True,
    picks=picks,
    baseline=None,
    preload=True,
    reject=dict(mag=5e-12),
    decim=decim,
    verbose="error",
)
Opening raw data file /home/circleci/mne_data/MNE-sample-data/MEG/sample/sample_audvis_filt-0-40_raw.fif...
    Read a total of 4 projection items:
        PCA-v1 (1 x 102)  idle
        PCA-v2 (1 x 102)  idle
        PCA-v3 (1 x 102)  idle
        Average EEG reference (1 x 60)  idle
    Range : 6450 ... 48149 =     42.956 ...   320.665 secs
Ready.
Reading 0 ... 41699  =      0.000 ...   277.709 secs...
Filtering raw data in 1 contiguous segment
Setting up band-pass filter from 1 - 30 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal bandpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 1.00
- Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
- Upper passband edge: 30.00 Hz
- Upper transition bandwidth: 7.50 Hz (-6 dB cutoff frequency: 33.75 Hz)
- Filter length: 497 samples (3.310 s)

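The decimated epochs are what the decoder will loop over: epochs.get_data() returns an array of shape (n_epochs, n_channels, n_times), and the 35 time samples that remain between tmin and tmax after decimation correspond to the 35 estimators reported during fitting below. A quick sanity check (a sketch, not part of the original example):

X_check = epochs.get_data(copy=False)  # shape: (n_epochs, n_channels, n_times)
print(X_check.shape)
print(len(epochs.times))  # 35 time samples after decim=2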

We will train the classifier to discriminate visual from auditory trials using all left-stimulus epochs, and then test it on all right-stimulus epochs.

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(solver="liblinear"),  # liblinear is faster than lbfgs
)
time_gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=None, verbose=True)

# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs["Left"].get_data(copy=False), y=epochs["Left"].events[:, 2] > 2)
100%|██████████| Fitting GeneralizingEstimator : 35/35 [00:00<00:00,   70.14it/s]
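Note that epochs["Left"] pools Auditory/Left and Visual/Left thanks to the "/" separators in event_id, while the label vector events[:, 2] > 2 is True for visual trials (event codes 3 and 4) and False for auditory trials (codes 1 and 2). A small check of the labels (a sketch, assuming the epochs above):

import numpy as np

y_left = epochs["Left"].events[:, 2] > 2  # True = visual, False = auditory
print(np.unique(y_left, return_counts=True))  # rough class balance check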

Score on the epochs where the stimulus was presented to the right.

scores = time_gen.score(
    X=epochs["Right"].get_data(copy=False), y=epochs["Right"].events[:, 2] > 2
)
100%|██████████| Scoring GeneralizingEstimator : 1225/1225 [00:03<00:00,  363.24it/s]
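The returned scores array has shape (n_train_times, n_test_times). Its diagonal corresponds to training and testing at the same time sample (ordinary time-resolved decoding), while off-diagonal entries quantify how well a topography learned at one time generalizes to another. For example (a sketch, not part of the original example):

import numpy as np

print(scores.shape)  # (35, 35): training times x testing times
diag_scores = np.diag(scores)  # entries where train time == test time
print(diag_scores.max(), epochs.times[np.argmax(diag_scores)])  # peak AUC and its latency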

Plot

fig, ax = plt.subplots(layout="constrained")
im = ax.matshow(
    scores,
    vmin=0,
    vmax=1.0,
    cmap="RdBu_r",
    origin="lower",
    extent=epochs.times[[0, -1, 0, -1]],
)
ax.axhline(0.0, color="k")
ax.axvline(0.0, color="k")
ax.xaxis.set_ticks_position("bottom")
ax.set_xlabel(
    'Condition: "Right"\nTesting Time (s)',
)
ax.set_ylabel('Condition: "Left"\nTraining Time (s)')
ax.set_title("Generalization across time and condition", fontweight="bold")
fig.colorbar(im, ax=ax, label="Performance (ROC AUC)")
plt.show()
[Figure: "Generalization across time and condition" — training time (condition "Left") vs. testing time (condition "Right"), color-coded by ROC AUC]
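As a complement to the full matrix, the diagonal can be plotted on its own to compare against standard time-resolved decoding (a minimal sketch, not part of the original example):

import numpy as np

fig_diag, ax_diag = plt.subplots(layout="constrained")
ax_diag.plot(epochs.times, np.diag(scores), label="train time == test time")
ax_diag.axhline(0.5, color="k", linestyle="--", label="chance (AUC = 0.5)")
ax_diag.set_xlabel("Time (s)")
ax_diag.set_ylabel("ROC AUC")
ax_diag.set_title("Decoding along the diagonal")
ax_diag.legend()
plt.show()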

References#

[1] Jean-Rémi King and Stanislas Dehaene. Characterizing the dynamics of mental representations: the temporal generalization method. Trends in Cognitive Sciences, 18(4):203–210, 2014.