This documentation is for development version 0.18.dev0.

mne.beamformer.rap_music

mne.beamformer.rap_music(evoked, forward, noise_cov, n_dipoles=5, return_residual=False, verbose=None)

RAP-MUSIC source localization method.

Compute Recursively Applied and Projected MUltiple SIgnal Classification (RAP-MUSIC) on evoked data.

Note

The goodness of fit (GOF) of all the returned dipoles is the same and corresponds to the GOF of the full set of dipoles.

Parameters:
evoked : instance of Evoked

Evoked data to localize.

forward : instance of Forward

Forward operator.

noise_cov : instance of Covariance

The noise covariance.

n_dipoles : int

The number of dipoles to look for. The default value is 5.

return_residual : bool

If True, the residual is returned as an Evoked instance.

verbose : bool, str, int, or None

If not None, override default verbose level (see mne.verbose() and Logging documentation for more).

Returns:
dipoles : list of instance of Dipole

The dipole fits.

residual : instance of Evoked

The residual, i.e. the part of the data not explained by the fitted dipoles. Only returned if return_residual is True.

See also

mne.fit_dipole

Notes

The references are:

J.C. Mosher and R.M. Leahy (1999). Source localization using recursively applied and projected (RAP) MUSIC. IEEE Transactions on Signal Processing, 47(2), 332-340. https://doi.org/10.1109/78.740118

J.C. Mosher and R.M. Leahy (1996). EEG and MEG source localization using recursively applied (RAP) MUSIC. Signals, Systems and Computers, 1996, vol. 2, pp. 1201-1207. https://doi.org/10.1109/ACSSC.1996.599135

New in version 0.9.0.
