This documentation is for development version 0.18.dev0.

mne.minimum_norm.estimate_snr

mne.minimum_norm.estimate_snr(evoked, inv, verbose=None)

Estimate the SNR as a function of time for evoked data.

Parameters:
evoked : instance of Evoked

Evoked instance.

inv : instance of InverseOperator

The inverse operator.

verbose : bool, str, int, or None

If not None, override default verbose level (see mne.verbose() and Logging documentation for more).

Returns:
snr : ndarray, shape (n_times,)

The SNR estimated from the whitened data.

snr_est : ndarray, shape (n_times,)

The SNR estimated using the mismatch between the unregularized solution and the regularized solution.
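
The idea behind the first return value can be illustrated with a toy NumPy sketch (this is a hedged illustration of the principle, not MNE's exact computation; all names and the toy data are made up): after whitening, pure noise has unit variance per channel, so excess power across channels at a given time point reflects signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times = 60, 100

# Toy whitened data: unit-variance noise plus a signal burst
# in one time window.
noise = rng.standard_normal((n_channels, n_times))
signal = np.zeros((n_channels, n_times))
signal[:, 40:60] = 5.0
w = noise + signal

# Crude power-SNR estimate per time point: mean squared whitened
# amplitude minus the unit noise variance, clipped at zero.
snr_power = np.maximum(np.mean(w**2, axis=0) - 1.0, 0.0)
```

In the burst window the estimate is large; outside it, the mean squared amplitude hovers around the unit noise variance and the estimate stays near zero.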

Notes

snr_est is estimated by applying different amounts of inverse regularization and checking the mismatch between the predicted and measured whitened data.

In more detail, given our whitened inverse obtained from SVD:

\[\tilde{M} = R^\frac{1}{2}V\Gamma U^T\]

The values in the diagonal matrix \(\Gamma\) are expressed in terms of the chosen regularization \(\lambda\approx\frac{1}{\rm{SNR}^2}\) and singular values \(\lambda_k\) as:

\[\gamma_k = \frac{1}{\lambda_k}\frac{\lambda_k^2}{\lambda_k^2 + \lambda^2}\]
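
As a quick numerical check of this formula (a standalone NumPy sketch with made-up toy values):

```python
import numpy as np

lambda_k = np.array([10.0, 3.0, 1.0, 0.1])  # toy singular values, descending
lam = 0.5                                   # regularization parameter lambda

# gamma_k = (1/lambda_k) * lambda_k^2 / (lambda_k^2 + lambda^2)
gamma = (1.0 / lambda_k) * lambda_k**2 / (lambda_k**2 + lam**2)

# Without regularization the factor is the plain inverse 1/lambda_k;
# regularization shrinks it, most strongly for small singular values.
shrinkage = gamma * lambda_k  # fraction of 1/lambda_k that survives
```

Large singular values pass almost unchanged while small (noise-dominated) ones are damped, which is the usual regularized-inverse behavior.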

We also know that our predicted data is given by:

\[\hat{x}(t) = G\hat{j}(t)=C^\frac{1}{2}U\Pi w(t)\]

And thus our predicted whitened data is just:

\[\hat{w}(t) = U\Pi w(t)\]

Where \(\Pi\) is diagonal with entries:

\[\lambda_k\gamma_k = \frac{\lambda_k^2}{\lambda_k^2 + \lambda^2}\]

Note that with no regularization, \(\Pi\) is just the identity matrix. Here we test the squared magnitude of the difference between the unregularized and regularized solutions, choosing the largest regularization that still achieves a \(\chi^2\)-test significance of 0.001.
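
This selection rule can be sketched numerically. The following is a hedged, self-contained NumPy/SciPy toy (one plausible reading of the procedure, not MNE's implementation; all names and the toy data are made up): the mismatch between the unregularized (\(\Pi = I\)) and regularized predictions grows with \(\lambda\), and we keep the largest \(\lambda\) whose mismatch is still plausible under a \(\chi^2\) distribution with one degree of freedom per channel.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_ch = 40
alpha = 0.001  # chi^2 significance level from the docstring

# Toy singular values and one whitened data vector expressed in
# U-coordinates: unit-variance noise plus signal in the top components.
lambda_k = np.sort(rng.uniform(0.1, 5.0, n_ch))[::-1]
w = rng.standard_normal(n_ch)
w[:5] += 10.0

def mismatch(lam):
    """Squared magnitude of (unregularized - regularized) prediction.

    The unregularized prediction uses Pi = I, so per component the
    difference is (1 - lambda_k^2 / (lambda_k^2 + lam^2)) * w_k.
    """
    pi = lambda_k**2 / (lambda_k**2 + lam**2)
    return float(np.sum(((1.0 - pi) * w) ** 2))

# Keep the biggest regularization whose mismatch is still consistent
# with noise (chi^2 test with n_ch degrees of freedom).
candidates = np.logspace(-3, 2, 200)
best_lam = max(l for l in candidates if chi2.sf(mismatch(l), df=n_ch) >= alpha)
snr_est = 1.0 / np.sqrt(best_lam)  # via lambda ~= 1 / SNR^2
```

Tiny \(\lambda\) barely perturbs the prediction, so its mismatch always passes the test; very large \(\lambda\) suppresses the signal components and is rejected, leaving an intermediate value from which an SNR estimate follows.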

New in version 0.9.0.