# GLM Level 1

!!! note

    Quotes on this page are from Poldrack, Mumford and Nichols (2011) unless otherwise noted.

Typically this is a run-level (or sometimes subject-level) analysis; a run is the smallest continuous scanning session.

## BOLD signal

!! Add more here from earlier in the chapter !!

## Parametric regressors

PMN on parametric regressors (pp. 81–82):

“some feature of a stimulus or task can be parametrically varied, with the expectation that this will be reflected in the strength of the neural response.”

“It is important that the height values are demeaned prior to creating the regressor, in order to ensure that the parametric regressor is not correlated with the unmodulated regressor. It is also important that the GLM always include an unmodulated regressor in addition to the parametrically modulated regressor. This is analogous to the need to include an intercept term in a linear regression model along with the slope term, since without an intercept it is assumed that the fitted line will go through the origin.”
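A minimal sketch, using plain numpy and made-up onsets and heights, of what this looks like in practice (the sampling grid and numbers are assumptions for illustration, not from PMN):

```python
import numpy as np

# Made-up trial onsets (seconds) and a parametrically varied stimulus
# feature ("height"), on a 1 s sampling grid for illustration.
onsets = np.array([10, 30, 50, 70])
heights = np.array([1.0, 2.0, 3.0, 4.0])
n_scans, tr = 100, 1.0
frame_times = np.arange(n_scans) * tr

# Unmodulated regressor: a unit-amplitude impulse at every trial onset.
unmodulated = np.zeros(n_scans)
unmodulated[np.searchsorted(frame_times, onsets)] = 1.0

# Parametric regressor: same onsets, but amplitudes set to the DEMEANED
# heights, so this column is uncorrelated with the unmodulated one.
parametric = np.zeros(n_scans)
parametric[np.searchsorted(frame_times, onsets)] = heights - heights.mean()

# Both stick functions would then be convolved with an HRF and entered as
# separate columns of the design matrix, alongside an intercept.
```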

## Response time modeling

PMN on response time modeling (pp. 82–83):

“a stimulus that is twice as long will also have a BOLD response that is about twice as high (red versus blue line). Thus, trials with longer processing times could have much greater activation simply due to the amount of time on the task, rather than reflecting any qualitative difference in the nature of neural processing. In fact, it is likely that many differences in activation between conditions observed in functional neuroimaging studies are due simply to the fact that one condition takes longer than the other.”

“A second alternative, which we prefer, is to create the primary regressor using a constant duration, but then include an additional parametric regressor that varies according to response time. This will ensure that the effects of response time are removed from the model and also allows the separate interrogation of constant effects and effects that vary with response time.”
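A sketch of the approach PMN prefer, in the same stick-function style as above (the onsets and response times are made up):

```python
import numpy as np

onsets = np.array([10, 30, 50, 70])     # trial onsets in seconds (made up)
rts = np.array([0.8, 1.5, 0.6, 2.1])    # response times in seconds (made up)
n_scans, tr = 100, 1.0
idx = np.searchsorted(np.arange(n_scans) * tr, onsets)

# Primary regressor: constant duration/amplitude for every trial.
primary = np.zeros(n_scans)
primary[idx] = 1.0

# Parametric RT regressor: demeaned response times at the same onsets, so
# RT-related variance is modeled separately from the constant trial effect.
rt_modulator = np.zeros(n_scans)
rt_modulator[idx] = rts - rts.mean()

# After HRF convolution, contrasts on `primary` interrogate the constant
# effect, while contrasts on `rt_modulator` pick up effects that vary with RT.
```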

!! Add more here from Mumford et al. (2023) !!

## Nuisance regressors

What is a nuisance regressor? From PMN (p. 84):

“The term nuisance is used to describe regressors that are included in the model to pick up extra variability in the data when there is no interest in carrying out inference on the corresponding parameters.”
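A common concrete case is including the six head-motion (realignment) parameters as nuisance columns. A minimal sketch with random stand-in data (the shapes and numbers are assumptions, not from PMN):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100

# Stand-ins for two HRF-convolved task regressors of interest.
task = rng.standard_normal((n_scans, 2))

# Stand-ins for the six rigid-body realignment parameters (3 translations,
# 3 rotations), included purely as nuisance regressors.
motion = rng.standard_normal((n_scans, 6))

intercept = np.ones((n_scans, 1))
X = np.hstack([task, motion, intercept])

# The contrast targets only the columns of interest; the nuisance columns
# and the intercept get zeros, so no inference is carried out on them.
contrast = np.array([1, -1] + [0] * 6 + [0])
```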

## Correlation between regressors

One reason why one should check correlations in the design matrix, described by PMN in the context of the motion regressors but applying more generally (Poldrack, Mumford and Nichols, p. 84):

“if the motion is correlated with the task, inclusion of the motion regressors may eliminate significant regions seen otherwise. This is because the GLM bases the significance of experimental effects only on the variability uniquely attributable to experimental sources. When such variability is indistinguishable between motion and experimental sources, the significance of the results is reduced, which prevents movement-induced false positives.”

Ok, so correlations in the design matrix are not good. So what about orthogonalizing the regressors (i.e. using the residuals from regressing one of the correlated regressors on the other)?

“[It is] because of the arbitrary apportionment of variability and significance just demonstrated that we normally recommend against orthogonalization.” I.e. if you orthogonalize, the orthogonalized regressor keeps only the variance unique to it, while the regressor that is left as is gets credited with both its unique variance and the shared variance, so its apparent impact is inflated.
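For concreteness, this is roughly what the orthogonalization step amounts to; the sketch is only meant to make the argument tangible, not to recommend the procedure:

```python
import numpy as np

def orthogonalize(x, wrt):
    """Residuals of x after regressing it on wrt (plus an intercept)."""
    design = np.column_stack([wrt, np.ones_like(wrt)])
    beta, *_ = np.linalg.lstsq(design, x, rcond=None)
    return x - design @ beta

# orthogonalize(x, y) keeps only the variance in x that is unique with
# respect to y, while y keeps both its unique variance AND the shared
# variance -- the arbitrary apportionment PMN warn about.
```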

So more generally:

- whenever possible, keep correlations in the design matrix low

- if some correlation remains, know where it is and how strong it is, so you can take the shared variability into account when interpreting the unique variance associated with each regressor (a quick check is sketched after this list)
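One way to do that check with plain numpy, assuming `X` holds the design matrix with one regressor per column (the helper names and the 0.5 threshold are made up for illustration):

```python
import numpy as np

def design_correlations(X):
    """Pairwise correlations between the columns of a design matrix X."""
    return np.corrcoef(X, rowvar=False)

def flag_high_correlations(X, threshold=0.5):
    """List column pairs whose absolute correlation exceeds `threshold`."""
    corr = design_correlations(X)
    i, j = np.triu_indices_from(corr, k=1)
    return [(a, b, corr[a, b]) for a, b in zip(i, j) if abs(corr[a, b]) > threshold]
```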

## BOLD noise

There are two types of noise: white noise, which is not concentrated at any particular frequency, and colored (or structured) noise, which occurs at particular frequencies, often reflecting a coherent source of variability.

### Drift (a.k.a. 1/f noise, inverse-frequency noise, low-frequency noise)

- Origin: at the time PMN was written, it was not known why this drift occurs.

- Implication: when planning a study, make sure that “the frequency of the task does not fall into the range between 0 and 0.015 Hz where the low-frequency noise has typically been found, meaning that the frequency of trial presentation should be faster than one cycle every 65–70 seconds (i.e., a block length of no more than about 35 seconds for an on–off blocked design).”

- Fix: step 1 is high-pass filtering; step 2 is prewhitening (because the GLM assumes that each time point is independent, i.e. that there is no autocorrelation).

    - High-pass filtering: add a discrete cosine transform (DCT) basis set to the design matrix. “As a rough rule of thumb, the longest period of the drift DCT basis should be at least twice the period of an on–off block design.” Another approach is to use LOWESS, as done in FSL; both yield very similar results. (A sketch of a DCT drift basis follows this list.)

    - Prewhitening: the GLM assumes no autocorrelation (each observation is independent) and no heteroscedasticity (the variance should be identical across time points); prewhitening is intended to deal with both issues. “In the first step, the GLM is fit ignoring temporal autocorrelation to obtain the model residuals, which is the original data with all modeled variability removed. The residuals are then used to estimate the autocorrelation structure, and then model estimation is carried out after prewhitening both the data and the design matrix.” See PMN pp. 90–91 for the details of the math, the different models for autocorrelation, and how SPM and FSL implement prewhitening. (A rough AR(1) prewhitening sketch follows this list.)
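A rough sketch of both steps for a single voxel time series, with plain numpy. The DCT construction follows the usual cosine basis; the AR(1) model is only the simplest of the autocorrelation models PMN describe, not what SPM or FSL actually implement, and the 128 s cutoff is an assumption:

```python
import numpy as np

def dct_basis(n_scans, tr, cutoff=128.0):
    """Discrete cosine drift regressors for periods longer than `cutoff` seconds."""
    n_basis = int(np.floor(2 * n_scans * tr / cutoff)) + 1
    t = np.arange(n_scans)
    cols = [np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans)) for k in range(1, n_basis)]
    return np.column_stack(cols) if cols else np.empty((n_scans, 0))

def prewhiten_ar1(y, X):
    """Two-step AR(1) prewhitening: fit by OLS, estimate the lag-1
    autocorrelation from the residuals, whiten data and design, refit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]

    def whiten(v):
        w = v.astype(float)
        w[1:] = v[1:] - rho * v[:-1]
        w[0] = v[0] * np.sqrt(1 - rho ** 2)
        return w

    y_w = whiten(y)
    X_w = np.column_stack([whiten(X[:, k]) for k in range(X.shape[1])])
    beta_w, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)
    return beta_w, rho

# Usage sketch (y = one voxel's time series, X = HRF-convolved task regressors):
#   X_full = np.column_stack([X, dct_basis(len(y), tr=2.0), np.ones(len(y))])
#   beta, rho = prewhiten_ar1(y, X_full)
```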

!!! note

    Before prewhitening was implemented with good autocorrelation models, this step was handled instead by precoloring (a.k.a. temporal smoothing or low-pass filtering). Precoloring is no longer recommended because, especially for event-related designs, it can remove task-related signal that has spread to higher frequencies.

## Design efficiency

Why do we jitter inter-stimulus intervals? To increase the efficiency of the design, i.e. to increase the power to detect the unique variance associated with each regressor. See PMN Eq. 5.5 and Section 5.3.2 for the details of the math.
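PMN's Eq. 5.5 measures efficiency as (up to the noise variance) the inverse of the variance of the contrast estimate, 1 / trace(c (XᵀX)⁻¹ cᵀ). A minimal sketch of how candidate designs could be compared on this measure (the comparison itself is only outlined in the comment):

```python
import numpy as np

def efficiency(X, c):
    """Design efficiency for contrast c: 1 / trace(c (X'X)^-1 c')."""
    c = np.atleast_2d(c)
    return 1.0 / np.trace(c @ np.linalg.inv(X.T @ X) @ c.T)

# To compare schedules: build the HRF-convolved design matrix for a fixed-ISI
# schedule and for a jittered schedule, then compute efficiency(X, c) for the
# contrast of interest; jittering generally improves the ability to separate
# overlapping responses.
```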