08 - first level GLM
After preprocessing your data, it's time to run your first level GLM (= general linear model).
In this step, you compute mean contrast estimates for predefined contrasts for each subject individually. These estimates are used later in the second level analysis, where you extract the values relevant for your research question/hypothesis.
1 Surface-based
This page describes the first level GLM for a previous master thesis. It included 1 localizer run and 8 experimental runs. The experimental runs differed in the attention task (“L”etter vs. “C”olor) and the flicker condition (“F”ull vs. “N”othing; i.e. flickering against full circles or against the background). Every “attention task” x “flicker condition” combination was presented twice. For every task (Loc, LF, LN, CF, CN), a separate GLM had to be computed. To analyze these data, a surface-based approach was chosen. Important: This study only included one session! If you have multiple sessions, you have to adapt the script accordingly!
You can find the example Jupyter notebook here: /shared/website/GLM_surface_Main.ipynb
Keep in mind that some processes might take some time!
Short explanation of the process (enumeration corresponds to the cell number in the notebook):
1. Load the necessary packages/modules
2. Helper function to get SurfaceImages (can be ignored; hopefully not necessary in the future)
3. Definition of paths, parameters and constants
    - If you only have one task or one subject, enter the corresponding name/ID, but make sure that it is within the square brackets! (Subsequent processes loop over these lists; if there is only one entry, the loop simply stops after one iteration.)
4. Extract the first level GLMs from your BIDS structure (if everything is defined correctly in the cell before, you can simply run this without any changes; see the first sketch after this list)
5. Optional: Uncomment (= remove the `#`) and run to confirm, for example, the number of `FirstLevelModel` objects that were extracted
6. Get the surface data with the workaround (if everything was defined correctly, you can simply run this without any changes)
7. In this cell, the models are actually fitted to the data. Again, if everything was defined correctly, you can simply run this cell (even though the processes are parallelized, this probably takes some time!)
8. Now you define the contrasts (see the second sketch after this list).
    - In this case, the cell loops through the tasks (definitely necessary) and the subjects (could potentially be omitted if everything is the same for all participants) and defines the corresponding contrasts.
    - First, a directory is set up to store the design matrix (plotted with `plot_design_matrix` immediately afterwards) and the contrast visualizations (`plot_contrast_matrix` at the end).
    - You have to think through your task & subject structure (it influences, for example, how many objects you have in `fitted_models` and in which order) to ensure that the logic holds! (Here, the GLM for the first task is stored for the 5 subjects, then for the second task, etc.; depending on your setup, you might need to change the for loops.)
    - `basic_contrasts` is a dictionary with the column name as key and the column as value for each column of the design matrix (\(\to\) it has as many entries/key-value pairs as the design matrix has columns).
    - `contrasts` is the actual definition of your contrasts and is implemented as a dictionary as well. The key is the name of your contrast (it can be chosen freely, but it is advisable to use a descriptive name, so that you later immediately know what the contrast is about). The value is the actual contrast, e.g. one condition vs. another. Make sure that the value in the square brackets behind `basic_contrasts` corresponds to the actual name of a column in the design matrix (as `basic_contrasts` is a dictionary, `basic_contrasts[key_name]` looks up the key named `key_name` and returns its stored value).
9. Now you actually run the contrasts, i.e. you compute the values for the subjects/tasks (see the third sketch after this list).
    - Change the names of the lists within the for loops. You need as many empty lists as you defined contrasts in the previous step.
    - For every contrast (and thus list) you need the `.compute_contrast` section with the corresponding name that you defined in the previous step.
10. Saves the activation maps (should run without any changes). These are the files with which you later work in the second level GLM.
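For orientation, the extraction and fitting cells boil down to the following pattern. This is only a condensed sketch using nilearn's `first_level_from_bids`: the BIDS path, the space label and the derivatives folder are assumptions, and the SurfaceImage workaround from the notebook is left out.

```python
# Condensed sketch of extracting and fitting the first level GLMs.
# bids_dir, space_label and derivatives_folder are assumptions -- adapt them!
from nilearn.glm.first_level import first_level_from_bids

bids_dir = "/path/to/bids"            # hypothetical BIDS root
tasks = ["LF", "LN", "CF", "CN"]      # task labels of the main experiment
fitted_models = {}

for task in tasks:
    # Returns one unfitted FirstLevelModel per subject, plus the matching
    # run images, events and confounds found in the BIDS dataset.
    models, run_imgs, events, confounds = first_level_from_bids(
        bids_dir,
        task_label=task,
        space_label="MNI152NLin2009cAsym",  # volume space; surfaces need the workaround
        derivatives_folder="derivatives/fmriprep",
    )
    for model, imgs, ev, conf in zip(models, run_imgs, events, confounds):
        fitted_models[(task, model.subject_label)] = model.fit(
            imgs, events=ev, confounds=conf
        )
```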
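Defining the contrasts then follows the standard nilearn pattern shown below. This sketch continues from the hypothetical `fitted_models` dictionary above; the condition names `letter` and `color` are made-up placeholders for your actual design-matrix columns.

```python
# Sketch of defining contrasts; "letter"/"color" are placeholder column names.
import numpy as np
from nilearn.plotting import plot_contrast_matrix, plot_design_matrix

model = fitted_models[("LF", "01")]    # hypothetical task/subject key
design_matrix = model.design_matrices_[0]
plot_design_matrix(design_matrix)      # inspect the columns first

# One unit vector per design-matrix column: basic_contrasts has exactly as
# many entries/key-value pairs as the design matrix has columns.
basic_contrasts = {
    column: np.eye(design_matrix.shape[1])[i]
    for i, column in enumerate(design_matrix.columns)
}

# Descriptive contrast names as keys, contrast vectors as values.
contrasts = {
    "letter_vs_color": basic_contrasts["letter"] - basic_contrasts["color"],
}
for name, vector in contrasts.items():
    plot_contrast_matrix(vector, design_matrix=design_matrix)
```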
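Running and saving the contrasts then looks roughly like this (again a sketch: the output directory and file-name scheme are assumptions, the loop assumes identical design matrices across models, and for SurfaceImages the saving step may differ):

```python
# Sketch of computing and saving the contrast maps for all fitted models.
import os

out_dir = "/path/to/output"            # hypothetical output directory
os.makedirs(out_dir, exist_ok=True)

letter_vs_color_maps = []              # one collector list per defined contrast
for (task, sub), model in fitted_models.items():
    z_map = model.compute_contrast(
        contrasts["letter_vs_color"], output_type="z_score"
    )
    z_map.to_filename(f"{out_dir}/sub-{sub}_task-{task}_letter-vs-color_z.nii.gz")
    letter_vs_color_maps.append(z_map)
```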
The notebook /shared/website/GLM_surface_Loc.ipynb was used for the GLM of the localizer data. The structure and logic are the same as for the main experiment.
2 Volume-based
If you do a volume-based analysis, you can also do this with nilearn and a Jupyter notebook. You can find an example here: /shared/website/GLM_volume.ipynb. The structure and logic are again similar, but some steps have to be done differently compared to a surface-based analysis (see the sketch below).
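As a minimal illustration of the difference: in the volume-based case, `FirstLevelModel` fits NIfTI images directly, so no SurfaceImage workaround is needed. The file names, TR and condition names below are placeholders, not the actual study files.

```python
# Minimal volume-based sketch; file names, TR and condition names are placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.read_csv("sub-01_task-LF_events.tsv", sep="\t")  # onset/duration/trial_type
model = FirstLevelModel(t_r=2.0, smoothing_fwhm=5.0)
model = model.fit("sub-01_task-LF_bold.nii.gz", events=events)

# String contrasts are resolved against the design-matrix column names.
z_map = model.compute_contrast("letter - color", output_type="z_score")
z_map.to_filename("sub-01_letter-vs-color_z.nii.gz")
```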