08 - first level GLM

MRI · Analysis · fmriPrep · Neurodesk · nilearn

Several approaches to run a first level GLM using nilearn

Author: MG

Published: May 5, 2025

After preprocessing your data, it's time to run your first level GLM (GLM = general linear model).

In this step, you compute mean contrast estimates for predefined contrasts for each subject individually. These estimates are used later in the second level analysis, where you extract the values relevant to your research question/hypothesis.
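
For orientation, this is the standard GLM behind this step (general background, not specific to our notebooks): for each subject, the preprocessed time series of every voxel or vertex, \(Y\), is modeled as

\[
Y = X\beta + \varepsilon,
\]

where \(X\) is the design matrix built from your events (and confounds), \(\beta\) are the regression weights, and \(\varepsilon\) is the noise. A contrast is a weight vector \(c\) over the columns of \(X\); the first level GLM estimates \(\hat{\beta}\) and the contrast value \(c^\top \hat{\beta}\) per subject, and these per-subject maps are what enter the second level analysis.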

1 Surface-based

This page describes the first level GLM for a previous master's thesis. The study included 1 localizer run and 8 experimental runs. The experimental runs differed in the attention task used (“L”etter vs. “C”olor) and the flicker condition (“F”ull vs. “N”othing, i.e. flickering against full circles or against the background). Every “attention task” × “flicker condition” combination was presented twice. For every task (Loc, LF, LN, CF, CN), a separate GLM had to be computed. To analyze these data, a surface-based approach was chosen. Important: this study only included one session! If you have multiple sessions, you have to adapt the script accordingly!

You can find the example jupyter notebook here: /shared/website/GLM_surface_Main.ipynb

Keep in mind that some of these steps can take quite a while!

Short explanation of the process (the enumeration corresponds to the cell numbers in the notebook):

  1. Load necessary packages/modules

  2. Helper function to get SurfaceImages (can be ignored, hopefully not necessary in the future)

  3. Definition of paths, parameters and constants

    • If you only have one task or one subject, enter the corresponding name/ID, but make sure that it stays within the square brackets! (Subsequent steps loop over these lists; with only one entry, the loop simply stops after one iteration.)
  4. Extract the first level GLM from your BIDS structure (if everything is defined correctly in the cell before, you can simply run this without any changes; see the first_level_from_bids sketch after this list for roughly what this cell and the model fitting in cell 7 do)

  5. Optional: Uncomment (= remove the #) and run to confirm, for example, the number of FirstLevelModel objects that were extracted

  6. Get the surface data with the workaround (if everything was defined correctly, you can simply run this without any changes)

  7. In this cell, the models are actually fitted to the data. Again, if everything was defined correctly, you can simply run this cell (even though the processes are parallelized, this will probably take some time!)

  8. Now you define the contrasts (see the contrast-definition sketch after this list).

    • In this case, it loops through the tasks (definitely necessary) and subjects (could potentially be omitted if everything is the same for all participants) and defines the corresponding contrasts
    • First, a directory is set up to store the design matrix plots (plot_design_matrix immediately afterwards) and the contrast visualizations (plot_contrast_matrix at the end)
    • You have to think through your task & subject structure (it influences how many objects you have, and in which order, for example in fitted_models) to ensure that the logic holds! (Here, the GLMs for the first task are stored for the 5 subjects, then for the second task, etc.; depending on your setup, you might need to change the for loops.)
    • basic_contrasts is a dictionary with the column name as key and the corresponding contrast vector (a 1 for that column, 0 elsewhere) as value, for each column of the design matrix (\(\to\) it has as many entries/key-value pairs as the design matrix has columns)
    • contrasts is the actual definition of your contrasts and is implemented as a dictionary as well. The key is the name of your contrast (it can be chosen freely, but it is advisable to use a descriptive name so that you later immediately know what the contrast is about). The value is the actual contrast, e.g. one condition vs. another. Make sure that the value in the square brackets behind basic_contrasts corresponds to the actual name of a column in the design matrix (since basic_contrasts is a dictionary, basic_contrasts[key_name] looks up the key key_name and returns its stored value)
  9. Now you actually run the contrasts, i.e. you compute the contrast maps for the subjects/tasks (see the compute-and-save sketch after this list)

    • Change the names of the lists within the for loops. You need as many empty lists as you defined contrasts in the previous step
    • For every contrast (and thus every list) you need a .compute_contrast section with the corresponding name that you defined in the previous step
  10. Saves the activation maps (should run without any changes)

  11. Saves the activation maps (should run without any changes). These are the files you will later work with in the second level GLM
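
The sketch below shows roughly what cells 4 and 7 do, using nilearn's first_level_from_bids and FirstLevelModel.fit. All paths and labels are placeholders (the real values live in cell 3 of the notebook), and the surface notebook fits the models to SurfaceImages obtained via the workaround in cell 6 rather than to the volumetric images loaded here; treat this as a minimal volume-style sketch, not the exact notebook code.

```python
from nilearn.glm.first_level import first_level_from_bids

# Placeholder paths and labels -- adapt them to your dataset (cf. cell 3).
bids_dir = "/path/to/bids"           # raw BIDS dataset
task_label = "LF"                    # one entry of your task list
space_label = "MNI152NLin2009cAsym"  # output space of the preprocessed data

# Build one (unfitted) FirstLevelModel per subject, together with the matching
# preprocessed images, events and confounds found in the fMRIPrep derivatives.
models, models_imgs, models_events, models_confounds = first_level_from_bids(
    bids_dir,
    task_label,
    space_label=space_label,
    derivatives_folder="derivatives/fmriprep",
    smoothing_fwhm=5.0,
)

# Fit every subject's model to its runs (the slow part, cf. cell 7).
# With real fMRIPrep confounds you may want to select a subset of columns first.
fitted_models = [
    model.fit(imgs, events=events, confounds=confounds)
    for model, imgs, events, confounds in zip(
        models, models_imgs, models_events, models_confounds
    )
]
```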
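
For the contrast definition in cell 8, the usual nilearn pattern looks like this. The condition names "attended" and "unattended" are hypothetical and have to be replaced with the actual column names of your design matrix (the trial_type values from your events files):

```python
import numpy as np
from nilearn.plotting import plot_contrast_matrix, plot_design_matrix

# Design matrix of the first run of the first fitted model.
design_matrix = fitted_models[0].design_matrices_[0]
plot_design_matrix(design_matrix)

# basic_contrasts: "column name" -> contrast vector (1 for that column, 0 elsewhere).
n_columns = design_matrix.shape[1]
basic_contrasts = {
    name: np.eye(n_columns)[i] for i, name in enumerate(design_matrix.columns)
}

# contrasts: "descriptive contrast name" -> weight vector over the design-matrix columns.
contrasts = {
    "attended_vs_unattended": basic_contrasts["attended"] - basic_contrasts["unattended"],
    "all_stim_vs_baseline": basic_contrasts["attended"] + basic_contrasts["unattended"],
}

# Visual sanity check of each contrast against the design matrix.
for name, weights in contrasts.items():
    plot_contrast_matrix(weights, design_matrix)
```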
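
Computing and saving the contrast maps (cells 9–11) then boils down to something like the following. The output directory and file names are only examples, and in the surface-based notebook the saving step looks different (surface data instead of NIfTI volumes):

```python
from pathlib import Path

out_dir = Path("firstlevel_results")  # placeholder output directory
out_dir.mkdir(parents=True, exist_ok=True)

# One list of maps per contrast, with one entry per subject (cf. cell 9).
z_maps = {name: [] for name in contrasts}

for sub_idx, model in enumerate(fitted_models):
    for name, weights in contrasts.items():
        z_map = model.compute_contrast(weights, output_type="z_score")
        z_maps[name].append(z_map)
        # Save to disk for the second level GLM (cf. cells 10/11).
        z_map.to_filename(str(out_dir / f"sub-{sub_idx:02d}_contrast-{name}_zmap.nii.gz"))
```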

The notebook /shared/website/GLM_surface_Loc.ipynb was used for the GLM of the localizer data. The structure and logic are the same as for the main experiment.

2 Volume-based Analysis

If you run a volume-based analysis, you can also do this with nilearn and a Jupyter notebook. You can find an example here: /shared/website/GLM_volume.ipynb. The structure and logic are again similar, but some steps have to be done differently compared to a surface-based analysis.
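
As a rough orientation (not the notebook itself), a single run of a volume-based first level GLM with nilearn can look like this. All file names, the TR, and the condition names "attended"/"unattended" are placeholders you have to adapt to your own data:

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Placeholder file names -- adapt them to your fMRIPrep output.
bold_file = "sub-01_task-LF_run-1_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz"
events = pd.read_csv("sub-01_task-LF_run-1_events.tsv", sep="\t")
confounds = pd.read_csv(
    "sub-01_task-LF_run-1_desc-confounds_timeseries.tsv", sep="\t"
)[["trans_x", "trans_y", "trans_z", "rot_x", "rot_y", "rot_z"]]

# t_r must match your sequence; the smoothing kernel is a typical but arbitrary choice.
model = FirstLevelModel(t_r=2.0, smoothing_fwhm=5.0, hrf_model="glover")
model = model.fit(bold_file, events=events, confounds=confounds)

# Contrasts can also be given as a string over design-matrix column names.
z_map = model.compute_contrast("attended - unattended", output_type="z_score")
z_map.to_filename("sub-01_task-LF_attended_vs_unattended_zmap.nii.gz")
```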
