Session 10 - fMRI Nilearn Flashcards

1
Q

What command do you use to clone the relevant git repository for the fMRI analysis?

A

!git clone -b s7_fmri --single-branch https://vcs.ynic.york.ac.uk/cn/pin-material.git

2
Q

What command do you use to install Nilearn, a required module for fMRI analysis in Colab?

A

!pip install nilearn

3
Q

How do you change the directory to ‘s7_fmri’ in a Python script?

A

import os
os.chdir('/content/pin-material/s7_fmri')

4
Q

We have now seen the main methods for loading and saving MRI and fMRI data in Python and we have performed a simple frequency domain analysis.

We needed … to do this

A

We just needed the nibabel nib.load() function to get data into numpy.

5
Q

What type of file is “filtered_func_data_MNI3mm.nii.gz” typically used for in fMRI analysis?

A

It is typically used as the preprocessed functional MRI (fMRI) data, often after spatial normalization to a standard brain template (MNI3mm).

6
Q

Command that sees ‘filtered_func_data_MNI3mm.nii.gz’

A
7
Q

We are going to move on to how to do some analysis of fMRI data in a more modern way, using a module called

A

nilearn

8
Q

nilearn comes from the same family of neuroimaging software as nibabel and implements

A

a whole bunch of fMRI analysis tools from simple to complex.

9
Q

Nilearn works with other modules and uses

A

a cool statistics and machine learning package called Scikit-Learn (sklearn) to do all sorts of fancy 21st-century machine learning things.

10
Q

Nilearn contains routines for performing

A

the standard univariate analyses that we show you at YNiC.

11
Q

What module is used for fMRI analysis in a modern manner, integrating with Scikit-Learn?

A

Nilearn

12
Q

Describe the experimental design used in the example provided in notes for applying nilearn, including the duration, TR, trial types, and characteristics of each trial type - (4)

A

The experiment lasted 320s with a TR of 2.0s, resulting in 160 volumes of data.

It involved two trial types, Hand movement, and Visual flickering grating, running simultaneously.

The Hand movement condition comprised 16s of finger movement followed by 16s of rest, starting at 6s into the experiment.

The Visual flickering grating appeared on the screen at 0s, stayed on for 10s, off for 10s, and then repeated.

13
Q

Summary of experiment used for nilearn - hand movement and visual grating

To summarise, our experiment involved the following timeline, shown for the first minute of the experiment. + in the table below means that the given stimulus is ‘on’.

A
14
Q

The hardest part of any fMRI analysis is telling the analysis software

A

what events happened, when they happened, and how long they lasted

15
Q

Often, the hardest part of any fMRI analysis is telling the analysis software what events happened, when they happened, and how long they lasted.

It’s easy to get this wrong - as…

A

for example, by forgetting to take into account offsets in the fMRI start times, mistaking the length of the TRs or using the wrong stimulus files

16
Q

If you are ever sitting in front of a ‘failed’ fMRI analysis, first check your

A

event files!

17
Q

What format does Nilearn require for describing stimuli, and what are the necessary components? - (3)

A

Nilearn requires stimuli to be described in a ‘TAB separated file’ (TSV) format.

The necessary components include the onset time of the events (in seconds), the type of event (trial_type), and the duration of each event.

It wants these values arranged in columns.

18
Q

What would our TSV file look like for our hand movement and visual grating experiment?

A
19
Q

TSVs are just like CSVs except the columns are

A

separated by tabs (\t).

20
Q

We can read TSV file into

A

a Pandas dataframe and pass it directly to the fMRI analysis code.

21
Q

First step in producing the TSV file for the hand movement and vision experiment - (3)

A

We will use pandas to generate a nice structure to hold the information (a table with the names, onsets and durations)

Pandas can then write that table directly to disk.

We will also define the hand and vision stimulus durations as variables at the start. Then, if we need to change them for some reason (perhaps to analyze another dataset), we only have to change those lines.

22
Q

The second step in constructing the TSV, after defining the durations, is

A

constructing variables containing a list of onset times

23
Q

Second step of TSV

Constructing onset times for hand movement, where the condition starts at 6 seconds into the experiment and alternates every 32 seconds. Each cycle consists of 16 seconds of activity followed by 16 seconds of rest:

hand_duration = 16 (seconds)

explaining this code - (2)

A

The ‘range()’ function is used to generate a list of onset times, starting at 6 and ending before 320 (the duration of the experiment), with a step size of ‘hand_duration * 2’.

Since ‘hand_duration’ is 16 seconds, ‘hand_duration * 2’ equals 32 seconds, ensuring the desired alternating pattern of 16s on and 16s off.
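The snippet the card refers to is not shown; here is a minimal sketch, assuming the variable name hand_movement_onsets used elsewhere in the deck:

```python
# Onset times for the hand-movement condition: start at 6 s and step
# by 32 s (16 s of movement + 16 s of rest) until the 320 s experiment ends.
hand_duration = 16  # seconds of finger movement per block
hand_movement_onsets = list(range(6, 320, hand_duration * 2))
print(hand_movement_onsets)
# [6, 38, 70, 102, 134, 166, 198, 230, 262, 294]
```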

24
Q

What would be output?

A
25
Q

Second step of TSV

Constructing onset times for the visual grating, which starts at 0s and alternates every 20s (10s on, 10s off).

explain this code - (2)

vision_onsets = list(range(0, 320, vision_duration*2))

vision_duration = 10 (seconds)

A

The ‘range()’ function is used to generate a list of onset times, starting at 0 and ending before 320 (the duration of the experiment), with a step size of ‘vision_duration * 2’.

Since ‘vision_duration’ is 10 seconds, ‘vision_duration * 2’ equals 20 seconds, ensuring the desired alternating pattern of 10s on and 10s off.
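The corresponding sketch for the vision condition, using the names given in the card:

```python
# Onset times for the visual grating: start at 0 s and step by 20 s
# (10 s on + 10 s off) until the 320 s experiment ends.
vision_duration = 10  # seconds of flickering per block
vision_onsets = list(range(0, 320, vision_duration * 2))
print(vision_onsets[0], vision_onsets[-1], len(vision_onsets))
# 0 300 16
```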

26
Q

What would be output of this code?

A
27
Q

After constructing onset and duration variables for hand movement and visual we need to make

A

two Pandas dataframes - one for the hand movements, the other for the vision events.

Within each data frame we keep the same ‘duration’.

28
Q

Third step producing pandas dataframe

Explain the creation of the dataframe for the Hand Movement condition - (2)

A

We create a dataframe using Pandas from dictionary type key (value-pairs), where each row represents a trial of the Hand Movement condition.

The dataframe consists of three columns: ‘trial_type’, indicating the type of trial (hand_movement); ‘onset’, containing the onset times calculated previously; and ‘duration’, representing the duration of each hand movement trial (using the hand_movement_onsets and hand_duration variables defined previously).
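A minimal sketch of the dataframe construction described here (column and variable names follow the surrounding cards; the exact notebook code is not shown):

```python
import pandas as pd

hand_duration = 16
hand_movement_onsets = list(range(6, 320, hand_duration * 2))

# One row per hand-movement trial; scalar values ('trial_type' and
# 'duration') are broadcast to every row by the DataFrame constructor.
hand_movement_df = pd.DataFrame({
    "trial_type": "hand_movement",
    "onset": hand_movement_onsets,
    "duration": hand_duration,
})
print(hand_movement_df.head())
```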

29
Q

Third step producing pandas dataframe

Explain the creation of the dataframe for the Vision condition- (2)

A

Similar to the Hand Movement dataframe, we create a dataframe for the Vision condition.

Each row represents a trial of the Vision condition. The dataframe contains three columns: ‘trial_type’, indicating the type of trial (vision); ‘onset’, containing the onset times calculated previously; and ‘duration’, representing the duration of each vision trial.

30
Q

What is the fourth step of producing the TSV file of the experiment, after constructing the onset and duration of each condition and producing a dataframe for each condition? - (2)

A

Finally, we stack (‘concatenate’) the two data frames on top of each other and save them out using pandas’ dedicated CSV writer.

We tell this routine to use ‘TAB’ rather than ‘COMMA’ as a separator by specifying the \t separator.

The .to_csv is a member function of the dataframe.
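A sketch of this concatenate-and-save step. Here io.StringIO stands in for a real file path (an assumption for illustration); in the notebook one would pass a path such as the tsv_path variable instead:

```python
import io
import pandas as pd

hand_movement_df = pd.DataFrame({
    "trial_type": "hand_movement",
    "onset": list(range(6, 320, 32)),
    "duration": 16,
})
vision_df = pd.DataFrame({
    "trial_type": "vision",
    "onset": list(range(0, 320, 20)),
    "duration": 10,
})

# Stack the two frames on top of each other, then write tab-separated
# (sep="\t") without the row indices (index=False).
conditions_df = pd.concat([hand_movement_df, vision_df])
buf = io.StringIO()  # stands in for a file path like 'conditions.tsv'
conditions_df.to_csv(buf, sep="\t", index=False)
tsv_text = buf.getvalue()
print(tsv_text.splitlines()[0])
# trial_type	onset	duration
```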

31
Q

Explain final step of constructing TSV - (9)

A

The final step involves combining the dataframes representing the Hand Movement and Vision conditions into a single dataframe named ‘conditions_df’.

This is achieved using the ‘pd.concat()’ function, which concatenates the two dataframes along their rows.

Once the dataframe is constructed, the next task is to save it as a TSV file.

For this purpose, a file path ‘tsv_path’ is defined, specifying the location and name of the TSV file to be created.

Then, the ‘to_csv()’ function is called on the ‘conditions_df’ dataframe.

This function writes the contents of the dataframe to a CSV file.

By setting the ‘sep’ parameter to ‘\t’, we specify that the file should be tab-separated, as required for a TSV file.

Additionally, ‘index=False’ is used to exclude row indices from the file.

This final step ensures that all trial information, including trial type, onset times, and durations, is saved into a properly formatted TSV file, ready to be used in further analysis.

32
Q

Produce a flashcard of the steps for constructing a TSV file - (8)

A
  1. Define Duration Variables:
    • Define variables for the duration of each stimulus condition (e.g., hand_duration for hand movement and vision_duration for vision).
  2. Calculate Onset Times:
    • Calculate onset times for each stimulus condition using appropriate intervals and durations (e.g., hand_movement_onsets and vision_onsets).
  3. Create Dataframes:
    • Create separate dataframes for each condition (hand movement and vision) using Pandas, specifying trial type, onset times, and duration columns.
  4. Combine Dataframes:
    • Combine the separate dataframes into a single dataframe (conditions_df) using ‘pd.concat()’, ensuring all trial information is aggregated.
  5. Define File Path:
    • Define a file path (tsv_path) specifying the location and name of the TSV file to be created.
  6. Save to TSV File:
    • Use the ‘to_csv()’ function on the combined dataframe (conditions_df) to save its contents to a TSV file.
  7. Set File Format:
    • Set the ‘sep’ parameter to ‘\t’ to specify tab-separated format for the TSV file, ensuring compatibility with Nilearn.
  8. Exclude Row Indices:
    • Use ‘index=False’ to exclude row indices from being included in the TSV file, maintaining a clean structure.
33
Q

When constructing the TSV file,

A

Note that although people often have the onsets roughly in chronological order in the stimulus file, this is not required. In principle, we can list all the motor stimuli, then all the vision stimuli.

34
Q

Explain this code - plotting the timecourse of each condition - (6)

A

This code snippet begins by importing the necessary Python libraries: Pandas for data manipulation and Matplotlib for data visualization. It then proceeds to read a TSV (Tab-Separated Values) file containing trial information into a Pandas dataframe using ‘pd.read_csv()’, where the ‘\t’ (TAB) separator is specified to ensure proper parsing of the file.

Next, it defines the total duration of the experiment (320 seconds) and initializes two lists, ‘time_series[‘hand_movement’]’ and ‘time_series[‘vision’]’, with zeros. These lists will be used to represent the time courses for hand movement and vision conditions, respectively.

The code iterates through each row of the dataframe using the ‘iterrows()’ function, extracting the onset and duration of each event. For each event, it calculates the start and end time points.

Subsequently, it identifies the type of event (hand movement or vision) and updates the corresponding elements in the time series lists to 1, indicating the presence of the stimulus during the event’s duration.

Once the time series data is populated, it plots the time courses using Matplotlib. The x-axis represents time in seconds, while the y-axis indicates stimulus activity (0 for inactive and 1 for active). The plot visually represents the temporal patterns of stimulus presentation for both hand movement and vision conditions throughout the experiment.

Overall, this code snippet demonstrates how to read trial information from a TSV file, process the data to create time courses, and visualize the temporal dynamics of stimulus presentation during an fMRI experiment.
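The processing the card describes can be sketched as below. The events table and variable names are assumptions based on the description, and the matplotlib plotting call is indicated only in a comment:

```python
import pandas as pd

# Events table as it would come back from pd.read_csv(tsv_path, sep="\t");
# shortened to one trial per condition for illustration.
events = pd.DataFrame({
    "trial_type": ["hand_movement", "vision"],
    "onset": [6, 0],
    "duration": [16, 10],
})

experiment_length = 320  # seconds
time_series = {
    "hand_movement": [0] * experiment_length,
    "vision": [0] * experiment_length,
}

# Walk the rows and mark each second during which a stimulus was 'on' with 1.
for _, row in events.iterrows():
    start = int(row["onset"])
    end = start + int(row["duration"])
    for t in range(start, end):
        time_series[row["trial_type"]][t] = 1

# Plotting with matplotlib would then be, e.g.:
#   plt.plot(time_series["hand_movement"]); plt.plot(time_series["vision"])
print(sum(time_series["hand_movement"]), sum(time_series["vision"]))
# 16 10
```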

35
Q

Compare the plot of the stimulus time course with the actual fMRI data - (6)

First produced from TSV and another using nibabel from data of time series of two voxels

A

The stimulus time course, represented by the plot in the previous flashcard, is more blocky and binary in nature, with values restricted to either 0 or 1 to indicate the presence or absence of the stimulus.

In contrast, the plot of the actual fMRI data depicts the time-series signals from two voxel positions within the brain (visual and motor cortex).

These signals exhibit more dynamic behavior, reflecting the complex hemodynamic response associated with neural activity.

The fMRI signals show fluctuations over time, influenced by various physiological and neurological factors.

The stimulus time course serves as a simplified representation of the experimental conditions, while the fMRI data provides a more detailed view of brain activity patterns in response to these conditions.

Before performing the General Linear Model (GLM) analysis, the stimulus time course is convolved with a hemodynamic response function (HRF) to simulate the expected fMRI response to the experimental stimuli.

36
Q

Why do we use a step size of duration*2 when making lists of onsets? Provide a potential scenario where this approach might not be correct, particularly in event-related designs. - (4)

A

Using a step size of duration*2 ensures events are properly spaced in time series data.

However, it may not suit event-related designs where stimuli overlap or have variable durations, potentially leading to inaccuracies in timing representation.

The situation where it is not correct is in event-related designs, where brief stimuli from different conditions are presented.

It is more suited to a block design with a 50% duty cycle, with blocks of stimuli coming on and off.

37
Q

We now have all the elements to run an L1 analysis with nilearn of… and bring them together using the magic of Nilearn! - (2)

A

A nifti file (with some fMRI data) and

a TSV file with onsets and durations.

38
Q

We are going to use a slightly different version of the functional data for this section. It is the same data that we used before but

A

aligned to an MNI 152 brain

39
Q

We are going to use a slightly different fMRI data set that is aligned to MNI brain and going to load the image using nilearn:

Explain this code snippet:

A
  • Imports the ‘image’ class from the nilearn library, which deals with fMRI data.
  • Loads an fMRI dataset from the specified path using the ‘load_img’ function.
  • Stores the loaded fMRI image in the variable ‘fmri_img’.
  • Prints the shape of the loaded fMRI image, which includes information about its dimensions (e.g., number of TRs).
40
Q

Explain output of this code snippet: (45,54,45,160)

A

So the volume size is slightly different: 45x54x45 (the size of one functional volume), but the data are still 160 volumes long.

41
Q

We can also look at a single image from functional volume using nilearn from our functional data aligned to MNI brain by this code

explain it- (6)

A

Imports the ‘plotting’ module from nilearn, which contains functions for visualizing neuroimaging data.

  • Imports the ‘image’ module from nilearn, which deals with loading and manipulating neuroimaging data.
  • Selects the first volume (timepoint) of an fMRI image using ‘image.index_img’.
  • Plots the selected volume using ‘plotting.plot_epi’, which displays one slice of the brain at one moment in time.
  • Includes a color bar in the plot to indicate signal intensity.
  • Displays the plotted image using ‘plotting.show()’.
42
Q

What is output of this code and explain it - (6)

A

These are the raw EPI amplitudes for a single TR of fMRI data. You are not seeing the functional responses to the stimuli here - just the mean signal levels.

The functional signals are modulations of about 1% in the mean signal.

Notice that the amplitude of the signal drops from the outside to the inside of the brain.

This is driven by two things:

1: The BOLD amplitude is biggest in the gray matter (and the surface blood vessels).

2: The coils are most sensitive to things that are near them - and the centre of the brain is far away.

43
Q

We can also load and plot T1 anatomical data using image from nilearn and plotting from nilearn as well

explain this code - (6)

A
  • Imports the ‘image’ module from nilearn, which handles loading and manipulating neuroimaging data.
  • Imports the ‘plotting’ module from nilearn, containing functions for visualizing neuroimaging data.
  • Specifies the path to the anatomical (structural) MRI image file.
  • Loads the anatomical image using ‘image.load_img’.
  • Plots the loaded anatomical image using ‘plotting.plot_anat’.
  • Displays the plotted anatomical image, providing a visual representation of the brain’s structure.
44
Q

plot_img instead of plot_epi but otherwise it’s the

A

same…

45
Q

Output of this code

A
45
Q

Can do the analysis in Spyder by changing the following and adding this code after loading one image from the functional data - (8)

A

subject_anat_path = '/pin-material/s7_fmri//highres.nii.gz' # You have to change this bit

This code Creates an interactive HTML view of the specified fMRI image (‘first_TR’) using Nilearn’s ‘view_img’ function.

  • Sets visualization parameters such as threshold (2000), maximum intensity value (vmax) of 30000, and coordinates for cutting planes.
  • Specifies the title of the HTML view as “Raw fMRI”.
  • Disables the background image using ‘bg_img=False’.
  • Opens the interactive HTML view in a web browser using ‘html_view.open_in_browser()’, allowing you to browse the data in a web browser.
  • Allows interactive exploration of the fMRI data in the web browser, providing a convenient way to inspect the image with adjustable settings.
  • Provides a note about the alternative approach of saving the data to disk and loading it using the nibabel library.
46
Q

What are the two main components needed to run a General Linear Model (GLM) analysis in neuroimaging? - (2)

A
  1. The functional data.
  2. A file indicating when events occurred.
47
Q

What is the purpose of the design matrix in GLM analysis?

A
  • Providing simulated timecourses representing different events.
  • Facilitating regression analysis at each voxel to determine the contribution of each event to the measured timecourse.
48
Q

What is a ‘beta value’ in the context of GLM analysis? - (2)

A
  • The amount of each simulated timecourse needed to explain the measured timecourse at each voxel.
  • The contribution of each event to the measured timecourse.
49
Q

What is the process involved in simulating a BOLD timecourse using the design matrix? - (2)

A
  • Convolving the onsets and offsets of the stimulus with an estimate of the hemodynamic response function (HRF).
  • Generating a comprehensive list of simulated BOLD timecourses, typically plotted with time running downwards.
50
Q

How does Nilearn facilitate the creation of the design matrix? - (2)

A
  • Automatically generating it from a .TSV file containing event timing information.
  • Streamlining the entire analysis process, encouraging users to perform the analysis in a single workflow.
51
Q

Whats a first-level model? - (2)

A

A first-level model focuses on characterizing brain activity patterns at the level of individual subjects, typically using observed fMRI data from a single subject.

  • In contrast, a second-level model aggregates data across multiple subjects to make inferences at the group level, such as identifying consistent effects across individuals or comparing groups.
52
Q

How to install nilearn?

A

!pip install nilearn - not import nilearn

53
Q

Building a first level-analysis model using GLM in nilearn

Explain this code - (6)

A
  • Imports necessary modules from Nilearn for setting up and fitting a first-level General Linear Model (GLM) to functional MRI (fMRI) data.
  • Loads event information from a TSV file into a pandas DataFrame, representing experimental conditions and timing.
  • Loads fMRI data from a specified path using Nilearn’s load_img function.
  • Determines the repetition time (TR) of the fMRI data, a crucial parameter in fMRI analysis.
  • Sets up a FirstLevelModel object, specifying TR and high-pass filtering parameters for the GLM analysis.
  • This code snippet prepares the necessary components for conducting a first-level analysis of fMRI data, including loading data, defining experimental conditions, and setting up the GLM model.
54
Q

We have built a ‘first level model’ in nilearn. It is empty so far - we need to feed it with data, but we can look at its parameters before we do that. - (3)

A

For example, by default it uses an estimate of the hemodynamic response function that is specified in an old paper from Gary Glover at Stanford:

We know this because we can ask about the hrf_model:

Here are the two main models you can use (‘SPM’, ‘glover’ - which are about the same) - plus another one (‘MION’) which is super-weird and used only for a special type of contrast agent.

55
Q

Explain what this code does, feeding our first-level model with fMRI data - (3)

A

This code snippet fits the first-level GLM model to the fMRI data by calling the .fit member function of the FirstLevelModel object.

  • The .fit function takes two main arguments: the fMRI data (fmri_img) and the events dataframe (events) containing experimental conditions and timing information.
  • By calling .fit, the GLM analysis is performed, and the design matrix is built ‘on the fly’, meaning it is constructed dynamically during the fitting process.
56
Q

After fitting fMRI data to GLM, we can access the design matrix:

Explain this code - (4)

A
  • After fitting the first-level GLM model to the fMRI data, the design matrix can be accessed using the ‘design_matrices_’ attribute of the FirstLevelModel object.
  • In this code snippet, the design matrix is obtained from the first element (index 0) of the list of design matrices.
  • The design matrix represents the relationship between the experimental conditions (e.g., hand movement, vision) and the observed fMRI data, with each row typically corresponding to a time point (TR) and each column representing a different experimental condition or regressor.
  • The design matrix is visualized using Nilearn’s plot_design_matrix function, providing insight into how the experimental design is encoded in the GLM analysis.
57
Q

Output of this code

A
58
Q

Explain this code producing contrasts of beta values obtained from GLM model - (5)

A
  • This code snippet computes contrasts using the beta values obtained from the fitted first-level GLM model.
  • Two contrast vectors, contrast_motor and contrast_vision, are created as numpy arrays of zeros, with the length equal to the number of columns in the design matrix.
  • A value of ‘1’ is set at the index corresponding to the trial type of interest in each contrast vector, allowing examination of the beta values associated with specific trial types.
  • The compute_contrast method of the FirstLevelModel object is then used to compute contrasts based on these vectors, producing beta maps for the specified trial types (motor and vision).
  • These beta maps provide information about the effect size of each trial type relative to the baseline or reference condition.
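The contrast construction can be sketched with plain numpy. The column list here is hypothetical (in practice it comes from design_matrix.columns after fitting), and the compute_contrast call is shown only as a comment because it needs a fitted model:

```python
import numpy as np

# Hypothetical design-matrix column order: conditions first, then
# drift terms and a constant (the real list comes from design_matrix.columns).
columns = ["hand_movement", "vision", "drift_1", "constant"]

# One zero-filled vector per contrast, with a 1 at the column of interest.
contrast_motor = np.zeros(len(columns))
contrast_motor[columns.index("hand_movement")] = 1

contrast_vision = np.zeros(len(columns))
contrast_vision[columns.index("vision")] = 1

# With a fitted model, the beta maps would then come from, e.g.:
#   beta_map_motor = first_level_model.compute_contrast(
#       contrast_motor, output_type="effect_size")
print(contrast_motor, contrast_vision)
```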
59
Q

We have extracted the beta maps into two arrays. We have lots of options for plotting them. Here, for example is nilearn’s ‘mosaic’ option for plotting a stat_map

Explain the code - (6)

A
  • This code snippet utilizes Nilearn’s plot_stat_map function to visualize the beta maps computed for the motor and vision stimuli.
  • Each plot_stat_map call plots a beta map for a specific stimulus, with titles indicating the corresponding trial type (‘motor’ or ‘vision’).
  • The ‘mosaic’ option for the display_mode parameter plots the beta maps in sagittal, coronal, and axial views simultaneously.
  • The black_bg=True parameter sets the background color of the plot to black.
  • A threshold value of 0.25 is applied to display only voxels with absolute effect sizes greater than or equal to 0.25.
  • The plotting.show() function displays the plotted beta maps.
60
Q

Output of this code

A

1: The data are automatically overlaid on an MNI brain. So we have lots of anatomical landmarks to help us see where we are. We can do this because we are using a version of the functional data that we have aligned to the MNI brain already (using FSL’s FLIRT command).

2: The subject was instructed to use their right hand to tap with. You might therefore expect to see motor cortex activity only on the left - but it’s present in both hemispheres (stronger on the left). I don’t know why this is but it is a common observation.

3: The visual stimulus flickered on the left side of the screen and this response is localized to the right hemisphere. That is what we expect from the strict retinotopy of the visual cortex.

4: You can see small regions in visual cortex that respond to both motor and vision stimuli. Again, not clear why - perhaps the motor activity was cued by a visual cue?

61
Q

First step of doing first-level analysis in nilearn -overall:

from nilearn.plotting import plot_design_matrix
from nilearn.glm.first_level import FirstLevelModel
import numpy as np
import pandas as pd

  • making TSV file - (6)
A
  • This code snippet demonstrates the process of creating a TSV (Tab-Separated Values) file for defining experimental conditions and their timing in an fMRI experiment.
  • It involves defining the paths for the fMRI data (subject_data_path) and the TSV file (tsv_path).
  • Duration values for each stimulus block (hand_duration and vision_duration) are specified in seconds.
  • Onset times for the ‘hand_movement’ and ‘vision’ conditions are generated using lists created with the range() function, representing the start times of each stimulus block.
  • Dataframes (hand_movement_df and vision_df) are created for each condition, specifying columns such as ‘trial_type’, ‘onset’, and ‘duration’.
  • These dataframes are then concatenated into a single dataframe (conditions_df) representing all experimental conditions.
  • Finally, the dataframe is saved to a TSV file (tsv_path) using the to_csv method, with ‘\t’ as the separator and index=False to omit row indices.
62
Q

Second step of doing first-level analysis in nilearn -overall:

loading fMRI data and TSV file and do GLM

(9)

A
  • This code snippet continues the analysis process after creating the TSV file and loading fMRI data.
  • The TSV file containing experimental conditions and timing information is loaded into a pandas DataFrame (events) using pd.read_table.
  • Functional MRI (fMRI) data is loaded from the specified path (subject_data_path) using Nilearn’s load_img function.
  • The repetition time (TR) of the fMRI data is determined either directly or from the image header.
  • A FirstLevelModel object is instantiated to set up the first-level GLM analysis, specifying parameters such as TR and high-pass filtering.
  • The GLM model is fitted to the fMRI data using the fit method of the FirstLevelModel object, with the events DataFrame supplied as input.
  • The design matrix of the fitted GLM model is obtained from the design_matrices_ attribute.
  • Contrasts are defined and computed using the compute_contrast method of the FirstLevelModel object, producing beta maps for specific stimuli.
  • Finally, beta maps for motor and vision stimuli are visualized using Nilearn’s plot_stat_map function, with the ‘mosaic’ display mode.
63
Q

print(first_level_model.hrf_model) - (2)

A

This code snippet prints the hemodynamic response function (HRF) model used by the first-level model.

By default, Nilearn’s FirstLevelModel uses the Glover slow hemodynamic response function.

64
Q

Fit L1 GLM:
first_level_model.fit(fmri_img, events=events) - (2)

A

This code snippet fits the first-level GLM model to the fMRI data and event information using the .fit method of the FirstLevelModel object.

Upon calling this function, Nilearn performs a full L1 GLM analysis.

65
Q

Installing Nilearn - (2)

A

Nilearn can be installed using pip, a package manager for Python.

To install Nilearn, you can run the command ‘pip install nilearn’ in your terminal or command prompt.

After installation, you can import Nilearn in your Python scripts to use its functionalities for neuroimaging data analysis.

66
Q

ModuleNotFoundError - (2)

A

A ModuleNotFoundError occurs when Python cannot find a module required by the script.

In this case, the error likely occurred because the Nilearn module was not installed in the Python environment.

67
Q

Q1
I am using nilearn to run a GLM on a nifti file fmri_img.

I have generated an events structure, events, from a .tsv file output while the scan ran. The scanner collected individual data volumes (64x64 inplane resolution, 40 slices) once every 3s. The code I am running to perform the GLM is this:

The code runs but the resulting contrast maps seem to show only noise. I was expecting a large and reliable effect and it is not there. What error have I have made?

A

The TR of the stimulus was 3s but you have specified it as 5s in the model.

68
Q

Q2
I am generating a .TSV file to store timing information for a nilearn analysis. The experiment had two visual stimuli: one on the left and one on the right. On the left, the first stimulus appeared at 6s and the stimulus duration was 10s. On the right the offset was 0s and the duration was 12s. The TR of the experiment was 2s and the scanner ran for 160 TRs.

Part of the TSV generation code looks like this:

A

The durations have been specified in TRs but they should be in seconds. They are therefore half as long as they should be.

69
Q

Explain what will happen in overall experiment - (3)

A

For the left visual flicker condition:
- The stimuli start appearing at 6 seconds into the experiment and alternate every 24 seconds (12 blocks), with 12 seconds of flickering followed by 12 seconds without flickering.

For the right visual flicker condition:
- The stimuli start appearing immediately at the beginning of the experiment (0 seconds) and alternate every 24 seconds (12 blocks), with 12 seconds of flickering followed by 12 seconds without flickering.

The stimulus on the left lags that on the right by 6s.

70
Q

Q4
I am about to start an analysis using nilearn. I run this code but get error message

What has probably gone wrong and how do I fix it?

A

You have not installed the ‘nilearn’ module. Use pip install nilearn to install it.

71
Q

Q5
If I have three different stimulus event types in my .TSV file, and I set the high_pass parameter to 0.0001 in a 320s experiment (as above), how many columns do I expect to see in the design_matrix?

A

Four: Three columns for the events and 1 for the constant amplitude. The very low ‘high_pass’ parameter means you will not model any slow drifts.