Session 11 - MEG MNE Flashcards

1
Q

What is MNE?

A

a Python module for analysing MEG and EEG data.

2
Q

Much of the code below will run in Spyder, for example - (2)

A

the interactive plots are not interactive in Colab.

We will try to run our code in Spyder today but use Colab for ‘testing’

3
Q

Previously, we have learned to install additional python modules using

A

‘pip’

4
Q

To install MNE on Google Colab or Windows/Spyder machines, we enter:

A
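
The answer is not shown on this card; based on the previous and next cards, it is most likely the pip command below (the Colab form with a leading ‘!’ is an assumption about how it was presented):

pip install mne       # in the Spyder terminal
!pip install mne      # in a Google Colab code cell
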
5
Q

To install MNE in Spyder, you type this (‘pip install mne’) in the - (2)

A

terminal at the bottom right.

Remember to ‘restart kernel’ after you do this. You might also have to set the Graphics output type to ‘Qt’ to ensure plots in Spyder are interactive.

6
Q

MEG data come in a variety of formats. Most of the existing data which is available at YNiC is from a

A

4-D system.

7
Q

All the file paths in this section are relative to the s8_meg directory in the material directory, so you need to change the directory you are working in within Spyder, or replace them with full paths such as C:\Users\gs1211\ . . .

A
8
Q

This code is saying that:

A

This code fetches a specific branch (s8_meg) from a Git repository (https://vcs.ynic.york.ac.uk/cn/pin-material.git) and clones it into the current directory
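
A sketch of the kind of command being described (the Colab ‘!’ prefix and the exact branch flag are assumptions; the repository URL is the one named above):

!git clone --branch s8_meg https://vcs.ynic.york.ac.uk/cn/pin-material.git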

9
Q

We can check git repository is there by saying:

A

This command lists the contents of the directory s8_meg within the pin-material directory, showing details such as permissions, ownership, size, and modification time, with the most recently modified files appearing first.
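
A sketch of such a check (‘!’ runs a shell command in Colab; the exact flags are an assumption, but ls -lt gives a long listing sorted by modification time):

!ls -lt pin-material/s8_meg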

10
Q

We have loaded in the MNE python module, and we have also git-cloned some MEG data. Now we can use a

A

python script to look at the data:

11
Q

First we want to change to the directory where s8_meg lives, using the following code - (2):

A

This code imports the Python libraries ‘os’ and ‘mne’.

Then, it changes the current working directory to ‘/content/pin-material/s8_meg/’ using the ‘os.chdir()’ function.
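
A minimal sketch of the code being described:

import os
import mne

os.chdir('/content/pin-material/s8_meg/')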

12
Q

For Spyder, to change directory we can change

os.chdir('/content/pin-material/s8_meg/') to…

A

You need to change this on Windows - something like C:\Users\aw890\pin-material\s8_meg
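
For example (the exact path is an assumption; the raw string avoids problems with backslashes):

import os
os.chdir(r'C:\Users\aw890\pin-material\s8_meg')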

13
Q

What is the difference between the use of backslashes (\) and forward slashes (/) in file paths?

A

Backslashes (\) are typically used in Windows file paths, while forward slashes (/) are used in Unix-based systems like Linux or macOS.

14
Q

What is the main difference between the organization of MEG data and fMRI data?

A

MEG data is organized by sensors, whereas fMRI data is organized by voxels in the brain.

15
Q

How are sensors positioned in MEG data collection?

A

Sensors in MEG are fixed in space within a helmet, detecting brain activity at a distance.

16
Q

What is the purpose of registration in MEG data analysis?

A

Registration ensures accurate positioning of sensors relative to the head and brain.

17
Q

What is the role of a 3D digitizer like a ‘Polhemus’ in MEG data analysis?

A

A 3D digitizer like a ‘Polhemus’ measures head and facial features to facilitate accurate sensor positioning.

18
Q

What is the next step after examining position data in an old MEG dataset?

A

The subsequent step involves analyzing individual MEG timecourse data.

19
Q

What is the purpose of the following code snippet?

A

This code reads MEG data from a file and extracts different parts of the MEG dataset for analysis.

20
Q

What function is used to read the MEG data from a file from mne?

A

The mne.io.read_raw_bti() function from the MNE module is used to read the MEG data from the specified file.

21
Q

What does this line of code mean?

raw = mne.io.read_raw_bti('./R1025_P1119a_4/c,rfDC') - (2)

A

This line of code reads MEG data from a file of participant R1025_P1119a_4 using the read_raw_bti function from the MNE module.

The specified file path is ‘./R1025_P1119a_4/c,rfDC’.

22
Q

What is output of

raw = mne.io.read_raw_bti('./R1025_P1119a_4/c,rfDC')

print(raw)

A
23
Q

What does the raw.info object contain?

if raw = mne.io.read_raw_bti('./R1025_P1119a_4/c,rfDC')

A

The raw.info object contains basic information about the MEG data, such as the number of channels and device information.

24
Q

Outputting

print(raw.info)

raw - loaded mne data

A
25
Q

The raw object containing meg data loaded via mne

has a member called info which is.. - (2)

A

This is an mne Info structure which behaves something like a dictionary.

It has a number of elements which refer to details about the scan (rather than the raw data).

26
Q

What does this output show of raw.info? - (7)

A

Going from top to bottom, we can see that at present there are no bads set: i.e. we have not configured bad channels yet.

There is a list of channel names in the ch_names attribute, and the chs attribute is actually a list of all channel details; when printed, it simply shows a summary of what types of channels we have.

On our MEG system, the main MEG channels are referred to as A1 to A248. When loading the data into MNE, these are mapped into MEG 001 to MEG 248; the reference channels are also remapped in the same way. We therefore need to remember to refer to the channel names in the way that MNE expects.

The next few lines refer to the transform which is calculated based on the “Coil-on-Head” calculations and the movement into the standard space that MNE uses: we ignore this for now.

Under dig, we see that we have 1558 digitisation points: these come from the Polhemus head digitisation system which is used with the 4D MEG scanner.

We can also see that the data were filtered between 0 and 339.1Hz.

We then get some information about the date of the scan, the total number of channels and then the sfreq: the sampling frequency.

27
Q

What valuable information does raw.info.keys() provide? - (2)

A

By printing the keys available in the raw.info structure, we gain insight into the metadata associated with the MEG data being analyzed.

This metadata may include information such as acquisition parameters, stimulus information, device configuration, channel information, data processing history, and more.

28
Q

What does this following code snippet do? - (4)

A

It accesses the ‘dig’ entry in the raw.info structure.

This code snippet extracts digitisation data from the MEG dataset, storing it in a variable named ‘dig’.

It then prints the type of data contained in ‘dig’ and displays the first few elements of the digitisation data, providing insight into the location of critical head structures.

Finally, it prints the total number of elements in the digitisation data.
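
A sketch of the snippet being described, reconstructed from the following cards:

dig = raw.info['dig']
print(type(dig))
print(dig[0:5])
print(len(dig))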

29
Q

What does the line dig = raw.info['dig'] accomplish? - (2)

A

This line extracts digitisation data from the MEG dataset, specifically the set of 3D points in space that indicate the location of critical head structures.

It stores this data in a variable named ‘dig’.

30
Q

What does the code print(type(dig)) do?

A

The code print(type(dig)) prints the type of data contained in the variable ‘dig’, which is <class ‘list’>

31
Q

What information does the line print(dig[0:5]) provide? - (2)

A

The line print(dig[0:5]) displays the first five elements of the digitisation data stored in the variable ‘dig’.

These elements represent 3D points indicating critical head structures, such as the nasion and the preauricular points, as well as extra points recorded as the stylus was stroked over the head.

32
Q

What does the code print(len(dig)) show?

A

The code print(len(dig)) prints the total number of elements in the digitisation data, providing insight into the overall size

33
Q

Explain the output - (3)

[<DigPoint | LPA : (-74.6, 0.0, 0.0) mm, ….., Extra #2 : (-19.5, 66.3, 39.6) mm…]

A

We see that the digitisation points are stored as a list of objects of type DigPoint.

If we want to plot these data, we will need to extract them.

We see that the first three are different from the rest, which seem to be called Extra X.

Those first three points define the coordinate system and, although they are interesting and useful, we don't need them right now. Instead we will focus on the ‘Extra’ points, which contain the shape of the head.

34
Q

What does the code ex_1 = dig[3] do? - (2)

A

The code ex_1 = dig[3] extracts a digitisation point from the digitisation data stored in the variable ‘dig’.

Specifically, it selects the digitisation point after the first three points and assigns it to the variable ‘ex_1’.

35
Q

What does the code print(type(ex_1)) reveal?

A

The code print(type(ex_1)) prints the class of the object stored in the variable ‘ex_1’, showing that it is an MNE DigPoint object, which represents a single digitisation point.

36
Q

What does the code print(dir(ex_1)) accomplish? - (2)

A

The code print(dir(ex_1)) lists all methods and attributes associated with the digitisation point stored in the variable ‘ex_1’.

However, the output can be cluttered, making it difficult to discern relevant information.

37
Q

What does ([x for x in dir(ex_1) if not x.startswith(‘_’)]) do? - (2)

A

The code [x for x in dir(ex_1) if not x.startswith('_')] produces a cleaner version of the list by filtering out methods and attributes that start with an underscore (‘_’).

This list comprehension generates a new list containing only the methods and attributes accessible for the digitisation point, making it easier to navigate.

38
Q

What does the method startswith() accomplish? - (2)

A

The method startswith() checks whether a string starts with a specified prefix.

It returns True if the string starts with the prefix, and False otherwise.

39
Q

These are all

A

a lot of private methods (things which start with an underscore, “_something”) which clutter up our view, so we decide to exclude them using a list comprehension.

40
Q

what does this mean:

[x for x in dir(ex_1) if not x.startswith('_')]

A

It says that we are going to loop over every item in dir(ex_1) and we will keep it (x for x in) if it doesn’t start with an underscore: (if not x.startswith(‘_’)).

41
Q

What does this code do? - (2)

A
  1. It first prints the data type (float) and the value of the sampling frequency (sfreq) associated with the MEG data.
  2. Then, it prints the data type (list) and the list of channel names (ch_names) in the MEG data.
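
A sketch of the kind of code being described (the exact form of the original snippet is an assumption):

print(type(raw.info['sfreq']), raw.info['sfreq'])
print(type(raw.info['ch_names']), raw.info['ch_names'])
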
42
Q

Output of this code

A
43
Q

What does the code print(ex_1.keys()) accomplish?

A

The code print(ex_1.keys()) prints out the keys associated with the digitisation point ex_1.

44
Q

What does the code print(ex_1['r']) display?

A

The code print(ex_1['r']) prints out the value of the ‘r’ key associated with the digitisation point ex_1, which is a tuple giving the x, y, and z coordinates of the point the Polhemus stylus was pointing at.

45
Q

What does the code print(ex_1['ident']) reveal? - (2)

A

The code print(ex_1['ident']) displays the value associated with the ‘ident’ key for the digitisation point ex_1.

In this case, it indicates the identification of the point, which might not hold significant information.

46
Q

What information does the code print(ex_1['kind']) provide? - (2)

A

The code print(ex_1['kind']) reveals the type of point represented by the digitisation point ex_1.

It indicates that the point is of type ‘FIFFV_POINT_EXTRA’, which suggests that it is an extra point originating from the Polhemus.

47
Q

What does the code print(ex_1['coord_frame']) indicate? - (2)

A

The code print(ex_1['coord_frame']) displays the coordinate frame associated with the digitisation point ex_1.

In this case, it indicates that the coordinates are referenced to the head coordinate system (‘FIFFV_COORD_HEAD’).

48
Q

What is the purpose of the following code snippet? - (2)

A

The purpose of this code snippet is to extract digitisation points from the ‘digitisation’ data stored in the variable ‘dig’.

It specifically selects points originating from the Polhemus device and stores their coordinates in a numpy array.
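
A sketch of the snippet being described, reconstructed from the following cards:

import numpy as np

out_pts = []
for pt in dig:
    # keep only the 'extra' points which come from the Polhemus
    if pt['kind'] == mne.io.constants.FIFF.FIFFV_POINT_EXTRA:
        out_pts.append(pt['r'])

out_pts = np.array(out_pts)
print(f'Here are the dimensions of the point array: {out_pts.shape}')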

49
Q

What does the line if pt['kind'] == mne.io.constants.FIFF.FIFFV_POINT_EXTRA: check for? - (2)

A

The line if pt['kind'] == mne.io.constants.FIFF.FIFFV_POINT_EXTRA: checks whether a digitisation point is of the type ‘FIFFV_POINT_EXTRA’, indicating that it originated from the Polhemus device.

These are the types (or ‘kind’) of things we want! They are ‘extra’ because they come from the Polhemus and not the scanner hardware itself.

50
Q

What does the code out_pts.append(pt['r']) accomplish?

A

The code out_pts.append(pt['r']) takes the coordinates of each digitisation point originating from the Polhemus device, as a tuple of (x, y, z) specifying its location in 3D space, and appends them to a list called ‘out_pts’.

51
Q

What does the line print(f'Here are the dimensions of the point array: {out_pts.shape}') display? - (3)

A

prints the dimensions of the numpy array containing the digitisation points.

Here are the dimensions of the point array: (1555, 3)

So, we finish with 1555 Polhemus points. We started (back in the info structure) with 1558 points, and found that 3 of them were “cardinal points”, so this makes sense.

52
Q

The three reference points typically found in digitisation data that we don’t want are: - (3)

A

Nasion: The point at the bridge of the nose.

Left preauricular: The point in front of the left ear.

Right preauricular: The point in front of the right ear.

53
Q

What does this code do?- - (3)

A

This code snippet utilizes numpy and matplotlib to create a 3D scatter plot of digitisation points stored in the numpy array ‘out_pts’.

It first creates a figure and an axis with 3D projection, extracts the x, y, and z coordinates from ‘out_pts’, and then plots these points as a scatter plot in 3D space using the ‘scatter’ function.

Finally, it displays the plot.
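
A sketch of the plotting code being described, assuming out_pts from the previous step:

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d  # enables the 3D projection

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
x = out_pts[:, 0]
y = out_pts[:, 1]
z = out_pts[:, 2]
ax.scatter(x, y, z)
plt.show()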

54
Q

from mpl_toolkits.mplot3d import axes3d

explain - (2)

A

This line imports the axes3d module from the mpl_toolkits.mplot3d package.

The axes3d module provides functions and classes for creating and manipulating 3D plots in Matplotlib.

55
Q

Output of this code

A
56
Q

At this point we want to interact with the plot - maybe use Spyder - but we can also do 3D plotting in the browser like this:

A

Code:
import plotly.express as px

fig = px.scatter_3d(x=x, y=y, z=z, opacity=.5)
fig.show()

57
Q

What does this code do? - (3)

A

This code snippet uses Plotly Express to create a 3D interactive scatter plot.

It takes the x, y, and z coordinates of the points and creates a 3D scatter plot with a specified opacity level.

The fig.show() function then displays the interactive plot.

58
Q

What does this code do? - (2)

A

This code snippet uses Plotly Graph Objects to render a surface plot.

It creates a mesh plot using the x, y, and z coordinates of the points, setting the color to gray and opacity to 1.

The fig.show() function then displays the rendered surface plot.
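
A sketch using Plotly Graph Objects as described above; the exact Mesh3d arguments used in the original are an assumption:

import plotly.graph_objects as go

fig = go.Figure(data=[go.Mesh3d(x=x, y=y, z=z, color='gray', opacity=1)])
fig.show()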

59
Q

What does the code raw.plot_sensors(kind='3d')
plt.show() do? - (3)

A

This code utilizes the plot_sensors function from the MNE library to create a 3D plot showing the relative positions of all sensors in the MEG data.

Each sensor’s position is represented in three dimensions.

60
Q

What does the code raw.plot_sensors() do?
plt.show() - (2)

A

This code uses the plot_sensors function from the MNE library to create a 2D plot showing the relative positions of all sensors in the MEG data.

Each sensor’s position is represented in two dimensions.

61
Q

What library is imported with the statement from matplotlib import figure?

A

The statement from matplotlib import figure imports the figure module from the Matplotlib library. This module provides the functionality to create and customize figures for plotting.

62
Q

What does the code raw.plot() do? - (2)

A

The code raw.plot() utilizes core routines from the MNE library to create a plot showing the raw MEG data.

This plot displays the burst of energy detected by each sensor across time, providing an overview of the MEG data.

63
Q

What observation is made about the behavior of raw.plot() on certain platforms like Colab?

A

On certain platforms like Colab, the interactive window might be plotted twice when using raw.plot(), and it may not be interactive as expected.

64
Q

What does the comment “# producing the burst of energy of heart at every sensor across time - something we would want to remove” suggest about the plot generated by raw.plot()? -(2)

A

The comment suggests that the plot generated by raw.plot() displays cardiac artefacts - signals which come from the heart rather than the brain.

These artifacts are undesirable and may need to be removed from the data.

We can also see very fast ripples, which are called line noise - noise coming from the electromagnetic environment (e.g., the lighting in the MEG room) - which we want to take out as well.

65
Q

What does the code raw.pick_types(meg=True).plot(butterfly=True) accomplish?

A

raw.pick_types(meg=True).plot(butterfly=True) selects only the MEG channels from the raw data and plots them in a ‘butterfly’ format

In the butterfly plot, all channels are overlaid on a single set of axes, allowing for easy comparison of signals across channels.

This format is particularly useful for visualizing MEG data, as it helps to identify patterns and abnormalities in the signals.

66
Q

What observation is made about the noise in the data shown below the code snippet? - (2)

A

The comment suggests that there is a significant amount of noise present in the data shown below the code snippet.

This noise can obscure the underlying signal and may need to be filtered or removed for further analysis.

67
Q

Need to also import:

from matplotlib import figure for

A

MEG butterfly plot

68
Q

How do you plot 50 seconds worth of MEG data using the raw.plot() function? - (2)

A

To plot 50 seconds worth of MEG data, use the duration option in the raw.plot() function: raw.plot(duration=50).

This will zoom out to display a longer segment of the data.

69
Q

How do you plot 1 second of MEG data starting at 6 seconds using the raw.plot() function? - (2)

A

To plot 1 second of MEG data starting at 6 seconds, use the start and duration options in the raw.plot() function:

raw.plot(start=6, duration=1). This will zoom in to display a short, specific segment of the data.

70
Q

What does raw.plot(start=6, duration=1) plot? - (2)

A

plots a standard time series of the MEG data for all selected channels, showing the data from the 6th second to the 7th second.

Each channel’s data is displayed separately, rather than overlaid as in a butterfly plot

71
Q

What effect does plotting 1 second of data starting at 6 seconds have on the visualization? - (2)

A

Plotting 1 second of data starting at 6 seconds zooms in on a short, specific segment of the data, allowing for finer detail to be seen.

This can help in identifying and analyzing individual events, such as heartbeats picked up by a single sensor.

72
Q

How can you mark a channel as bad in the interactive MEG data plot in Spyder, and how can you check which channels are marked as bad? - (2)

A

You can mark a channel as bad by selecting it in the interactive plot.

After closing the plotting window, you can check which channels are marked as bad by running print(raw.info['bads']).

The marked channels will be listed in raw.info['bads'].

73
Q

How do you programmatically clear the list of bad channels in an MNE raw dataset?

A

You can clear the list of bad channels by setting raw.info['bads'] = [], which will empty the list and unmark any previously marked channels as bad.

74
Q

Why might you want to “chop” MEG data into trials or epochs, and where is the information for doing this usually stored? - (2)

A

You might want to chop MEG data into trials or epochs to analyze specific segments of the data that correspond to experimental conditions or stimuli.

The information for doing this is usually stored on a trigger or stimulus channel

75
Q

What is the purpose of the following code snippet in the context of MEG data analysis?

A

This code snippet isolates the trigger (stimulus) channels from the raw MEG dataset, prints the names of the remaining channels, and plots the trigger channel over a duration of 60 seconds to visualize the event trace.
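
A minimal sketch of the snippet being described (the variable name trig_chan is taken from the next card):

trig_chan = raw.copy().pick_types(meg=False, stim=True)
print(trig_chan.ch_names)
trig_chan.plot(duration=60)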

76
Q

What does the output of this code show? - (2)

A

The typical output would display the names of the channels present in the trig_chan dataset, such as “STI 014” and “STI 013”.

Additionally, it would show a plot of the trigger channel over a duration of 60 seconds, with the x-axis representing time (in seconds) and the y-axis indicating the presence of trigger events.

Each line on the plot corresponds to a different trigger event recorded in the MEG data.

77
Q

What important point should be remembered when using the pick_types function in Python? - (2)

A

When using the pick_types function to select specific channels from a dataset, it’s crucial to remember that this function operates on the dataset in place unless explicitly applied to a copy of the dataset.

Failure to make a copy before using pick_types can result in unintentional modifications to the original dataset. (the raw existing dataset)

78
Q

What is the potential consequence of not making a copy of the dataset before using the pick_types function? - (2)

A

The potential consequence of not making a copy before using pick_types is that any modifications made to the selected channels will directly affect the original raw dataset.

This can lead to unexpected changes in the data and unintended side effects in subsequent analyses.

79
Q

How can you explicitly make a copy of a list or dataset in Python? - (2)

A

To explicitly make a copy of a list or dataset in Python, you can use the .copy() method/statement.

For example:
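
# a minimal illustration; the original card's example image is not shown here
a = [1, 2, 3]
b = a.copy()     # b is an independent copy of a
b.append(4)      # modifying b...
print(a)         # ...leaves the original unchanged: [1, 2, 3]
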
This ensures that b is a separate copy of a, allowing independent modifications without affecting the original list or dataset.

80
Q

What does the pick_types function in MNE-Python allow us to do? - (2)

A

The pick_types function in MNE-Python allows us to create a copy of the dataset containing only certain channels of interest.

This function helps in selecting specific types of channels, such as stimulus channels or MEG channels, for further analysis.

81
Q

What is the purpose of using the find_events function in MNE-Python? - (2)

A

The find_events function in MNE-Python is used to extract event timing information from the dataset.

It searches for triggers or events recorded in the data and returns an array containing the timing and event code for each detected event.
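
A minimal sketch of the call being described (variable names are assumptions):

events = mne.find_events(raw)
print(events.shape)   # e.g. (600, 3) for this dataset, as described below
print(events[:5])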

82
Q

What information does the numpy array returned by the find_events function contain? - (4)

A

The numpy array returned by the find_events function contains three columns:

  1. The start index in time.
  2. An irrelevant column (not used in the current context).
  3. The event code representing the type of event detected.
83
Q

What does the shape (600, 3) of the events array indicate? - (2)

A

The shape (600, 3) of the events array indicates that there are 600 events detected in the dataset.

Each event is represented by three pieces of information: the start index in time, an irrelevant column (not used), and the event code.

84
Q

What does the event code 4196 signify in the context of the MNE-Python dataset? - (2)

A

In the context of the MNE-Python dataset, the event code 4196 represents a specific type of event or trigger recorded in the data.

It could correspond to a particular stimulus presentation, experimental condition, or other predefined event in the experimental paradigm.

85
Q

What is the purpose of the np.diff function in NumPy? - (2)

A

The np.diff function in NumPy is used to compute the difference between consecutive elements in an array.

It calculates the gap or change between adjacent values in the array.

86
Q

How can the np.diff function be used to analyze trigger events in MNE-Python datasets? - (2)

A

In MNE-Python datasets, the np.diff function can be applied to the timestamps of trigger events to compute the time differences between consecutive events.

This helps in analyzing the inter-trigger intervals or gaps in milliseconds.

87
Q

What does the function np.diff([1,2,3,4,5]) return and why? - (3)

A

The function np.diff([1,2,3,4,5]) returns an array [1,1,1,1].

This is because it computes the difference between consecutive elements in the array.

In this case, the difference between 2-1=1, 3-2=1, 4-3=1, and 5-4=1, resulting in an array of consecutive 1s representing the gaps between the original elements.

88
Q

Why is it preferable to use descriptive names like ‘visual’ instead of event codes like ‘4196’? - (2)

A

Using descriptive names like ‘visual’ instead of event codes like ‘4196’ makes the code more readable and understandable.

It provides context and clarity to the purpose of the event, enhancing interpretability.

89
Q

What does the event_dict = {‘visual’: 4196} line of code accomplish? - (2)

A

The event_dict = {‘visual’: 4196} line creates a dictionary that maps descriptive names (‘visual’) to event codes (‘4196’).

This allows for more intuitive labeling of triggers in the plot.

90
Q

What does the plt.figure(figsize=(15,3)) line of code do? - (2)

A

The plt.figure(figsize=(15,3)) line sets the size of the figure for plotting events.

It specifies a figure size of 15 inches in width and 3 inches in height, ensuring appropriate visualization.

91
Q

What does the mne.viz.plot_events(events, sfreq=raw.info[‘sfreq’], event_id=event_dict, axes=h) function call do? - (2)

A

The mne.viz.plot_events() function plots the time-series of events.

It takes the events data, the sampling frequency (sfreq), the event dictionary (event_dict) for labeling, and the axes (axes=h) for plotting.
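
A sketch pulling together the calls described in the last three cards; exactly how the axes handle ‘h’ was created in the original is an assumption:

import matplotlib.pyplot as plt

event_dict = {'visual': 4196}
fig, h = plt.subplots(figsize=(15, 3))
mne.viz.plot_events(events, sfreq=raw.info['sfreq'], event_id=event_dict, axes=h)
plt.show()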

92
Q

What does output of this code show? - (4)

A

Plot of the time-series of events. We see that there were some gaps in events, presumably to give the participant a short break.

Each line corresponds to a specific event ID, in this case ‘4196’. These lines are spaced apart, with gaps indicating periods where no trigger events occurred.

There is a bunch of events up to about 110s, then a gap…

The participant sees a flash on screen, then a gap, and then another flash.

93
Q

What do the blue lines represent when plotting the original data with events marked on? - (2)

A

The blue lines represent the events marked on the plot.

Each line corresponds to a specific event, and the position along the time axis indicates the timing of the event occurrence.

94
Q

What does the plot() function do in the context of meg_chan.plot(butterfly=True, clipping=None,start=100,duration=1)?

A

It plots the MEG data in a butterfly format, where the traces from all sensors are overlaid on the same axes for better visualization.

95
Q

What does raw.copy().pick_types(meg=True, stim=False) do?

A

It selects only the MEG channels from the raw data while excluding the stimulus channels.

96
Q

Why is it important to high-pass filter the data before running ICA?

A

High-pass filtering removes slow components like drifts from the data, which can otherwise lead to artefactual components in the ICA decomposition.

97
Q

What does the random_state=97 parameter in the ICA instantiation achieve?

A

It ensures reproducibility of the ICA decomposition across different runs of the algorithm, which is particularly useful for teaching purposes or when consistency in results is desired.

98
Q

What does the output “Filtering raw data in 1 contiguous segment” indicate?

A

It indicates that the raw data is being filtered to remove very slow components, such as drifts, before applying ICA.

99
Q

How many ICA components were requested in the code?

A

Fifteen ICA components were requested by setting n_components=15 in the ICA instantiation.

100
Q

Does this code do low-pass or high-pass filtering? - (2)

A

This code performs high-pass filtering. This is evident from the parameter l_freq=1 in the filter method, which specifies the lower frequency cutoff for the high-pass filter.

The absence of h_freq parameter means there is no upper frequency cutoff, indicating that only high-pass filtering is applied.

101
Q

What is the low-frequency cutoff? - (2)

A

The low-frequency cutoff is 1 Hz. This is specified by the parameter l_freq=1 in the filter method.

By applying a high-pass filter with a cutoff frequency of 1 Hz, it removes very slow components, such as drifts, from the data.
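
A minimal sketch of the filtering and ICA set-up described in the preceding cards; the variable names (and the use of the MEG-only meg_chan data) are assumptions:

# high-pass at 1 Hz to remove slow drifts before ICA
filt_raw = meg_chan.copy().load_data().filter(l_freq=1, h_freq=None)

# 15 components, with a fixed random_state for reproducibility
ica = mne.preprocessing.ICA(n_components=15, random_state=97)
ica.fit(filt_raw)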

102
Q

What does the function ica.plot_components() do?

A

It plots the component topographies, allowing us to visually inspect the spatial patterns of the independent components.

103
Q

Output of ica.plot_components() explained: (2)

A

We are looking for artifacts coming from two places: Eye movements (which will show up either side of the front of the head) and cardiac (heartbeat) artifacts which come from far far away and so might just smear out over the whole brain.

We are immediately suspicious of component ICA000, which looks like it originates very low down in the brain… sources like this could be real data from deep brain structures, but often they are actually interference from the electrical activity of the heart (which is firing a big ‘pulse’ every second or so). ICA005 is pretty much centred on the eyes… TBH I'm not excited about ICA002 either, but let's take a look…

104
Q

What does the function ica.plot_sources(raw, show_scrollbars=False) do?

A

It visualizes the timecourse of the sources identified by ICA, allowing us to inspect potential artefacts such as cardiac activity, eye movements, or breathing patterns.

105
Q

What does it indicate if a component isolates a cardiac-style trace? e.g., ICA000 looks like it has isolated a cardiac-style trace.

A

It suggests that the component may be capturing electrical activity related to the heartbeat, which is considered an artefact in MEG and EEG data analysis.

106
Q

How can we identify potential artefacts such as eye movements or breathing patterns in the timecourse of the sources?

A

By visually inspecting the traces generated by ica.plot_sources(), we can observe characteristic patterns that resemble known artefacts, such as rapid fluctuations indicative of eye movements or periodic patterns corresponding to breathing.

ICA005 also appears to have isolated something which may be eye movement. Looking at the traces, however, it is also possible that ICA006 has done the same. ICA010 also appears to be some form of breathing artefact.

107
Q

How do we identify which components to remove in ICA?

A

By visually inspecting the timecourse of the sources, we look for components that isolate artefacts such as cardiac activity, eye movements, or breathing patterns.

108
Q

Which components are decided to be removed in this case, and why?

A

Components 0, 5, 6, and 10 are marked for removal as they appear to isolate artefacts such as cardiac activity, eye movements, or breathing patterns, which we aim to eliminate from the data.

109
Q

What does ica.exclude = [0, 5, 6, 10] do?

ica.exclude = [0, 5, 6, 10]
ica.plot_sources(raw, show_scrollbars=False)

A

This line of code excludes specific Independent Components identified as artefacts from the ICA analysis, marking components 0, 5, 6, and 10 for removal.

110
Q

What does ica.plot_sources(raw, show_scrollbars=False) accomplish? - (2)

A

This function plots the time series of the sources for each Independent Component identified by ICA.

By visualizing the timecourse of each component, analysts can identify artefactual patterns such as cardiac activity or eye movements for further inspection and potential removal.

111
Q

What does the following code snippet do? - (2)

raw.load_data()
ica.plot_overlay(raw, picks='meg')

A

This code snippet loads the original raw data and then overlays the cleaned signals obtained after removing the specified Independent Components (ICs) onto the original signals.

It allows for a visual comparison between the original data and the cleaned data, highlighting the impact of IC removal on specific channels.

112
Q

What does the following code snippet - (3) accomplish?

A

This code snippet applies the Independent Components Analysis (ICA) cleaning process to the copied raw data.

The original raw data remains unchanged, while the cleaned data is stored in the reconst_raw variable.

This allows for the comparison between the original and cleaned datasets, preserving the integrity of the original data.
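
A minimal sketch of the step being described, assuming ica.exclude has already been set:

reconst_raw = raw.copy()
ica.apply(reconst_raw)   # removes the excluded components from the copy only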

113
Q

What does the following code snippet do, and what does the resulting visualization show? - (5)

A

This code snippet compares the original raw dataset (raw) with the dataset after Independent Components Analysis (ICA) de-noising (reconst_raw).

It selects specific MEG channels specified in the chans list and plots their data side by side.

The visualization titled “Original” displays the raw data, while the visualization titled “Cleaned” displays the data after ICA de-noising.

By comparing the two plots, users can observe the effects of the de-noising process, such as the removal of regular cardiac artifacts and eye movements, as well as any potential loss of variance in the data.

Above is the original raw dataset and below is the dataset after ICA de-noising. You can see that we have removed the regular cardiac artefacts and we have also removed some eye movements (see around 6s in channels 153 and 154).

114
Q

What is the purpose of deleting the original raw data (raw) using del raw? - (2)

A

Deleting the original raw data (raw) using del raw frees up memory resources by removing the data from memory.

This step is taken after the ICA de-noising process to ensure that memory is efficiently managed, especially if the dataset is large.

115
Q

How is the ICA de-noised data saved in the provided code snippet? - (3)

A

The ICA de-noised data is saved to disk in compressed FIFF format using the save() method.

The filename used for saving the data is ‘R1025_P1114_4-raw.fif.gz’.

This ensures that the cleaned data is preserved and can be easily accessed in future sessions.

116
Q

What function can be used to load the saved ICA de-noised data back into memory in future sessions? - (3)

A

The mne.io.read_raw_fif() function can be used to load the saved ICA de-noised data back into memory in future sessions.

This function takes the filename of the saved data (‘R1025_P1114_4-raw.fif.gz’) as an argument and returns the data as a raw object.

However, it’s important to note that event timings and other related computations may need to be recomputed after loading the data.
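
A sketch of the save/re-load round trip described in the two cards above (variable names are assumptions):

reconst_raw.save('R1025_P1114_4-raw.fif.gz')           # save the cleaned data in compressed FIFF
raw = mne.io.read_raw_fif('R1025_P1114_4-raw.fif.gz')  # re-load it in a later session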

117
Q

What is the purpose of filtering the data between 3 and 30Hz? - (2)

A

Filtering the data between 3 and 30Hz helps isolate the frequency band associated with evoked activity, which is the focus of the subsequent analysis.

This filtering step removes high-frequency noise and low-frequency drifts, enhancing the signal-to-noise ratio for further analysis.

118
Q

What precaution is taken before filtering the data? (3 Hz to 30 Hz) - (2)

A

Before filtering the data, the load_data() method is called to ensure that the data is loaded into memory.

This step is essential, especially when the file has just been re-loaded, to ensure that the data is available for filtering.

119
Q

What filtering options are used in the provided code snippet? - (2)

A

A band-pass filter between 3 and 30Hz is applied to the data using a Finite Impulse Response (FIR) filter.

This filter selectively allows frequencies within the specified range to pass through while attenuating frequencies outside the range.

120
Q

How can the original and filtered data be plotted for comparison? - (2)

A

Both the original and filtered data can be plotted using the plot() method.

For the original data, raw.plot(start=6, duration=2) is used, while for the filtered data, filt.plot(start=6, duration=2) is used.

This allows visual inspection of the effects of filtering on the data.

121
Q

Does the provided code snippet apply a low-pass filter to the data? - (2)

# Filter between 3 and 30Hz
raw.load_data()
filt = raw.copy().filter(l_freq=3, h_freq=30, n_jobs=-1)

A

Yes, the provided code snippet applies a low-pass filter as part of the band-pass filtering process.

The h_freq=30 argument sets the low-pass (upper) cutoff at 30 Hz, while l_freq=3 sets the high-pass (lower) cutoff, so the band-pass filter includes both high-pass and low-pass components.

122
Q

Can the filtered data be saved and re-loaded for future use? - (2)

# Save the filtered data
filt.save('filtered_data.fif.gz')

# Re-load the filtered data
filt = mne.io.read_raw_fif('filtered_data.fif.gz')

A

Yes, the filtered data can be saved using the save() method and re-loaded using the read_raw_fif() function.

This allows for preservation of the filtered data and avoids the need to repeat the filtering step in future sessions.

123
Q

What is the purpose of epoching the data? - (2)

A

The purpose of epoching the data is to segment it into smaller, trial-like segments centered around specific event timings.

This allows for the analysis of neural activity in response to experimental events, facilitating the investigation of event-related potentials (ERPs) or event-related oscillations (EROs).

124
Q

Explain the code - (7) - epoching the MEG data

A

This code epochs the filtered data using MNE.

filt: Filtered data to be epoched.

events: Event timings extracted from the filtered data.

event_id: Dictionary mapping event labels to event codes.

tmin: Start time of each epoch relative to event onset (-0.2 seconds).

tmax: End time of each epoch relative to event onset (0.5 seconds).

preload: Indicates whether to load all data into memory at once (True).
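
A sketch of the epoching call being described; the events and event_id variables are assumed to come from the earlier steps:

events = mne.find_events(filt)
filt_epochs = mne.Epochs(filt, events=events, event_id={'visual': 4196},
                         tmin=-0.2, tmax=0.5, preload=True)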

125
Q

What does the output of filt_epochs represent?

A

We have 600 trials (as expected) and each of our epochs runs from -0.2 to 0.5s.

126
Q

After epoching the MEG data, we can individually plot our epochs
…. - (2)

A

The time-axis here is now based on the epoch number rather than in seconds. All of the data in between the epochs has been thrown away

We can now compute an average. As we only have one condition, this is easy. We will, however, do it as if we had multiple conditions ( i.e. specify the condition name) so that you can follow how you would deal with the multiple condition case:
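
A sketch of the two steps just described (the condition name ‘visual’ comes from the earlier event dictionary):

filt_epochs.plot()                              # inspect the individual epochs
filt_avg = filt_epochs['visual'].average()      # average over the named condition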

127
Q

What does filt_avg.plot() visualize?

A

The average time-series of the epochs with spatial colors and global field power (GFP), allowing topography plots of the sensors to be drawn.

128
Q

What does filt_avg.plot_topomap() display?

A

Topomap plots of sensor spatial distribution at specific time points, indicating brain activity patterns.

129
Q

What does the output show? - (5)

A

The initial plot will show the average time-series.

Outside the notebooks, you can click and drag on this plot to draw a topography plot of the sensors over a given time-window.

The example above shows this for the first real peak (just before 100ms).

We can see from this that there is a right-lateralised dipole over where we would assume that occipital cortex would be.

This experiment involved showing a visual stimulus to the left-hand visual field, so this is somewhat reassuring.

130
Q

What does filt_avg.plot_joint() visualize?

A

A combined plot showing the average time-series, topomap plots, and sensor data along with spatial distribution at specific time points.

131
Q

How does plot_joint determine the times to plot by default?

A

It guesses based on peaks in the data.

132
Q

How can you override the default times in filt_avg.plot_joint()?

A

By specifying the times argument, like filt_avg.plot_joint(times=[0.08, 0.31]).

133
Q

What is ICA? - (2)

A

Independent Component Analysis (ICA) is a computational method used to separate a multivariate signal into additive, independent components.

It is commonly used in MEG and EEG data analysis to isolate and remove artifacts such as blinks and cardiac signals from the data.