week 5.2 VBM Flashcards

(32 cards)

1
Q

Why is it clinically useful to image and calculate brain volume?

A

to show brain atrophy, which can be a marker of disease

2
Q

What is the process/pipeline to achieve atlas-based segmentation?
What is it?

A
  1. non-linear registration of patient’s brain to MNI template
  2. superimpose atlas segments
  3. reverse the registration so the atlas-based segments are in the patient’s native space

atlas-based segments = template of standard locations of brain areas put into different coloured segments

3
Q

Normalising factors used in MRI brain volume analysis:
Why do we use them?
How are they used?
What is the result?

A

people have different sized brains -> multiply the volume of a person’s brain/brain areas by the normalising factor to get the normalised volume, which is relative to the size of their skull
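A minimal sketch of this normalisation, with hypothetical numbers (the volumes, region name, and the ICV-ratio form of the scaling factor are illustrative assumptions, not values from the course):

```python
# Hypothetical numbers: normalise a regional volume by a scaling factor
# derived from the participant's intracranial volume (ICV), so volumes
# are comparable across different head/skull sizes.
template_icv = 1_800_000.0   # mm^3, ICV of the standard template (assumed)
subject_icv = 1_500_000.0    # mm^3, this participant's ICV (assumed)

scaling_factor = template_icv / subject_icv   # > 1 for a smaller head

raw_hippocampus = 3_500.0    # mm^3, raw segmented volume (assumed)
normalised_hippocampus = raw_hippocampus * scaling_factor
print(round(normalised_hippocampus))  # 4200
```

The result is a volume expressed relative to head size rather than in absolute mm^3.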

4
Q

What is a big downside to choosing to do ROI analysis?

A

a-priori hypothesis required: you need to know what to look at
less exploratory (than VBM at least)

5
Q

What is the issue with choosing many ROIs for your hypothesis?

A

have to compensate for multiple comparisons
each extra ROI/test increases the chance of false positives

6
Q

What is VBM?
How is it better than using ROI analysis?

A

voxel-based morphometry (VBM) is a kind of volumetric brain analysis
VBM allows for a more exploratory hypothesis and you don’t need an a-priori hypothesis

7
Q

What are the five analysis steps in a typical VBM pipeline?

A
  1. Brain extraction
  2. (GM) Tissue segmentation
  3. Templates and registrations (linear + non-linear)
  4. Smoothing
  5. Statistical testing
8
Q

What is the issue with brain extraction?
In a brain extraction, if you are not sure whether voxels should be removed, is it better to remove or leave them?

A

-they are never perfect; however, it is impossible to check every voxel manually, so you just check for major overall issues
-generally better to leave these voxels: you can never get them back further down the pipeline, but you can always remove them later

9
Q

Does tissue segmentation involve TPMs?

A

yes — tissue probability maps (TPMs) are a probabilistic way of doing tissue segmentation

10
Q

Are images with TPM maps overlaid quantitative or qualitative images?

A

they become QUANTITATIVE images, as TPMs are quantitative

11
Q

What comes after step 2: (GM) tissue segmentation? What happens during this step in parts 0, 1 and 2?

Why is this step key for a VBM experiment?

A

-3. templates and registrations.
part 0: approximate registration of raw images to the MNI template, using 12-DOF linear registration and then non-linear registration to get finer details
part 1: (using the transforms from part 0) raw GM TPMs are non-linearly COregistered to the MNI template. These coregistered GM TPMs are averaged -> study-specific GM template
part 2: raw GM TPMs are non-linearly COregistered to the study-specific GM template, with modulation -> all GM TPMs in a shared, study-specific space

-a VBM experiment needs all images to share the same image space: if we examine the “same” voxel, the registered TPM value reflects the quantity of GM/other tissue, per participant, at the same anatomical point -> allows voxelwise analysis

12
Q

What is the issue with step 3 of VBM pipeline?

A

step 3: templates and registrations = each registration introduces error and noise (so include the fewest registrations possible)

13
Q

The more different the image you want to register and the template/target image are, what happens to your registrations?
What is the issue with that?

A

the more different they are -> the more complex the registrations, and the more registrations needed

more registrations -> introduces more error and noise

14
Q

What is the solution to decreasing error/noise from registration in step 3?

A

-make a study-specific template, which is the average of all our participants’ brain scans

15
Q

Do you need MNI152 to make a study-specific template?

A

yes — MNI152 is still used to create the study-specific template

16
Q

What does part 0 of step 3: templates and registration entail?

A

part 0 of step 3: approximate registration using linear registration of all patients’ T1 whole-brain scans to the MNI152 template. Then, non-linear registration to get the finer details, still using MNI152

17
Q

What must you be wary of when making a registration template for a groupwise-analysis VBM experiment?
What is the solution to this issue?

A

the proportion of participants in the control and the patient/experimental group. if you have 100 ctrl. and 50 exp. and you make an average brain template of all -> the template will be skewed to look more like the ctrl. group

tell FSL to randomly cut out 50 of the ctrl. group so the groups are balanced
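A minimal sketch of the balancing idea (plain Python, not an FSL command; the filenames are hypothetical):

```python
import random

# Balance the template-building set by randomly subsampling the larger
# group before averaging, so the template is not skewed towards it.
random.seed(0)
controls = [f"ctrl_{i:03d}.nii.gz" for i in range(100)]   # 100 control scans (assumed names)
patients = [f"pat_{i:03d}.nii.gz" for i in range(50)]     # 50 patient scans (assumed names)

n = min(len(controls), len(patients))
balanced_controls = random.sample(controls, n)            # keep a random 50

template_inputs = balanced_controls + patients            # 100 scans, 50 per group
print(len(template_inputs))  # 100
```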

18
Q

What is MNI152?

A

the Montreal Neurological Institute (MNI) template, an average of 152 brains; the template most used in neuroscience

19
Q

What is the MAIN issue with using VBM when investigating brain atrophy?
What is the solution to this issue?
How is this solution achieved?

A

-registration disrupts the actual volume of tissues in the native image to fit the templates
-modulation! make voxel values proportionate to how much registration transformation has happened
-save a deformation map: a map which shows how much each voxel was expanded/contracted compared to the original image
multiply the transformed images by the deformation map -> shows the true degree of atrophy
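A toy numpy sketch of the modulation step (the array sizes and the 0.8 expansion factor are assumptions for illustration, not real data):

```python
import numpy as np

# Modulation: multiply each registered GM probability by the deformation
# map (how much each voxel was expanded/contracted by the warp), so
# voxel values again reflect the original tissue volume.
np.random.seed(0)
registered_gm = np.random.rand(4, 4, 4)   # toy GM TPM in template space
deformation = np.full((4, 4, 4), 0.8)     # every voxel was expanded; true tissue is 80%

modulated_gm = registered_gm * deformation

# The total GM "volume" shrinks back to 80% of the registered total,
# undoing the volume change introduced by registration.
assert np.isclose(modulated_gm.sum(), 0.8 * registered_gm.sum())
```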

20
Q

Why are deformation maps ESSENTIAL when investigating brain atrophy using VBM?

A

must use deformation maps to correct for the registration to the templates, to show the initial degree of atrophy BUT in the same space ->
creates modulated TPMs which are used in the FINAL stats analysis
can’t do a VBM study without them

21
Q

In short, what happens during step 3 (templates and registrations), all parts?

What is the difference between the selection of scans used in parts 1 and 2 of step 3?

A

part 0: approx registration of raw images to the MNI template using 12-DOF linear and then non-linear registration to get finer details
part 1: native-space GM TPMs non-linearly coregistered to the MNI152 template -> GM TPMs in MNI152 space.
The mathematical mean of all of these is taken -> study-specific GM template (the weirdly smooth one)
part 2: native-space GM TPMs non-linearly coregistered to the study-specific GM template,
then multiplied by the deformation map
-> all GM TPMs in a shared, study-specific space

For groupwise analysis:
in part 1, a curated selection of native-space GM TPMs is used, to balance the no. of images in each group (ctrl. and exp.)
whereas in part 2, ALL native-space GM TPMs are used

22
Q

Why do we smooth all brain MRI data in general?

A

to boost signal to noise ratio

23
Q

What is smoothing for VBM?

A

averaging voxel values with their neighbouring values -> to preserve the direction of the signal (the effect of brain atrophy) while smoothing random noise out of the data.
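A minimal sketch of this neighbourhood averaging using a Gaussian kernel (the array sizes, noise level, and sigma are illustrative assumptions; this is not the FSL implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Smoothing averages each voxel with its neighbours, suppressing random
# noise while preserving a spatially broad "atrophy" signal.
np.random.seed(0)
signal = np.zeros((16, 16, 16))
signal[6:10, 6:10, 6:10] = 1.0                 # a toy blob of atrophy signal
noisy = signal + 0.5 * np.random.randn(16, 16, 16)

smoothed = gaussian_filter(noisy, sigma=2)     # sigma in voxels

# Voxelwise variability drops sharply after smoothing
assert smoothed.std() < noisy.std()
```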

24
Q

What is the issue with too much smoothing for VBM atrophy experiments?

A

if you have small regions of atrophy, their signal will be quickly smoothed out

25
Q

How does FSL decide how much smoothing to do in a VBM atrophy study?

A

FSL has three degrees of smoothing, called sigma values of 2, 3 or 4. The larger the sigma value, the more smoothing. FSL runs quick pilot statistics on all three sigma values; you inspect these pilot results and decide which sigma gives the best signal-to-noise ratio.
26
Q

What is voxelwise analysis?

A

performing a statistical test at every voxel co-ordinate across all participants
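A toy numpy/scipy sketch of voxelwise analysis (group sizes, image shape, and effect size are assumptions; a two-sample t-test stands in for whatever test the design requires):

```python
import numpy as np
from scipy import stats

# One two-sample t-test at every voxel coordinate, comparing control
# vs patient GM values across participants.
np.random.seed(0)
shape = (8, 8, 8)
controls = np.random.randn(20, *shape)        # 20 control GM maps (toy data)
patients = np.random.randn(15, *shape) - 0.3  # 15 patient maps with less "GM"

# axis=0 runs one test per voxel across the participant dimension
t_map, p_map = stats.ttest_ind(controls, patients, axis=0)

print(t_map.shape)  # (8, 8, 8) — a statistic at every voxel
```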
27
Q

What is RANDOMISE? How do you improve the accuracy of its statistics?

A

an FSL tool which runs non-parametric permutation-based statistical tests on any statistical design (e.g. groupwise, correlation). The more permutations, the more accurate your statistics.
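A single-voxel sketch of the permutation idea behind RANDOMISE (plain numpy, not FSL code; group sizes, effect size, and permutation count are assumptions):

```python
import numpy as np

# Shuffle group labels many times to build a null distribution of the
# group-mean difference; the p-value is the fraction of shuffles at
# least as extreme as the observed difference.
np.random.seed(0)
controls = np.random.randn(20) + 0.5   # toy voxel values per participant
patients = np.random.randn(20)
observed = controls.mean() - patients.mean()

pooled = np.concatenate([controls, patients])
n_perm = 5000                          # more permutations -> finer-grained p-value
null = np.empty(n_perm)
for i in range(n_perm):
    perm = np.random.permutation(pooled)
    null[i] = perm[:20].mean() - perm[20:].mean()

p_value = (np.abs(null) >= abs(observed)).mean()
print(0.0 <= p_value <= 1.0)  # True
```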
28
Q

What is the end result of step 5? What is the statistical issue with this? How is it solved?

A

step 5: statistical analysis -> a final image in the template space where each voxel value is now the p-value of a stats test.
This is a multiple-comparisons nightmare: thousands of voxels to correct for (giving random false positives).
A final correction called cluster enhancement is applied, suppressing voxels which are positive at random compared to their neighbours and enhancing areas where voxels are consistently significant.
29
Q

What do the voxels in the brightly coloured blobs represent on the final images from a VBM study? What does the colour represent?

A

each voxel is a p-value of the statistical tests, showing regions which are affected/significant. Usually, the brighter the colour (e.g. orange, blue) -> the more significant.
30
Q

You get insignificant results and aren’t seeing the correlations you want to see. How can you test that you’re not messing up the pipeline or the imaging, and you’re just not seeing a correlation?

A

check for a correlation between GM volume and age, as there is almost always a highly significant negative correlation for almost every GM region
31
Q

What are the pros and cons of ROI analysis? 1 each

A

a restricted a-priori hypothesis means you might miss a significant region, but it also means higher measurement accuracy and statistical sensitivity for the regions examined -> stronger results, higher statistical power
32
Q

What are the pros and cons of global analysis / VBM?

A

considers the whole brain, so nothing is missed; but the more elaborate image processing and stats analysis introduces more noise -> less statistical power for a given region. Voxelwise multiple comparisons remain unideal even after cluster-based corrections and enhancements.