Week 5.2 VBM Flashcards
Why is it clinically useful to image and calculate brain volume?
to show brain atrophy, which can be a marker for disease
What is the process/pipeline to achieve atlas-based segmentation?
What is it?
- non-linear registration of patient’s brain to MNI template
- superimpose atlas segments
- reverse the registration so the atlas-based segments sit in the patient's native space (see the sketch below)
atlas-based segments = a template of the standard locations of brain areas, divided into different coloured segments
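a minimal sketch of the "reverse the registration" step using FSL tools driven from Python — filenames are hypothetical, and it assumes a patient-to-MNI non-linear warp has already been computed (e.g. with flirt + fnirt):

```python
# Pull an MNI-space atlas back into the patient's native space.
# Nearest-neighbour interpolation keeps the atlas labels discrete.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# invert the (hypothetical) patient -> MNI non-linear warp
run(["invwarp", "-w", "warp_to_mni.nii.gz",
     "-o", "warp_to_native.nii.gz", "-r", "patient_brain.nii.gz"])

# apply the inverted warp to the atlas labels
run(["applywarp", "--in=atlas_labels.nii.gz",
     "--ref=patient_brain.nii.gz",
     "--warp=warp_to_native.nii.gz",
     "--out=atlas_in_native.nii.gz",
     "--interp=nn"])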
Normalising factors used in MRI brain volume analysis:
Why do we use them?
How are they used?
What is the result?
people have different sized brains -> multiply the volume of a person's brain/brain areas by the normalising factor to get a normalised volume that is relative to the size of their skull
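as a toy example (both numbers are hypothetical; tools like FSL's SIENAX report a skull-derived scaling factor of this kind):

```python
# Head-size normalisation is just a multiplication by the factor.
raw_gm_volume_ml = 620.0   # hypothetical measured grey-matter volume
scaling_factor = 1.15      # hypothetical skull-derived normalising factor

normalised_ml = raw_gm_volume_ml * scaling_factor
print(f"normalised GM volume: {normalised_ml:.1f} ml")  # 713.0 ml
```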
What is a big downside to choosing to do ROI analysis?
an a priori hypothesis is required: you need to know what to look at
less exploratory (than VBM at least)
What is the issue with choosing many ROIs for your hypothesis?
you have to correct for multiple comparisons
more comparisons increase the chance of false positives
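a quick illustration of why, using a Bonferroni correction as one standard fix (the ROI count is hypothetical):

```python
# With 20 ROIs tested at alpha = 0.05, the chance of at least one
# false positive balloons unless the threshold is corrected.
alpha, n_rois = 0.05, 20

familywise_error = 1 - (1 - alpha) ** n_rois
bonferroni_threshold = alpha / n_rois

print(f"P(>=1 false positive, uncorrected): {familywise_error:.2f}")   # ~0.64
print(f"Bonferroni per-ROI threshold: {bonferroni_threshold:.4f}")     # 0.0025
```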
What is VBM?
How is it better than using ROI analysis?
voxel-based morphometry (VBM) is a kind of volumetric brain analysis
VBM allows for a more exploratory hypothesis; you don't need an a priori hypothesis
What are the five analysis steps in a typical VBM pipeline?
- Brain extraction
- (GM) Tissue segmentation
- Templates and registrations (linear + non-linear)
- Smoothing
- Statistical testing
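one way to picture the pipeline, as a hedged mapping onto FSL-family tools (illustrative only — FSL-VBM wraps these steps in its own scripts):

```python
# Illustrative mapping of the five VBM steps to FSL-family tools.
vbm_steps = {
    "1. brain extraction":        "bet",
    "2. tissue segmentation":     "fast (outputs GM/WM/CSF TPMs)",
    "3. templates/registrations": "flirt (linear) + fnirt (non-linear)",
    "4. smoothing":               "fslmaths -s / Gaussian kernel",
    "5. statistical testing":     "randomise (voxelwise permutation tests)",
}
for step, tool in vbm_steps.items():
    print(f"{step:<30} -> {tool}")
```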
What is the issue with brain extraction?
In a brain extraction, if you are not sure whether voxels should be removed, is it better to remove or leave them?
- they are never perfect; however, it is impossible to check every voxel manually, so you just check for major overall issues
- generally better to leave these voxels in: you can never get them back further down the pipeline, but you can always remove them later
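in FSL's bet, the fractional intensity threshold -f controls how aggressive the extraction is; a lower value keeps more voxels, matching the "leave them in" advice (filenames hypothetical):

```python
# Conservative brain extraction with FSL BET: a lower -f gives a
# larger brain-outline estimate, i.e. errs on the side of keeping
# voxels that can still be removed later in the pipeline.
import subprocess

subprocess.run(["bet", "subj_T1.nii.gz", "subj_brain.nii.gz",
                "-f", "0.3"],   # default is 0.5; 0.3 removes less
               check=True)
```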
Does tissue segmentation involve TPMs?
yes, tissue probability maps (TPMs) are a probabilistic way of doing tissue segmentation
Are images with TPM maps overlaid quantitative or qualitative images?
they become QUANTITATIVE images, as TPMs are quantitative: each voxel value is a number on a meaningful scale (the probability/fraction of a tissue type)
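because the voxel values are quantitative, you can integrate a TPM directly into a tissue volume, e.g. with nibabel (filename hypothetical; FSL FAST names its GM map *_pve_1):

```python
# Summing a GM tissue probability map gives a grey-matter volume:
# this only makes sense because the voxel values are quantitative.
import nibabel as nib
import numpy as np

tpm = nib.load("subj_pve_1.nii.gz")              # hypothetical GM TPM
voxel_mm3 = np.prod(tpm.header.get_zooms()[:3])  # voxel volume in mm^3
gm_ml = tpm.get_fdata().sum() * voxel_mm3 / 1000.0
print(f"GM volume: {gm_ml:.1f} ml")
```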
What comes after step 2: (GM) tissue segmentation? What happens during this step in parts 0, 1 and 2?
Why is this step key for a VBM experiment?
- step 3: templates and registrations.
part 0: approximate registration of raw images to the MNI template using 12-DOF linear registration, then non-linear registration to get the finer details
part 1: (reusing the transforms from part 0) raw GM TPMs are non-linearly co-registered to the MNI template. these co-registered GM TPMs are averaged -> study-specific GM template
part 2: raw GM TPMs are non-linearly co-registered to the study-specific GM template, with modulation -> all GM TPMs in a shared, study-specific space
- a VBM experiment needs all images to share the same image space: if we examine the "same" voxel, the registered TPM value reflects the quantity of GM/other tissue, per participant, at the same anatomical point -> this allows voxelwise analysis
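the part-1 averaging can be sketched in a few lines with nibabel/numpy (the file list is hypothetical and assumes the GM TPMs are already warped to MNI152):

```python
# Average registered GM TPMs into a study-specific GM template.
import nibabel as nib
import numpy as np

warped_paths = ["sub01_gm_mni.nii.gz", "sub02_gm_mni.nii.gz"]  # hypothetical
imgs = [nib.load(p) for p in warped_paths]
mean_gm = np.mean([img.get_fdata() for img in imgs], axis=0)

template = nib.Nifti1Image(mean_gm, imgs[0].affine)
nib.save(template, "study_specific_GM_template.nii.gz")
```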
What is the issue with step 3 of VBM pipeline?
step 3 (templates and registrations): each registration introduces error and noise (so include the fewest registrations possible)
The more different the image you want to register is from the template/target image, what happens to your registrations?
What is the issue with that?
the more different they are -> the more complex the registrations, and the more registration steps needed
more registrations -> more error and noise introduced
What is the solution to decreasing error/noise from registration in step 3?
- making a study-specific template, which is the average of all our participants' brain scans
Do you still need MNI152 to make a study-specific template?
yes, MNI152 is still used to create the study-specific template
What does part 0 of step 3: templates and registration entail?
part 0 of step 3: approximate registration, using linear registration of all patients' whole-brain T1 scans to the MNI152 template, then non-linear registration to get the finer details, still using MNI152
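as a command-level sketch (hypothetical filenames; in FSL, flirt does the 12-DOF linear step and fnirt the non-linear refinement):

```python
# Part 0: coarse linear then fine non-linear registration to MNI152.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

run(["flirt", "-in", "subj_T1_brain.nii.gz",
     "-ref", "MNI152_T1_2mm_brain.nii.gz",
     "-dof", "12", "-omat", "subj_affine.mat"])           # 12-DOF linear
run(["fnirt", "--in=subj_T1_brain.nii.gz",
     "--ref=MNI152_T1_2mm_brain.nii.gz",
     "--aff=subj_affine.mat", "--cout=subj_warp.nii.gz"]) # non-linear
```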
What must you be wary of when making a registration template for a groupwise-analysis VBM experiment?
What is the solution to this issue?
the proportion of participants in the control and the patient/experimental group: if you have 100 ctrl. and 50 exp. and make an average brain template of all of them -> the template will be skewed to look more like the ctrl. group
tell FSL to randomly cut out 50 of the ctrl. group so the groups are balanced
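a minimal sketch of the balancing idea (scan lists hypothetical; whether it is done inside FSL or beforehand, the point is equal group sizes going into the template):

```python
# Randomly subsample the larger (control) group so the template is
# built from equal numbers of control and experimental scans.
import random

control_scans = [f"ctrl_{i:03d}_gm.nii.gz" for i in range(100)]  # hypothetical
exp_scans = [f"exp_{i:03d}_gm.nii.gz" for i in range(50)]

random.seed(42)  # reproducible selection
balanced_controls = random.sample(control_scans, k=len(exp_scans))
template_inputs = balanced_controls + exp_scans   # 50 + 50
```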
What is MNI152?
the Montreal Neurological Institute (MNI) template, averaged from 152 subjects' scans; the template most used in neuroscience
What is the MAIN issue with using VBM when investigating brain atrophy?
What is the solution to this issue?
How is this solution achieved?
- registration disrupts the actual volume of tissues in the native image to make it fit the templates
- modulation! make voxel values proportional to how much registration transformation has happened
- save a deformation (Jacobian) map: a map showing how much each voxel was expanded/contracted compared to the original image
multiply the transformed images by the deformation map -> shows the true degree of atrophy
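modulation itself is just a voxelwise multiplication; a sketch with nibabel (filenames hypothetical; the deformation map here is the Jacobian-determinant image a registration tool can save):

```python
# Modulation: multiply the warped GM TPM by the deformation (Jacobian)
# map so voxel values again reflect original, native-space GM volume.
import nibabel as nib

warped = nib.load("subj_gm_warped.nii.gz")   # GM TPM in template space
jac = nib.load("subj_jacobian.nii.gz")       # expansion/contraction per voxel

modulated_data = warped.get_fdata() * jac.get_fdata()
nib.save(nib.Nifti1Image(modulated_data, warped.affine),
         "subj_gm_modulated.nii.gz")
```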
Why are deformation maps ESSENTIAL when investigating brain atrophy using VBM?
deformation maps must be used to correct for the registration to the templates, so results show the original degree of atrophy BUT in the same space ->
this creates the modulated TPMs which are used in the FINAL stats analysis
you can't do a VBM atrophy study without them
In short, what happens during step 3: templates and registration all parts?
What is the difference between the selection of scans used in parts 1 and 2 of step 3?
part 0: approximate registration of raw images to the MNI template using 12-DOF linear registration, then non-linear registration to get the finer details
part 1: native-space GM TPMs are non-linearly co-registered to the MNI152 template -> GM TPMs in MNI152 space.
the mathematical mean of all of these is taken -> study-specific GM template (the weirdly smooth one)
part 2: native-space GM TPMs are non-linearly co-registered to the study-specific GM template,
then multiplied by the deformation map (modulation)
-> all GM TPMs in a shared, study-specific space
For groupwise analysis:
part 1 uses a curated selection of native-space GM TPMs to balance the number of images in each group (ctrl. and exp.),
whereas in part 2, ALL native-space GM TPMs are used
Why do we smooth all brain MRI data in general?
to boost the signal-to-noise ratio
What is smoothing for VBM?
averaging voxel values with their neighbouring values -> preserves the direction of the signal (the effect of brain atrophy) while smoothing random noise out of the data.
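for example with nilearn (kernel width hypothetical; an 8 mm FWHM Gaussian is a common VBM choice, but the right width depends on the expected effect size):

```python
# Gaussian smoothing of a modulated GM image before statistics.
from nilearn.image import smooth_img

smoothed = smooth_img("subj_gm_modulated.nii.gz", fwhm=8)  # 8 mm FWHM kernel
smoothed.to_filename("subj_gm_modulated_s8.nii.gz")
```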
What is the issue with too much smoothing for VBM atrophy experiments?
if you have small regions of atrophy, their signal will be quickly smoothed out