Session 5 - GLM Flashcards
(15 cards)
GLM vs generalised linear model
= general linear model
- a special case of the generalised linear model with an identity link function and normally distributed residuals; "general" refers to allowing multiple regressors (a design matrix) rather than a single predictor
Challenges of high dimensionality
- brain data are high-dimensional → we want to find out where activity increases/decreases
- test for activity changes at one brain location at a time → repeat the test systematically at each brain location → mass-univariate approach
- some information (e.g. dependencies between locations) gets left out with this approach
How to find a region that responds more strongly during stimulation than during rest
- signal intensity differs between rest and stimulation
Approach:
- divide the time series into stimulus vs no-stimulus periods
- collapse the values from each condition into a distribution (histograms) → variability (noise, artefacts) → t-test
- paired t-test: increases sensitivity (removes noise we are not interested in, e.g. shared run-to-run variation)
- not always possible
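A minimal sketch of the rest-vs-stimulation comparison on simulated data (all values hypothetical), illustrating why the paired test is the more sensitive one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-run mean signal in one voxel (arbitrary units):
# each run contributes one rest value and one stimulation value.
rest = 100 + rng.normal(0, 1.0, size=12)          # run-to-run baseline varies
stim = rest + 1.0 + rng.normal(0, 0.3, size=12)   # ~1-unit increase + noise

# Paired t-test: within-run differences remove the shared baseline variation
t_paired, p_paired = stats.ttest_rel(stim, rest)

# Unpaired test for comparison: ignores pairing, keeps baseline noise in the
# error term and is therefore less sensitive here
t_unpaired, p_unpaired = stats.ttest_ind(stim, rest)
```

Pairing removes the run-to-run baseline variation from the error term, so only the within-run difference contributes, which is why `p_paired` comes out far smaller than `p_unpaired` in this setup.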
How to account for haemodynamic lag?
- the response takes a while to rise and fall → choosing time windows simply based on stimulus timing does not work
- shifting the windows by the haemodynamic lag is not enough either: convolution with the HRF introduces not only a time delay but also a smoothing of the response
→ solution: the GLM, a model-based approach
What should the response look like, given our knowledge of the shape of the HRF and its linearity?
Experimental design → convolve with HRF to get the expected response model → fit to the measured fMRI time series
criteria for linearity of the BOLD response
- linearity is needed for the GLM
- responses can be scaled and added (superposition): the response to a*stimulus 1 + b*stimulus 2 equals a*response 1 + b*response 2
- the BOLD response is approximately linear, but not perfectly so
- a long stimulus can be divided into shorter stimuli: the overall response is the sum of the individual HRFs
Convolution
- how to combine individual responses to predict the overall response
- assumption: linear time-invariant (LTI) system
- input time series convolved with the impulse response function = output
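A sketch of the LTI convolution step with a simplified double-gamma-style HRF (the shape and parameters here are illustrative, not the canonical SPM values):

```python
import numpy as np

TR = 1.0                       # hypothetical sampling interval (s)
t = np.arange(0, 30, TR)

def hrf(t):
    # Simplified double-gamma-like shape: early peak minus a late undershoot
    peak = t ** 5 * np.exp(-t)
    undershoot = t ** 10 * np.exp(-t)
    h = peak / peak.max() - 0.35 * undershoot / undershoot.max()
    return h / h.sum()         # normalise to unit area

# Boxcar stimulus at the neural level: on for scans 10-19, off otherwise
stimulus = np.zeros(60)
stimulus[10:20] = 1.0

# LTI assumption: predicted BOLD = stimulus convolved with the HRF,
# truncated to the length of the run
predicted = np.convolve(stimulus, hrf(t))[:len(stimulus)]
```

The convolved regressor is both delayed and smoothed relative to the boxcar, which is exactly why a simple time-shift of the stimulus windows is not enough.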
The model
generate the model (regressors):
- experimental design (expected time series at the neural level) → modelled response (regressor) under the assumption of linearity
fit reference model:
y_t = beta*x_t + epsilon_t
data = linear weighting parameter * reference function + residual noise
- choose beta to minimise the sum of squared differences (not too high/low)
- also works with multiple reference functions (a*x1 + b*x2 …)
- display convention: design matrix (all x collected into one matrix, multiplied by a vector of all betas) → we can just keep adding more columns to the design matrix
- assumption: residuals are normally distributed
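The fit y_t = beta*x_t + epsilon_t can be sketched with ordinary least squares on simulated data (the design, effect sizes, and noise level are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 120
# Design matrix: a hypothetical boxcar task regressor plus a constant (mean)
task = np.zeros(n)
task[20:40] = 1.0
task[70:90] = 1.0
X = np.column_stack([task, np.ones(n)])

# Simulated voxel time series: beta_task = 2, mean = 100, Gaussian noise
beta_true = np.array([2.0, 100.0])
y = X @ beta_true + rng.normal(0, 1.0, size=n)

# Ordinary least squares: beta_hat minimises the sum of squared residuals
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat
```

Adding another regressor is just another column in `X`, and the same least-squares fit returns one beta per column.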
Other useful regressors
- mean (constant term)
- cosines (filter out slow fluctuations / drift)
- motion parameters → the pivoting point of a rotation is little affected by movement, while points farther away move more, so the estimated motion parameters can be used to account for movement-related signal elsewhere
- finite impulse response (FIR) functions:
→ free the model from assumptions about the HRF shape
→ one regressor per time bin of the window: an arbitrary response shape → a flexible model that can capture the signal over time
→ used within the GLM
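A sketch of how FIR regressors can be built (onsets and window length are made up): one column per post-stimulus time bin, so each bin gets its own beta and no HRF shape is assumed:

```python
import numpy as np

n_scans = 100
onsets = [10, 40, 70]     # hypothetical stimulus onsets (in scans)
fir_length = 8            # number of post-stimulus bins to model

# One column per post-stimulus delay: column d is 1 at scan (onset + d)
X_fir = np.zeros((n_scans, fir_length))
for onset in onsets:
    for d in range(fir_length):
        if onset + d < n_scans:
            X_fir[onset + d, d] = 1.0

# Fitting these columns in the GLM yields one beta per time bin,
# i.e. an unconstrained estimate of the response shape over the window.
```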
signal
= fluctuations around the mean grey value of the image
- the typical percent signal change is only around 1%
How do we know a given beta estimate reflects "real" activity and doesn't just reflect noise?
First and second level statistics
- testing the mean across the population (is voxel v_i activated by the task?)
Summary statistics approach:
- get the parameter estimate from the same position in each subject's brain from the first-level model
Second level statistics
- one-sample t-test against a parameter mu_0 (= typically 0)
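Per voxel, the summary-statistics second level reduces to a one-sample t-test of the subjects' first-level betas against mu_0 = 0; a sketch with simulated betas (group size and effect are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical first-level beta estimates for one voxel, one per subject
betas = rng.normal(0.6, 0.4, size=16)   # simulated true group effect of 0.6

# Second level: one-sample t-test against mu_0 = 0
t_val, p_val = stats.ttest_1samp(betas, popmean=0.0)
```

Repeating this test at every voxel produces the t-values that go into the statistical parametric map.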
statistical parametric map (SPM)
= a map showing colour-coded t-values (i.e. statistical parameters) where the t-test is significant, overlaid on greyscale anatomy elsewhere (for orientation)
Significance and Type I versus Type II errors
- type I error: alpha error - false positive → too liberal
- type II error: beta error - false negative → too conservative
- choose alpha wisely (0.05 / 0.01 / 0.001)
multiple comparison problem
- more than one test (multiple voxel locations)
- probability of a false positive when testing 1 voxel at alpha = 0.05 is p = 0.05
→ multiple comparisons: family-wise error rate = probability of at least one false positive among n tests (1 minus the probability that none occurs)
- the probability of at least one false positive grows with n → Bonferroni correction can be applied (alpha_adjusted = alpha / n), but it is very conservative
- neighbouring voxel tests are not independent (spatial smoothness), so Bonferroni over-corrects
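The family-wise error rate and the Bonferroni adjustment can be computed directly (the number of tests here is a hypothetical voxel count, assuming independent tests):

```python
alpha = 0.05
n_tests = 50000               # hypothetical number of voxels

# Family-wise error rate under independence: P(at least 1 false positive)
fwer = 1 - (1 - alpha) ** n_tests          # essentially 1.0 for large n

# Bonferroni correction: a much stricter per-test threshold
alpha_adjusted = alpha / n_tests
fwer_bonferroni = 1 - (1 - alpha_adjusted) ** n_tests   # kept below alpha
```

With 50,000 uncorrected tests a false positive is practically guaranteed; the Bonferroni threshold keeps the family-wise rate at or below alpha, at the cost of being very conservative when the tests are not actually independent.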
Gaussian random field theory
aim:
- obtain an estimate of the effective number of independent voxelwise tests
- resel = a block of voxels the size of the smoothness (FWHM) of the data
- can be used for a less conservative correction that accounts for this dependence