Session 5 - GLM Flashcards

(15 cards)

1
Q

GLM vs generalised linear model

A

= general linear model
- essentially the special case of the generalised linear model with normally distributed residuals; the model is written in matrix form (a design matrix of regressors rather than a single scalar predictor)

2
Q

Challenges of high dimensionality

A
  • brain data have high dimensionality –> we want to find out where activity increases/decreases
  • test for activity changes at one brain location at a time –> repeat the test systematically at every brain location –> mass univariate approach
  • some information gets left out with this approach (each location is tested in isolation)
3
Q

How to find a region that responds stronger during stimulation than during rest

A
  • the signal intensity differs between rest and stimulation

Approach:
- divide the time series into stimulation and no-stimulation periods
- collapse the green and red values (one histogram per condition, as on the right of the slide) –> variability (noise, artefacts) –> compare the conditions with a t-test
- paired t-test: increases the sensitivity (removes noise we are not interested in); see the sketch below
- not always possible
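
A minimal sketch of this comparison, assuming we already have per-block mean signal values (all numbers simulated, with a ~1% effect as in card 10); scipy.stats.ttest_rel is the paired test, shown next to an unpaired test for contrast:

```python
# Hypothetical example: compare mean signal per block during stimulation vs. rest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_blocks = 20
rest = 100 + rng.normal(0, 1.0, n_blocks)         # simulated rest-block means
stim = rest + 1.0 + rng.normal(0, 0.5, n_blocks)  # ~1% signal increase during stimulation

t_paired, p_paired = stats.ttest_rel(stim, rest)  # paired: removes shared block-wise noise
t_unpaired, p_unpaired = stats.ttest_ind(stim, rest)
print(f"paired   t={t_paired:.2f}, p={p_paired:.3g}")
print(f"unpaired t={t_unpaired:.2f}, p={p_unpaired:.3g}")
```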

4
Q

How to account for haemodynamic lag?

A
  • the response takes a while to rise and fall –> simply choosing time windows based on stimulus timing does not work
  • shift the windows to match the haemodynamics? not sufficient: convolution with the HRF introduces not only a time delay but also temporal smoothing, so stimuli and response cannot simply be aligned

–> the solution is the GLM

5
Q

model-based approach

A

What should the response look like, given our knowledge of the shape of the HRF and its linearity?

Experimental design –> convolve with the HRF –> expected response model –> fit to the measured fMRI time series

6
Q

criteria for linearity of BOLD response

A
  • linearity is needed for the GLM
  • responses can be scaled and added: a * response 1 + b * response 2 = response to the combined input (see the sketch below)
  • the BOLD response is approximately linear, but not perfectly so
  • a long stimulus can be divided into shorter stimuli: the total response is the sum of the individual HRFs
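
A toy numerical check of the superposition property, assuming a crude gamma-shaped impulse response (illustrative only, not the canonical HRF) and two brief made-up inputs:

```python
# Linearity (superposition): the response to a*x1 + b*x2 equals a*r1 + b*r2 in a linear system.
import numpy as np

t = np.arange(0, 20, 0.5)
hrf = t * np.exp(-t / 2.0)           # crude gamma-like impulse response (illustrative only)
x1 = np.zeros_like(t); x1[2] = 1     # brief stimulus at one time point
x2 = np.zeros_like(t); x2[10] = 1    # brief stimulus at a later time point

r1 = np.convolve(x1, hrf)[:len(t)]   # response to input 1 alone
r2 = np.convolve(x2, hrf)[:len(t)]   # response to input 2 alone
a, b = 2.0, 0.5
combined = np.convolve(a * x1 + b * x2, hrf)[:len(t)]

print(np.allclose(combined, a * r1 + b * r2))  # True: superposition holds for the model
```
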
7
Q

Convolution

A
  • how to combine the responses to individual brief inputs into a prediction of the response to an arbitrary input (sketch below)
  • assumption: linear time-invariant (LTI) system
  • input time series * impulse response function = output
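
A minimal sketch of this operation in numpy, assuming a boxcar stimulus time series and an illustrative gamma-shaped HRF (not the canonical double-gamma used by analysis packages):

```python
# Convolution of an input (stimulus) time series with an impulse response function (HRF).
import numpy as np

tr = 1.0                                    # assumed sampling interval in seconds
t = np.arange(0, 30, tr)
hrf = (t ** 5) * np.exp(-t)                 # illustrative gamma-shaped HRF
hrf /= hrf.sum()                            # normalise so the output scale stays interpretable

stimulus = np.zeros(100)
stimulus[10:20] = 1                         # first stimulation block
stimulus[50:60] = 1                         # second stimulation block

predicted = np.convolve(stimulus, hrf)[:len(stimulus)]  # delayed, smoothed version of the input
```
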
8
Q

The model

A

generate the model (regressors):
- experimental design (expected time series at the neural level) –> modelled response (regressor) under the assumption of linearity

fit the reference model (see the sketch below):
y_t = beta * x_t + epsilon_t
(data = linear weighting parameter * reference function + residual noise)
- choose beta to minimise the sum of squared differences (not too high/low)
- also works with multiple reference functions (beta_1 * x_1 + beta_2 * x_2 + …)
- display convention: design matrix (all x collected in one matrix, multiplied by a vector of all betas) –> we can simply keep adding columns to the design matrix
- assumption: residuals are normally distributed
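
A minimal sketch of such a fit under assumed conditions: one task regressor (boxcar convolved with an illustrative HRF) plus a constant column for the mean, beta estimated by ordinary least squares, all data simulated:

```python
# GLM fit by ordinary least squares: y = X @ beta + residuals.
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(0, 30)
hrf = (t ** 5) * np.exp(-t); hrf = hrf / hrf.sum()   # illustrative HRF

stim = np.zeros(n)
stim[20:30] = stim[80:90] = stim[140:150] = 1        # assumed stimulation blocks
regressor = np.convolve(stim, hrf)[:n]               # modelled response under linearity

X = np.column_stack([regressor, np.ones(n)])         # design matrix: task regressor + mean
y = 2.5 * regressor + 100 + rng.normal(0, 0.3, n)    # simulated voxel time series

beta, rss, rank, sv = np.linalg.lstsq(X, y, rcond=None)  # minimises sum of squared differences
residuals = y - X @ beta
print(beta)   # approximately [2.5, 100]: task effect and mean
```

Adding another reference function is just another column in X; all betas are then estimated jointly.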

9
Q

Other useful regressors

A
  • mean
  • cosines (to filter out slow signal fluctuations/drifts)
  • motion parameters –> voxels near the pivoting point of head rotation are little affected by movement, voxels further away more so; the realignment parameters can be included to account for such movement-related effects
  • finite impulse response (FIR) functions (see the sketch below):
    –> liberate the model from assumptions about the response shape
    –> estimate a separate response for each time bin of a window around the stimulus: an arbitrary response time course –> a flexible model that can capture the signal over time
    –> used within the GLM
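
One way such FIR regressors can be built (a sketch of a common construction, not taken from the slides): one design-matrix column per post-stimulus time bin, marking the scans at that latency after each assumed onset, so the fitted betas trace out the response shape over the window:

```python
# Finite impulse response (FIR) design: one regressor per post-stimulus time bin.
import numpy as np

n_scans = 200
onsets = [20, 80, 140]       # assumed stimulus onsets (in scans)
window = 12                  # number of post-stimulus bins to model

X_fir = np.zeros((n_scans, window))
for lag in range(window):
    for onset in onsets:
        if onset + lag < n_scans:
            X_fir[onset + lag, lag] = 1   # regressor for "lag scans after stimulus onset"

# Each column's beta estimates the mean signal at that latency: a model-free
# response shape, estimated within the GLM framework.
```
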
10
Q

signal

A

= fluctuations around the mean grey value of the image
- the typical percent signal change is only around 1%

11
Q

How do we know a given beta estimate reflects β€˜real’ activity and doesn’t just reflect noise?

A

First- and second-level statistics
- testing the mean across the population (is voxel v_i activated by the task?)

Summary statistics approach:
take the parameter estimate from the same position in each subject's brain from the first-level model

Second-level statistics:
- one-sample t-test against a parameter mu_0 (typically 0); see the sketch below
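
A minimal sketch of this second-level test for a single voxel, assuming betas holds one first-level parameter estimate per subject (values simulated here); scipy.stats.ttest_1samp tests the group mean against mu_0 = 0:

```python
# Second-level (group) statistics: one-sample t-test of first-level betas against 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
betas = rng.normal(0.8, 1.0, size=16)   # simulated first-level estimates, one per subject

t_val, p_val = stats.ttest_1samp(betas, popmean=0.0)
print(f"t({len(betas) - 1}) = {t_val:.2f}, p = {p_val:.3g}")
```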

12
Q

statistical parametric map (SPM)

A

= a map showing colour-coded t-values (i.e. statistical parameters) where the t-test is significant, and greyscale anatomy at the other locations (for orientation)

13
Q

Significance and Type I versus Type II errors

A
  • type I error: alpha-error - false positive –> overly liberal
  • type II error: beta-error - false negative –> overly conservative

choose alpha wisely (e.g. 0.05 / 0.01 / 0.001)

14
Q

multiple comparison problem

A
  • more than one channel is tested (multiple voxel locations)
  • probability of a false positive when testing 1 voxel at alpha = 0.05 is p = 0.05
    –> multiple comparisons: probability of false positives versus the survival probability of no false positive at all
  • family-wise error rate = probability of >= 1 false positive with n tests
  • the probability of at least one false positive increases with n –> Bonferroni correction can be applied (alpha_adjusted = alpha / n; very conservative); see the sketch below
  • the voxelwise tests are not independent (see next card)
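
A small worked illustration with an assumed voxel count: family-wise error rate for independent tests and the Bonferroni-adjusted threshold (the independence assumption is exactly what the next card relaxes):

```python
# Family-wise error rate (FWER) for n independent tests at level alpha,
# and the Bonferroni-adjusted per-test threshold.
alpha = 0.05
n_tests = 50_000                       # assumed number of voxels

fwer = 1 - (1 - alpha) ** n_tests      # P(at least one false positive)
alpha_bonferroni = alpha / n_tests     # per-test threshold keeping FWER <= alpha

print(f"FWER without correction: {fwer:.4f}")                 # ~1.0
print(f"Bonferroni-adjusted alpha: {alpha_bonferroni:.1e}")   # 1e-06
```
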
15
Q

Gaussian random field theory

A

aim:
- obtain an estimate of the (in)dependence of the voxelwise tests

Resel (resolution element) = a block of voxels with the same size as the smoothness FWHM
- can be used for a less conservative estimate of the effective number of independent tests (rough sketch below)
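
A rough back-of-the-envelope sketch of the resel idea with assumed voxel size and smoothness; the actual GRF correction uses more than this count, so treat it as an illustration only:

```python
# Rough resel count: search volume divided by the volume of one smoothness-sized block.
n_voxels = 50_000
voxel_size_mm = 3.0                                  # assumed isotropic voxel size
fwhm_mm = 9.0                                        # assumed smoothness (FWHM)

voxels_per_resel = (fwhm_mm / voxel_size_mm) ** 3    # here 3 x 3 x 3 voxels per resel
n_resels = n_voxels / voxels_per_resel

print(f"{n_resels:.0f} resels vs. {n_voxels} voxels")  # far fewer effective independent tests
```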
