Help Flashcards
(30 cards)
What are the limitations of the DDM and how does the LBA address them?
- DDM largely limited to two-choice tasks: Everyday decision making often involves many different options
(Solution: racing accumulator models, e.g. the LBA) - Several accumulators each gather evidence in a race against each other
- First accumulator to reach threshold ‘wins’ and determines the overt response
- Evidence for one option does not necessarily mean evidence against the other option(s)
- Can model any number of choice options
LBA
* Omits nonlinearities of DDM
* Evidence accumulates linearly for all responses, without moment-to-moment variability (within-trial noise)
* Accumulation continues until a response threshold is reached
* The first accumulator to reach threshold triggers the overt response
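The race described above can be sketched in a few lines of Python. This is a toy simulation, not course material: the threshold `b`, start-point range `A`, drift-rate noise `s`, and non-decision time `t0` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lba_trial(drift_means, b=1.0, A=0.5, s=0.25, t0=0.2):
    """Simulate one LBA trial: each accumulator rises linearly from a
    uniform start point; the first to reach threshold b wins.
    (Parameter values are illustrative, not from the card.)"""
    starts = rng.uniform(0, A, size=len(drift_means))   # random start points
    drifts = rng.normal(drift_means, s)                 # between-trial drift noise
    drifts = np.maximum(drifts, 1e-6)                   # keep drifts positive for simplicity
    times = (b - starts) / drifts                       # linear rise, no within-trial noise
    winner = int(np.argmin(times))
    return winner, t0 + times[winner]                   # (choice, response time)

choice, rt = lba_trial([1.0, 0.7])                      # any number of accumulators works
```

Passing a longer list of drift means gives a multi-alternative race, which is exactly what the DDM cannot do.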
Trade-offs of using data from available atlases?
Pros of re-using data: saves a lot of time; you can check the validity of the original researchers' work (which is also good for the original researchers)
Cons of re-using data: risk of systematic bias
HCP (+ others using a similar approach?), pros and cons
HCP = MRI (structural, functional, diffusion), genetics, behavioral testing.
In vivo. Many participants. Probabilistic "to an extent"?
AHEAD = longitudinal, only standard MRI (I think). Also post-mortem?
Brainnetome = dMRI, sMRI, fMRI, brain parcellation algorithms. Probabilistic
Pros: large datasets made available; many participants = more representative + you can analyze inter-individual variation; can also study activity and behavior
Cons: "you have to hope they have what you are looking for"
Post mortem case study approach pros and cons
BigBrain = histology, high-resolution imaging. Only 1 brain, but extremely detailed.
Julich-Brain = 10-20 brains, probabilistic. Histology + MRI
Pros: more resolution because post-mortem (can see individual neurons), no motion artifacts
Cons: few samples; tissue distortions; very time-consuming to do even 1 brain, so hard to do many even if you have many; some (e.g. Julich-Brain) rely on old data
WikiBrainstem
Methodologies: Histology, MRI, literature review (open-access database).
Sample Type: Both post-mortem and in vivo data referenced.
Number of Brains: Many, aggregated from existing studies
What can and cannot be picked up by EEG/MEG
Can:
* Synchronous EPSPs and IPSPs
* Spatially aligned pyramidal cells
* Cortex
Can’t:
* Action potentials
* Neurons that are not spatially aligned (inhibitory interneurons, spiny stellate cells)
* Deep (subcortical) structures
Cons of EEG/MEG
- EEG/MEG is noisy
- Sensitive to artifacts
- Brain activity gives weak signals
- Bad spatial resolution
- “the inverse problem”
ERP
Change in signal amplitude evoked by a stimulus, obtained by averaging across many trials; ERPs vary across the brain
Tasks modulate ERP amplitude and latency
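The trial-averaging idea behind ERPs can be shown with a toy example; the "component" (a Gaussian bump at 300 ms), the noise level, and the trial count are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 0.5, 251)                           # 500 ms epoch, 2 ms steps
erp = 2.0 * np.exp(-((t - 0.3) ** 2) / 0.002)          # toy component peaking at 300 ms
trials = erp + rng.normal(0, 3.0, size=(200, t.size))  # 200 noisy single trials

avg = trials.mean(axis=0)   # averaging: noise cancels out, the ERP remains visible
```

In a single trial the component is buried in noise; in the average of 200 trials the 300 ms peak stands out clearly.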
Frequency domain (EEG)
In EEG you have to take the frequency domain into consideration: frequency bands have distinct functions and roles - different neuronal populations oscillate in different frequency bands
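A sketch of frequency-domain analysis: estimating power per band with Welch's method. The sampling rate, band edges, and the 10 Hz "alpha-like" toy signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                     # 10 s of data
rng = np.random.default_rng(2)
# toy signal: a 10 Hz alpha-like rhythm plus broadband noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

f, psd = welch(x, fs=fs, nperseg=512)            # power spectral density estimate

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {name: psd[(f >= lo) & (f < hi)].sum() for name, (lo, hi) in bands.items()}
dominant = max(power, key=power.get)             # band with the most power
```

Here the alpha band dominates because the toy oscillation sits at 10 Hz; real EEG would show a mixture across bands.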
Two hypotheses for the speed-accuracy trade-off
Two main pathways connect the cortex to the basal ganglia: the cortical-striatal pathway and the cortical-subthalamic pathway.
The striatal hypothesis: emphasis on speed promotes excitatory input from cortex to striatum -> increased baseline activation of the striatum acts to decrease the inhibitory control that the output nuclei of the basal ganglia exert over the brain -> faster but possibly premature responses.
STN hypothesis: emphasis on accuracy promotes excitatory input from cortex (e.g., anterior cingulate cortex) to the STN; increased STN activity may lead to slower and more accurate choices.
Results showed that individual tract strength between pre-SMA and striatum translates into individual differences in the efficacy with which people adjust their response thresholds: supports the STRIATAL hypothesis
Standard space
Standard space refers to a common coordinate system or template used to align and compare brain data from different individuals. It provides a consistent framework for analyzing and sharing neuroimaging results across studies and subjects. Brain atlases created in standard space allow researchers to map brain regions, connectivity, and functions systematically. E.g. MNI space.
Basic MRI setup
Magnet - provides the strong magnetic field, B0 (the protons align with this)
Gradient coils - localize the MR signal by creating small field variations in the x, y, z directions (frequency encoding - MR images are reconstructed from the Fourier domain back into anatomical space)
RF coil - disturbs the protons (tips them out of alignment with B0 + causes their spins to synchronize) via RF waves
What is measured in eye tracking and how?
Saccades - behavioral response
pupil diameter - cognitive load/arousal
Gaze fixation direction & time - which visual cues are being picked up by the participants
Video-based eye tracking systems measure gaze direction and pupil diameter via reflection of infrared light
Pros and cons of UHF MRI + challenges
Pros:
- Better resolution, more structures visible
- Fast translation potential to clinical imaging
- Stronger links between MR images and biophysics (qMRI)
Cons:
- Many technical problems with UHF (more noise and artifacts)
What do you measure in MRI?
Proton density: Tissues with high hydrogen content (e.g., water and fat) emit stronger signals, while tissues with low hydrogen content (e.g., bone) emit weaker signals.
T1 Relaxation Time (Longitudinal Relaxation): The time it takes for excited protons to realign with the main magnetic field after the RF pulse is turned off.
T2 Relaxation Time (Transverse Relaxation): The time it takes for the protons’ spins to lose coherence (dephase) in the transverse plane after excitation.
T2* Relaxation: A variant of T2 relaxation that includes inhomogeneities in the magnetic field, often used in functional MRI (fMRI). T2* effects are used to detect changes in blood oxygenation levels in fMRI, among other applications.
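The relaxation times above enter the measured signal through exponential decay, e.g. S = S0 · exp(−TE / T2*) for transverse decay at echo time TE. A toy calculation (the TE and T2* values below are illustrative, not tissue-accurate):

```python
import math

def mr_signal(S0, TE, T2star):
    """Transverse signal remaining at echo time TE: S = S0 * exp(-TE / T2*)."""
    return S0 * math.exp(-TE / T2star)

# Toy values (ms): deoxygenated blood shortens T2*, so at the same TE
# the remaining signal is weaker -- the basis of the BOLD contrast.
oxy   = mr_signal(100, TE=30, T2star=60)   # longer T2* (more oxygenated blood)
deoxy = mr_signal(100, TE=30, T2star=40)   # shorter T2* (more deoxygenated blood)
```

Same tissue, same TE: only the T2* difference drives the signal difference, which is what fMRI exploits.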
What do T1- and T2- weighted imaging capture?
T1-weighted images highlight fat and certain structures with shorter T1 times, such as white matter in the brain. T2-weighted images highlight water content, making them useful for detecting edema, inflammation, or fluid-filled structures
What does BOLD measure? What neural process does BOLD most closely correspond to?
Ratio of oxygenated to deoxygenated blood - a metabolic process (but driven by neural activity). BOLD corresponds most closely to local field potentials.
Methods for myelin mapping
- In animals:
- inject a tracer
- Fluorescent protein
- In humans:
- Polarized light imaging
- Diffusion Weighted Imaging
qMRI
Instead of contrasts, qMRI generates exact numbers
- Objective, consistent numbers
- Less dependent on scanner settings or visual interpretation
- Quantitative parameters of tissue (e.g., T1 relaxation, iron content) often based on multiple weighted images
- Biological information on tissue microstructure
- Standardized values comparable across scanners, studies, populations
Diamagnetic and paramagnetic
- Areas with more oxygenated hemoglobin = diamagnetic
+ Doesn’t disturb the magnetic field
+ Less signal distortion (slower T2* decay) = a stronger BOLD signal
- Areas with more deoxygenated hemoglobin = paramagnetic
+ Creates field distortions
+ More signal distortion (faster T2* decay) = a weaker BOLD signal
Diamagnetic = no unpaired electrons -> no magnetic field inhomogeneities / no interaction with the external magnetic field -> preserved phase coherence -> more uniform magnetic environment = a stronger MRI signal
Paramagnetic = unpaired electrons -> interacts with the external magnetic field -> localized magnetic field distortions -> reduced phase coherence -> faster T2* signal decay
2 main experiment designs for fMRI
Block Design
Subjects perform a single task continuously within each block (30s to a few minutes)
BOLD builds up, generating strong, consistent responses
Each block followed by period of rest/fixation = baseline
Trials within blocks are averaged to create a broader BOLD signal
Event related design
Present brief, separate tasks/stimuli in randomized order
* Separated by variable intervals (jittering) to isolate the brain’s response to each specific event
= rich info about timing and dynamics of brain responses to specific events
Six steps of fMRI preprocessing
Correct scan limitations:
1. Slice-time correction: align all slices to one reference slice time
2. Motion correction: realign brain images to match a reference position
Compare individuals/groups:
3. Co-registration: map brain activity from functional scans onto structural scans by aligning the data
4. Normalization: align each participant’s brain data to a standard template (register to standard space)
Reduce noise:
5. Spatial smoothing: average the signal across neighboring voxels using a Gaussian filter (reduces random noise and gives a view of the ‘true’ signal)
6. Temporal filtering: remove low-frequency signal drifts (a hardware problem that makes the signal increase over time) = a stable signal that is closer to zero on average
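Step 5 (spatial smoothing) can be sketched with a 3-D Gaussian filter. The FWHM-to-sigma conversion is standard; the volume size, the 6-voxel FWHM, and the simulated "active" cluster are arbitrary example choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
vol = rng.standard_normal((32, 32, 24))   # toy 3-D functional volume (noise only)
vol[14:18, 14:18, 10:14] += 5.0           # small simulated "active" cluster

# Standard FWHM -> sigma conversion; a 6-voxel FWHM is an arbitrary example here
fwhm = 6.0
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
smoothed = gaussian_filter(vol, sigma=sigma)  # averages across neighboring voxels
```

Smoothing lowers the voxel-to-voxel noise while the cluster survives as a blurred blob, which is exactly the "reduce random noise, keep the true signal" trade-off from the card.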
fMRI data analysis + output
GLM steps (event-related design):
1. Generate predicted BOLD:
Convolve stimulus timing with the HRF to get the expected BOLD response.
2. Fit model:
OLS regression fits the predicted BOLD to the actual voxel signals.
Produces beta values for stimulus responses and baseline activity.
3. Test significance:
Apply t-tests to the beta values to identify significantly responding voxels.
Output:
- PE (Parameter Estimate) images: show voxel responses to each stimulus.
- COPE (Contrast of PEs) images: highlight differences between stimulus conditions.
Block design: convolve the whole time block instead of discrete events.
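The GLM steps above can be sketched on simulated data. Everything here is invented for illustration: the HRF shape (a toy double-gamma-like function, not the canonical SPM HRF), the TR, the event spacing, and the "true" beta values.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)
TR, n_scans = 2.0, 200                              # repetition time (s), number of volumes

def hrf(t):
    """Toy double-gamma-like HRF (illustrative only)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

# Step 1: predicted BOLD = stimulus onsets convolved with the HRF
stim = np.zeros(n_scans)
stim[::20] = 1.0                                    # one event every 40 s
kernel = hrf(np.arange(0, 30, TR))                  # 30 s HRF kernel
pred = np.convolve(stim, kernel)[:n_scans]
pred /= pred.max()                                  # scale predictor to peak 1

# Step 2: OLS fit; design matrix = [predictor, baseline]
X = np.column_stack([pred, np.ones(n_scans)])
y = 2.5 * pred + 1.0 + rng.normal(0, 0.3, n_scans)  # simulated voxel time course
beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # beta[0] ~ stimulus PE, beta[1] ~ baseline
```

Step 3 would then t-test the betas across voxels; a COPE is simply a contrast (e.g. a difference) of such PEs between conditions.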
EEG preprocessing
Conversion:
Convert raw EEG data from the proprietary format of the recording system into a standard format (e.g., EDF, BDF, or BrainVision) compatible with analysis software.
Filtering:
Apply bandpass filtering to retain frequencies of interest (e.g., 0.1–50 Hz) and remove unwanted noise like DC drift and high-frequency interference. A notch filter may also be used to remove powerline noise (e.g., 50/60 Hz).
Down-Sampling:
Reduce the sampling rate of the data (e.g., from 1000 Hz to 250 Hz) to decrease file size and processing time, while retaining sufficient temporal resolution for analysis.
Epoching:
Segment continuous EEG data into discrete time windows (epochs) around specific events (e.g., stimulus onset) based on experimental design. These epochs are used for time-locked analyses.
(Re-referencing):
Adjust signals by re-referencing to a new reference electrode or the average of all electrodes to improve signal-to-noise ratio.
Artifact Rejection:
Identify and remove noisy segments or correct artifacts (e.g., eye blinks, muscle noise) using automatic or manual methods to ensure cleaner data for analysis.
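Several of these steps (filtering, down-sampling, epoching) can be sketched with SciPy on a toy single-channel recording; the sampling rates and filter band follow the card's examples, while the event positions and epoch window are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

fs = 1000.0                                       # original sampling rate (Hz)
rng = np.random.default_rng(5)
raw = rng.standard_normal(int(fs * 10))           # 10 s of toy single-channel EEG

# Filtering: 0.1-50 Hz bandpass, applied zero-phase
b, a = butter(4, [0.1, 50], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, raw)

# Down-sampling: 1000 Hz -> 250 Hz (decimate applies its own anti-alias filter)
down = decimate(filtered, 4)
fs_new = fs / 4

# Epoching: -200 ms to +500 ms around each event
events = np.array([500, 1000, 1500])              # toy event indices at the 250 Hz rate
pre, post = int(0.2 * fs_new), int(0.5 * fs_new)
epochs = np.stack([down[e - pre:e + post] for e in events])
```

Re-referencing and artifact rejection would follow on the epoched data; dedicated toolboxes (e.g. MNE-Python) wrap this whole pipeline, but the underlying operations are the ones shown.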