Reading & the Brain Flashcards

1
Q

COHEN ET AL. (2000): ‘VISUAL WORD FORM’ AREA

A
  • left ventral occipito-temporal cortex
  • “brain letter box”/gateway to the reading system
  • activated specifically by letter strings that are acceptable in the language (e.g. NGTH but not TGNH in English) & by existing words (Glezer et al., 2015)
  • the area’s proximity to other regions coding faces/objects supports the idea that the brain perceives letters/words as complex visual objects
2
Q

WHY IS THE RIGHT VISUAL FIELD CODED BY THE LEFT HEMISPHERE (AND VICE VERSA)?

A
  • the inner (nasal) half of each retina sends info across to the opposite hemisphere; the outer (temporal) half sends info to the hemisphere on the same side
  • i.e. whatever is presented in the right hemifield -> left hemisphere (and vice versa); the split follows the hemifield, not the eye
  • this matters because info arriving at the right occipital pole must cross via the corpus callosum to reach the brain letter box/visual word form area (which normally develops in the left hemisphere)
3
Q

COHEN ET AL. (2000): VISUAL STAGES

A

S1) Processing restricted to the contralateral hemisphere (V4; occipital pole)
S2) Info transfers from the occipital poles to the ventral region of the left occipito-temporal cortex (visual word form area); info from the right occipital pole crosses via the corpus callosum (the bridge between hemispheres), whereas info already in the left hemisphere goes straight there

4
Q

COHEN ET AL. (2004)

A
  • children’s reading networks are more flexible than adults’
  • after an early-age lesion, the visual word form area can relocate to the right hemisphere
  • the highly plastic nature of the young brain allows the visual word form area to develop in the right hemisphere (when the corresponding left-hemisphere region is no longer available)
5
Q

MARINKOVIC ET AL. (2003)

A
  • looked at time-course of visual word processing using magneto-encephalography (MEG)
  • estimated peaks of cortical activity & progression over time
  • reading = activation starts in both occipital poles
  • 170ms = shifts to left occipito-temporal region
  • 230ms = activity explodes in both temporal lobe regions
  • 300ms = extends over prefrontal & other temporal regions, esp. in the left hemisphere, before activity partly falls back to posterior visual areas (occipital pole)
6
Q

CATANI ET AL. (2003)

A
  • U fibres (occipito-temporal projection system, OTPS) = transport info “port-to-port”; located laterally to the ILF; connect adjacent gyri of the lateral occipito-temporal cortices to form the OTPS
  • inferior longitudinal fasciculus (ILF) fibres = work like motorways (long-distance transfer)
  • both are present in each hemisphere
  • particularly important in the left hemisphere (reading system) for transferring info from ventral occipito-temporal regions (VWFA) -> posterior frontal lobe areas/temporal regions
7
Q

GLEZER ET AL. (2015): PROCEDURE

A

VENTRAL OCCIPITO-TEMPORAL REGION (VWFA)
- repetition suppression = stimulus repetition leads to reduced neural response
- if the VWFA only codes sublexical info about familiar letter combinations (e.g. “ght” but not “htg”) -> the more similar the 2nd stimulus (S2) is to the 1st (S1), the weaker the neural response (graded suppression)
- i.e. vight-vight (same) < pight-vight (1 letter different) < falm-vight (different)
- BUT if the VWFA contains neuron populations each coding a whole familiar word, then repetition suppression should be abolished by changing just 1 letter between 2 real words (this is what they found!)
- i.e. right-right (same) < light-right (1 letter different) = calm-right (different); see the sketch below
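A minimal Python sketch of the two predicted patterns, using invented response values (NOT Glezer et al.'s data), just to make the graded VS all-or-none logic concrete:

```python
# Hypothetical response values to the 2nd item of each prime-target pair
# (lower = more repetition suppression). Numbers are illustrative only.
predictions = {
    # sublexical coding -> suppression graded by orthographic overlap
    "sublexical_coding": {"same": 0.2, "one_letter_diff": 0.6, "different": 1.0},
    # whole-word coding -> all-or-none: any letter change abolishes suppression
    "whole_word_coding": {"same": 0.2, "one_letter_diff": 1.0, "different": 1.0},
}

for account, p in predictions.items():
    graded = p["same"] < p["one_letter_diff"] < p["different"]
    all_or_none = p["same"] < p["one_letter_diff"] == p["different"]
    print(f"{account}: graded={graded}, all-or-none={all_or_none}")
```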

8
Q

GLEZER ET AL. (2015): RESULTS

A
  • the VWFA contains neuron populations each coding a familiar word; repetition suppression was abolished by changing 1 letter between 2 real words
  • pseudoword pairs showed a graded effect; real words & trained pseudowords showed all-or-none repetition suppression
9
Q

TAYLOR ET AL. (2012): BACKGROUND

A
  • cognitive models make predictions about functional overlap between brain regions involved in reading
  • word > nonword
  • irregular > regular
  • i.e. brain regions sensitive to such contrasts should correspond to the model’s components (e.g. input lexicon, grapheme-to-phoneme transcoding)
10
Q

TAYLOR ET AL. (2012): PILOT PROCEDURE

A
  • examined whether 36 neuroimaging studies of reading pointed to the same brain regions along 2 main contrasting dimensions of the DRC (dual-route cascaded) model:
    1) lexical status (ie. words VS nonwords aka. is stimulus part of LTM?)
    2) regularity (ie. regular/irregular aka. can the word be read correctly by both routes?)
  • both dimensions provide contrasts that distinguish between types of acquired dyslexia, as cognitive models were partly derived from observations of reading disorders
  • distinguished between engagement VS effort in how these dimensions translate into the BOLD (blood-oxygen-level-dependent) signal (i.e. the amount of oxygen needed by a region)
11
Q

TAYLOR ET AL. (2012): PILOT METHODOLOGY

A
  • a peak in the BOLD signal does NOT mean a region “likes” the stimulus; it can instead mean the region is having a hard time processing it
  • engagement = whether the brain region in question can deal with the stimulus at all; i.e. the capacity of the stimulus to evoke knowledge in that region
  • effort = the amount of resources/fuel the region requires to code/process the stimulus once it is engaged
12
Q

TAYLOR ET AL. (2012): PILOT

A
  • first tested if distinction between engagement/effort made sense in DRC
  • checked its relevance by looking at activity in the input lexicon of the computerised version of the model
  • results obtained by giving computer-implemented version of dual-route model word/nonword list which model had to recognise/reject
  • words in the input lexicon earn points in proportion to how well they match the stimulus (i.e. letters in the correct positions); the more points they earn & the more frequent they are in the language, the better they can deplete their competitors; a word is recognised once its activation level is higher than its competitors’ by a minimum distance (see the sketch below)
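A toy Python sketch of that recognition dynamic. This is NOT the Coltheart et al. DRC code; the mini-lexicon, frequencies, rates, threshold & minimum distance are all invented, purely to illustrate the point-earning / competitor-depletion / minimum-distance idea:

```python
# Toy input-lexicon dynamics: activation grows with positional letter overlap
# and word frequency, the leader depletes its competitors, and a word is
# "recognised" once it is active enough AND sufficiently ahead of its rivals.
# All numbers are illustrative, not values from the DRC model.

LEXICON = {"right": 0.9, "night": 0.7, "light": 0.6, "pint": 0.2}  # word: frequency (0-1)
THRESHOLD = 0.9      # activation a candidate must reach
MIN_DISTANCE = 0.15  # lead required over the best competitor

def letter_match(word: str, stimulus: str) -> float:
    """Proportion of letters in the correct position (crude positional match)."""
    return sum(a == b for a, b in zip(word, stimulus)) / max(len(word), len(stimulus))

def recognise(stimulus: str, max_cycles: int = 500):
    activation = {w: 0.0 for w in LEXICON}
    for cycle in range(1, max_cycles + 1):
        for word, freq in LEXICON.items():
            ceiling = letter_match(word, stimulus)   # how far this entry can climb
            rate = 0.05 * (0.5 + 0.5 * freq)         # frequent words climb faster
            activation[word] += rate * (ceiling - activation[word])
        leader = max(activation, key=activation.get)
        for word in activation:                      # leader depletes its competitors
            if word != leader:
                activation[word] = max(0.0, activation[word] - 0.01 * activation[leader])
        runner_up = max(v for w, v in activation.items() if w != leader)
        if activation[leader] >= THRESHOLD and activation[leader] - runner_up >= MIN_DISTANCE:
            return leader, cycle
    return None, max_cycles

print(recognise("right"))   # recognised: perfect match + high frequency
print(recognise("pight"))   # pseudoword: no entry reaches threshold -> (None, 500)
```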
13
Q

TAYLOR ET AL. (2012): PILOT RESULTS

A
  • confirmed the engagement/effort distinction is valid
  • existing words generate stronger engagement than unknown words (pseudowords)
  • it takes longer for low-frequency strings to reach the same activity level as high-frequency words
  • activity also remains in that region of the system for longer for low- than high-frequency words; altogether this reflects that recognition is not easy for low-frequency words
14
Q

THE SUBTRACTION LOGIC: WORDS > PSEUDOWORDS

A
  • to isolate regions specifically involved in recognising existing words:
    1) take brain activity elicited by word stimuli
    2) subtract from it the activity elicited by unknown words
  • i.e. this removes the activity common to processing both kinds of stimuli from the brain map generated by existing words; what remains is the neural activity specific to lexical processing (the lexical route); see the sketch below
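A minimal numpy sketch of the subtraction, with made-up numbers (not real fMRI data) in which only one “voxel” responds more to words than pseudowords:

```python
import numpy as np

# Fabricated toy data: rows = trials, columns = voxels; values stand in for BOLD amplitude.
rng = np.random.default_rng(seed=0)
word_trials       = rng.normal(loc=[1.0, 1.0, 2.0, 1.0, 1.5], scale=0.1, size=(20, 5))
pseudoword_trials = rng.normal(loc=[1.0, 1.0, 1.0, 1.0, 1.5], scale=0.1, size=(20, 5))

# words - pseudowords: activity shared by both stimulus types cancels out,
# leaving only the voxel(s) whose response is specific to existing words (here voxel 2).
contrast = word_trials.mean(axis=0) - pseudoword_trials.mean(axis=0)
print(np.round(contrast, 2))                                  # roughly [0. 0. 1. 0. 0.]
print("word-specific voxels:", np.where(contrast > 0.5)[0])   # -> [2]
```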
15
Q

DRC MODEL: SYSTEMATIC APPROACH I

A
  • the lexical pathway is engaged by words BUT not (or only weakly) by pseudoword strings
  • it is taxed more by low-frequency than by high-frequency words
  • aka. 2 contrasts pointing towards:
    1) lexical route = words - pseudowords
    2) lexical route = low-frequency words - high-frequency words
16
Q

DRC MODEL: SYSTEMATIC APPROACH II

A
  • irregular words should engage the lexicon more than non-orthographic strings (e.g. &%^%)
  • the latter are not represented in lexical memory & obviously won’t activate any word neighbours
  • the mental dictionary should be taxed more by irregular than regular words, esp. words that are less frequent in the language (faint lexical representations)
  • lexical memory is the only way to reach exception words (i.e. low-frequency irregular words) -> greater effort
  • aka. 2 contrasts pointing towards:
    1) lexicon = irregular words - non-orthographic strings
    2) lexicon = (low-frequency) irregular - regular
17
Q

DRC MODEL: REGULARITY CONTRASTS

A
  • points specifically to lexicon
  • excludes the semantic system, as the model does not assume greater engagement of semantics for irregular than regular words
18
Q

NON-LEXICAL PRINT-TO-SOUND CONVERSION ROUTE

A
  • contrasts targeting the print-to-sound conversion route (PTSCR)
  • should be engaged by both words & pseudowords BUT not by non-orthographic strings (not linguistically relevant)
  • should also be taxed more by pseudowords than words: even though pseudowords engage the non-lexical route, the route is less accustomed to them than to regular words; the same applies to the phoneme output system (i.e. inner speech)
  • words are also taken care of by the lexical route, so for words the engagement is spread more evenly across all parts of the system
19
Q

NON-LEXICAL ROUTE & PHONEME OUTPUT SYSTEM

A
  • both should struggle with irregular words, as the two routes of the DRC produce 2 conflicting responses
  • with this conflict, activity is likely to linger particularly at the level of the phoneme output system, with some reverberation upwards into the print-to-sound conversion region
20
Q

LEXICO-SEMANTIC PATHWAY

A

1) words - pseudowords
2) low-frequency - high-frequency

21
Q

INPUT LEXICON

A

1) irregular words - non-orthographic strings
2) (low-frequency) irregular words - (high-frequency) regular words

22
Q

CONVERSION ROUTE & PHONEMIC OUTPUT BUFFER

A

1) pseudowords - non-orthographic strings
2) irregular words - non-orthographic strings
3) pseudowords - regular words
4) irregular words - regular words

23
Q

SUMMARY I

A
  • 1 component may be isolated by multiple contrasts:
    LEXICO-SEMANTIC PATHWAY
    1) engagement = words - pseudowords
    2) effort = low-frequency - high-frequency words
    INPUT LEXICON
    1) engagement = irregular words - non-orthographic strings
    2) effort = (low-F) irregular words - (high-F) regular words
    PRINT-TO-SOUND CONVERSION/PHONEME OUTPUT BUFFER
    1) engagement = pseudowords/irregular - non-orthographic strings
    2) effort = pseudowords/irregular - regular words
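The same mapping written out as a plain Python lookup table (labels are this deck's shorthand, not terms taken verbatim from Taylor et al.):

```python
# Contrast-to-component lookup, mirroring the summary above.
CONTRASTS = {
    "lexico-semantic pathway": {
        "engagement": "words - pseudowords",
        "effort": "low-frequency words - high-frequency words",
    },
    "input lexicon": {
        "engagement": "irregular words - non-orthographic strings",
        "effort": "(low-frequency) irregular words - (high-frequency) regular words",
    },
    "print-to-sound conversion / phoneme output buffer": {
        "engagement": "pseudowords or irregular words - non-orthographic strings",
        "effort": "pseudowords or irregular words - regular words",
    },
}

for component, kinds in CONTRASTS.items():
    for kind, contrast in kinds.items():
        print(f"{component} | {kind} | {contrast}")
```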
24
Q

TAYLOR ET AL. (2012): PROCEDURE

A
  • 36 fMRI/PET studies included in meta-analysis
  • each study used the same task on both words & pseudowords, or on both regular & irregular words
    TASKS
  • reading aloud/silently
  • lexical decision (ie. is “brain” a word?)
  • visual feature detection (ie. is there an “x” in “brain”?)
  • phonological lexical decision (ie. does “brane” sound like a word?)
  • stimulus repetition detection (ie. “brain” x2)
  • rhyme judgement (ie. do “brain” & “train” rhyme?)
25
Q

TAYLOR ET AL. (2012): RESULTS

A
  • print-to-sound conversion = inferior parietal cortex
  • phoneme system = inferior frontal gyrus
  • whether or not a spoken response was required -> further separation between the input (orthographic) VS output (phonological) lexicons & between the grapheme-to-phoneme conversion system VS the phoneme system