Week 1 Chapter 4 Flashcards
Agnosia
Difficulty recognising objects due to brain damage: the stimulus cannot be recognised despite intact sensation (may be visual, auditory, tactile agnosia, etc.).
Apperceptive Agnosia
Inability to assemble the various components of an object into an integrated, perceived whole.
Associative agnosia
The person can see the object but cannot make sense of it or appreciate its function, even though they may have encountered it before.
Feature Net
Model explaining how the detection of features might activate detectors (e.g. when seeing a letter or part of a letter). When a detector receives input, its activation level increases; sufficient activation triggers the response threshold, the detector fires, and this increases the activation of the next detector in the chain.
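A minimal Python sketch of this chain of detectors; the detector names, thresholds, and activation amounts are invented for illustration, not taken from the text:

```python
class Detector:
    def __init__(self, name, threshold, downstream=None):
        self.name = name
        self.threshold = threshold          # response threshold
        self.activation = 0.0               # current activation level
        self.downstream = downstream or []  # detectors fed by this one

    def receive(self, amount):
        # Input raises the activation level; with sufficient activation the
        # detector fires and passes activation to the next detectors in line.
        self.activation += amount
        if self.activation >= self.threshold:
            print(self.name, "fired")
            for d in self.downstream:
                d.receive(1.0)

# Two invented feature detectors feeding a letter detector.
letter_A = Detector("letter A", threshold=2.0)
horizontal_bar = Detector("horizontal bar", threshold=1.0, downstream=[letter_A])
diagonal_bar = Detector("diagonal bar", threshold=1.0, downstream=[letter_A])

horizontal_bar.receive(1.0)  # fires; letter A now at activation 1.0
diagonal_bar.receive(1.0)    # fires; letter A reaches 2.0 and fires too
```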
Lateral Occipital Complex
brain region involved in object recognition
Bottom-up Processing
Also called data-driven processing: recognition/perception driven by the details of the stimulus features.
Top-down Processing
Also called concept-driven processing: less about the features themselves, and more about using context to make sense of the features.
Visual Search Tasks
The subject is asked to examine a display and judge whether a particular item is present or not. As the requested target becomes more complex (e.g. a combination of features rather than a single feature), the time needed to find it increases.
Integrative Agnosia
Arises from damage to the parietal lobe.
Integrative agnosia has symptoms of both apperceptive agnosia and associative agnosia: the person can usually manage to draw an object, but it is very labour-intensive and effortful.
WHAT IS THE EVIDENCE that features play a special role in object recognition?
We are very efficient in simple visual search tasks (e.g. finding a unique item among non-unique items), but become slower when we need to search for a combination of features hidden among other combinations of features.
Tachistoscopic Presentations
A special device (now outdated) that showed a visual stimulus for a precise, short length of time. Computers are now used instead.
Repetition Priming
A stimulus is recognised more easily because it has been viewed recently.
Word Superiority Effect
A letter is actually easier to recognise when it appears within a word. Demonstrated by flashing a letter, then a mask, then asking whether it was option a) or b), compared with flashing a whole word and asking the same question; participants are better in the whole-word condition. Only applies when the words are real, not gobbledygook.
Well-Formedness
Applies to a combination of a few letters. Well-formed = the combination is a very common letter string (or part of one) in the language studied. As well-formedness increases, so does the Word Superiority Effect, i.e. even an incomplete word, if it is a very common string, will be easily recognised.
Summary: ease of recognition (and recognition mistakes)
Words are easier to recognise than single letters, and well-known words or even well-formed strings more so. Uncommon misspellings that form non-words close to common words are frequently misread as the common word.
Desirable Difficulty
Sometimes, if text is just a little harder to read (due to font, layout, etc.), the reader is forced to pay more attention, and this can be advantageous for comprehension.
Font Effects Summary
a) One study showed that text in a harder-to-read font made readers think the author was less intelligent (readers who struggled to read blamed the author for not being clear).
b) Text in all capitals is harder to read because the letters have less distinct features.
Feature Nets & Activation level
A Feature Net is a model of how letters/words/features might be recognised. Each feature detector may itself be a network of neurons. Information flow is proposed to be bottom-up. A detector's activation level is likely to be influenced by the frequency and recency of its firing.
Response Threshold
In the feature net model, it is thought that with sufficient activation a detector reaches its response threshold and fires (a similar idea to a neuron).
Recency and frequency
In the feature net model, frequent or recent firing is thought to leave a detector's activation level sitting higher, so it will reach threshold with minimal further input.
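A toy numerical illustration of this point; the threshold and resting-level numbers are invented:

```python
# Invented numbers: both detectors have the same threshold, but the
# frequently/recently fired detector starts from a higher resting level.
threshold = 2.0
resting_level = {"rare word": 0.0, "frequent word": 1.5}

weak_input = 1.0
for name, baseline in resting_level.items():
    fires = (baseline + weak_input) >= threshold
    print(name, "fires" if fires else "stays below threshold")
# rare word stays below threshold; frequent word fires on the same weak input
```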
Bigram Detector
A mid-level detector in a feature net that detects pairs of letters.
Visual Processing Pathway Theory
Proposed to consist of layers of detectors: feature detectors, letter detectors, bigram detectors, and word detectors.
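A rough sketch of how such a layered pathway could be wired; all of the feature, letter, and bigram assignments below are invented purely for illustration:

```python
# Invented wiring: features -> letters -> bigrams -> words.
letters = {"C": {"open curve"},
           "O": {"closed curve"},
           "R": {"vertical bar", "closed curve", "diagonal bar"},
           "N": {"vertical bar", "diagonal bar"}}
bigrams = {"CO": ("C", "O"), "OR": ("O", "R"), "RN": ("R", "N")}
words = {"CORN": ("CO", "OR", "RN")}

def recognise(features):
    # A letter detector fires when all of its feature detectors have fired.
    fired_letters = {l for l, needed in letters.items() if needed <= features}
    # A bigram detector fires when both of its letter detectors have fired.
    fired_bigrams = {bg for bg, (a, b) in bigrams.items()
                     if a in fired_letters and b in fired_letters}
    # A word detector's activation is the number of its bigrams that fired.
    return {w: sum(bg in fired_bigrams for bg in parts)
            for w, parts in words.items()}

print(recognise({"open curve", "closed curve", "vertical bar", "diagonal bar"}))
# {'CORN': 3}
```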
Recognition Errors
Because the feature net is biased toward frequent, well-formed patterns, it makes predictable errors: irregular inputs such as uncommon misspellings are often "corrected" and misread as the similar common word.
Distributed Representation (as opposed to local representation)
The Feature Net seems capable of “deciding” or “inferring” corrections as necessary. Yet it is proposed that the Net does not actually “know” anything about language rules (the knowledge is not stored locally in any single detector); rather, it operates purely automatically over vast networks of interconnections, and the “decision” as to what is right is based on the relative activations of the collective (distributed knowledge).
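A small hypothetical sketch of this "distributed knowledge" point (the bigram wiring and the degraded input are invented): no detector stores a spelling rule, yet the net "chooses" a word purely from relative activation.

```python
# Hypothetical bigram-to-word wiring; nothing here encodes spelling rules.
word_bigrams = {
    "CORN": ("CO", "OR", "RN"),
    "CORK": ("CO", "OR", "RK"),
}

def choose_word(fired_bigrams):
    # The "decision" is simply whichever word detector collects the most
    # activation from the bigram detectors that happened to fire.
    activation = {word: sum(bg in fired_bigrams for bg in parts)
                  for word, parts in word_bigrams.items()}
    return max(activation, key=activation.get), activation

# Degraded input: the middle of the word was unclear, so only the "CO" and
# "RN" bigram detectors fired. CORN still wins on relative activation alone.
print(choose_word({"CO", "RN"}))   # ('CORN', {'CORN': 2, 'CORK': 1})
```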