Flashcards in Deciding category membership Deck (67):
What did Tienson (1988) state?
That an account of general terms must explain how we are able to apply words we know to new objects - how we decide category membership. There must be objective characteristics that we recognise. For example, people are likely to identify a triangle with its corner cut off as a triangle, even though it technically isn't.
What are the different kinds of mental representations of objects?
Abstract and veridical representations. Rules are abstractions: they compress category information into a compact form and are not based on memory or similarity. Exemplars are veridical: they are memory- and similarity-based. Prototypes (the average of a category's features) combine both: they are abstractions in that individual exemplars are discarded, but veridical in that the prototype is the average of real examples.
How does Medin & Schaffer (1978)'s context theory explain deciding category membership?
We make a series of comparisons to stored exemplars from both the category containing a similar exemplar and alternatives. It doesn't depend on an explicit logical classification rule or any general category representation (e.g. prototype theory).
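The core of the context model is a multiplicative similarity rule: a probe is compared to every stored exemplar, and each mismatching feature scales similarity down by a parameter between 0 and 1. A minimal sketch, where the binary stimuli, the mismatch parameter s, and the two-category setup are illustrative assumptions rather than Medin & Schaffer's original materials:

```python
# Sketch of the context model's multiplicative similarity rule.
# Stimuli are tuples of discrete feature values; s (0 < s < 1) is an
# assumed mismatch parameter: each mismatching feature multiplies
# similarity by s, so similarity falls off multiplicatively.

def similarity(probe, exemplar, s=0.3):
    mismatches = sum(p != e for p, e in zip(probe, exemplar))
    return s ** mismatches  # 1.0 for an identical exemplar

def classify(probe, categories, s=0.3):
    # Summed similarity to each category's stored exemplars; choice
    # probability follows the relative summed similarity.
    sums = {name: sum(similarity(probe, ex, s) for ex in exemplars)
            for name, exemplars in categories.items()}
    total = sum(sums.values())
    return {name: val / total for name, val in sums.items()}

# Hypothetical two-category example with binary features:
cats = {"A": [(1, 1, 1, 0), (1, 0, 1, 0)],
        "B": [(0, 0, 0, 1), (0, 1, 0, 1)]}
probs = classify((1, 1, 1, 1), cats)
```

Note that no prototype or explicit rule is ever computed: the decision rests entirely on stored exemplars.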
What are examples of similarity-based theories, and how do they work?
Prototype/exemplar theories. Similarity is assumed to depend upon the proportion of common vs. distinctive features between 2 objects, of which the second is either a prototype or a set of exemplars.
What is the prototype in prototype theories?
The prototype is an idealised version of the category (irrespective of memory).
What is the difference between similarity judgements in prototype and exemplar models?
In prototype models, the similarity judgement is made based on one category prototype (from central tendency), whereas in exemplar models it is based on comparison between many exemplars from memory.
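The contrast above can be sketched in a few lines; the feature vectors and the distance-based similarity functions are illustrative assumptions, not a specific published model:

```python
# Prototype model: one comparison, probe vs. the category's central
# tendency. Exemplar model: many comparisons, summed similarity to
# every stored item. Similarity here decays with Euclidean distance.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prototype_score(probe, exemplars):
    # Compare against the mean of the studied exemplars (the prototype).
    n = len(exemplars)
    proto = [sum(col) / n for col in zip(*exemplars)]
    return -dist(probe, proto)

def exemplar_score(probe, exemplars):
    # Sum similarity over every stored exemplar in memory.
    return sum(math.exp(-dist(probe, ex)) for ex in exemplars)

category = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
# The unstudied central point (1, 1) gets the best prototype score,
# mirroring the prototype-enhancement pattern described below.
```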
Why is category learning studied in laboratory settings?
Because category learning must be studied with novel materials: pre-existing knowledge of the categories would otherwise affect participants' performance.
What did Posner & Keele (1968, 1970) do?
Studied prototype abstraction by presenting participants with dot patterns that belonged to one of three categories, each of which was formed by generating a random pattern (the prototype) and distorting it to form exemplars.
Participants studied the high distortions of the prototype during the study phase, and then were tested on (had to classify) prototypes, low and high distortions, and random dots.
What did Posner & Keele (1968, 1970) find?
Classification accuracy by item type:
- Prototypes: 85%
- Low prototype distortions: 63%
- High prototype distortions: 60%
- Random patterns: 38%
Although participants had never seen the prototypes, they classified prototypes and low distortions more accurately than the highly distorted exemplars they had actually studied.
What do Posner & Keele (1968, 1970)'s findings suggest?
That participants abstract away from exemplars, discard them, and represent a prototype, similarity to which is the basis for novel item categorisation. This supports prototype rather than exemplar models.
What is multidimensional scaling (MDS)?
A set of data analysis techniques which display the structure of distance-like data as a geometrical picture. It pictures the structure of a set of objects from data that approximate the distances (similarity measure) between pairs of the objects.
How is MDS used to distinguish between exemplar and prototype models?
The standard categorisation procedure is run, plus participants rate the similarity of every possible pair of stimuli (1 = most dissimilar, 9 = most similar). The similarity data are submitted to MDS, and the resulting coordinates are entered into both models; summed similarity to category members, non-members, and prototypes is then calculated.
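Classical (metric) MDS can be sketched with a few lines of linear algebra; the four-point configuration below is a made-up example, chosen only so the recovery can be checked against known distances:

```python
# Minimal classical MDS sketch: recover 2-D coordinates from a matrix
# of pairwise dissimilarities (here, exact Euclidean distances between
# four made-up points, so recovery is exact up to rotation/reflection).
import numpy as np

def classical_mds(d, k=2):
    # d: (n, n) symmetric dissimilarity matrix.
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)        # eigh returns ascending order
    order = np.argsort(vals)[::-1][:k]    # keep the k largest
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
coords = classical_mds(d)
# coords preserves the original pairwise distances.
```

With real similarity ratings the input is noisier and non-metric variants are typically used, but the idea is the same: a geometric configuration whose inter-point distances approximate the rated dissimilarities.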
According to MDS studies, which model proved a better fit?
The exemplar model - the prototype model over-predicted correct classifications for all the prototypes and under-predicted for low and high distortions.
How can exemplar theories be tested neuropsychologically?
Anterograde amnesia, characterised by a severe deficit in forming new episodic memories, provides a test between exemplar and prototype theories.
What did Squire and Knowlton (1995) do?
Repeated Posner and Keele's dot-prototype experiment with patient E.P., whose anterograde amnesia was so profound that after 30 testing sessions he did not recognise the examiner and denied ever having been tested. They wanted to know whether he could store test items, and whether he classified according to a prototype or to exemplars: if a prototype effect appears in classification performance, the result cannot be due to the storage of exemplars.
What did Squire and Knowlton (1995) find?
Compared to controls, E.P.'s classification scores were very similar (endorsement still followed the order prototype > low > high > random), yet his recognition scores were much lower. He therefore showed prototype enhancement, demonstrating that it does not require episodic memory. This supports prototype theories.
What did Palmeri & Flanery (1999) do and find?
Simulated 'profound amnesia' in undergraduate participants using a sham subliminal preparation (nothing was actually presented in the study phase), thereby eliminating any exposure to category exemplars. They still found the prototype enhancement effect, despite there being no memories of exemplars from which to extract a prototype.
What do Palmeri & Flanery (1999)'s findings mean?
That Squire and Knowlton (1995) may not provide valid support for prototype models.
What did Rips (1989) do?
Asked participants to consider a circular object exactly halfway in size between two categories, one fixed in size (American quarters) and one variable (pizzas). Participants stated whether the object was more likely to be a pizza or a quarter, or how similar it was to each of the two categories.
What did Rips (1989) find?
Participants said the object was likely to be a member of the variable category but tended to say it was more similar to the fixed category.
What did Rips (1989) argue?
That category knowledge is informed by theoretical (rule-based, symbolically represented) knowledge, not just by similarity.
Define mental rules.
Mental rules are generalised rules about the world which are then applied to specific occurrences. Collections of rules therefore store knowledge and such knowledge is arranged into theories.
What is assumed about the mental structure of rule-like knowledge?
That it is the same as explicitly described theories in science.
What are rules key for?
- Language - Chomsky (rule-based; cannot be learned from exemplars alone) - e.g. children extract the rules of grammar from their parents' speech.
- Probability – Bayes
- Logic – Braine
- Arithmetic, physics, social conventions etc.
What were early attempts to model the human mind based on?
Production rules, e.g. IF the GOAL is to drive a car AND the car is in first gear AND the car is going more than 10mph THEN shift the car into second gear.
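The gear-shifting rule above can be sketched as a tiny production system; the interpreter and the second rule are illustrative additions, not part of any particular historical model:

```python
# A production rule pairs a condition (IF) with an action (THEN).
# A simple interpreter fires the first rule whose condition matches
# the current state (working memory). The first rule mirrors the
# gear-shifting example above; the second is an assumed extension.

RULES = [
    (lambda s: s["goal"] == "drive" and s["gear"] == 1 and s["speed"] > 10,
     lambda s: {**s, "gear": 2}),   # IF first gear AND >10mph THEN 2nd
    (lambda s: s["goal"] == "drive" and s["gear"] == 2 and s["speed"] > 25,
     lambda s: {**s, "gear": 3}),   # IF second gear AND >25mph THEN 3rd
]

def step(state):
    for condition, action in RULES:
        if condition(state):
            return action(state)
    return state  # no rule fired; state is unchanged

state = step({"goal": "drive", "gear": 1, "speed": 15})
```

Note the key property for the debate in this deck: the rule applies to any state matching its condition, trained or novel, with no stored exemplars involved.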
What did Langston and Nisbett (1992) state about rule-based behaviour?
That “behaviour is based on a rule if no difference is observable between performance to trained (old) and untrained (new) stimuli that fall into the same category”.
What did Elman (1996) state about rule-based behaviour?
Behaviour is based on a rule if an associative explanation cannot be found.
What did Reber (1967) do?
Had participants memorise grammatical sequences (training), then tested them on their ability to discriminate between new grammatical and ungrammatical sequences.
What did Reber (1967) find, and what does this suggest?
Participants' discrimination was 70% correct, which suggests that they extracted something about grammar because they can apply their knowledge to previously unseen items and tell whether they're well-formed or not.
What did Knowlton, Ramus & Squire (1992) do?
Repeated the basic P+K experiment with amnesic patients.
What did Knowlton, Ramus & Squire (1992) find?
Preserved categorisation but impaired recognition.
What did Knowlton, Ramus & Squire (1992) conclude?
That categorisation and recognition were predicated on independent processes (and brain regions) and that categorisation is an implicit process. This implies that similarity and rules are dissociable processes.
What did Vokey & Brooks (1992) do and find?
Designed a set of stimuli in which grammaticality was orthogonal to similarity. Found main effects of grammaticality and similarity, implying the two processes are additive and dissociable. This further supports the dissociability of rules and similarity.
What did Knowlton & Squire (1994) do?
Repeated Vokey and Brooks (1992)'s experiment with amnesic patients.
What did Knowlton & Squire (1994) find?
No effect of similarity but preserved effect of grammaticality, supporting the idea of grammaticality (rules) and similarity being dissociable.
What did Johnstone & Shanks (2001) do?
Created strings of letters from a biconditional grammar, then tested artificial grammar learning for memorisation and hypothesis-testing groups.
What did Johnstone & Shanks (2001) find?
No evidence that memorisation led to passive abstraction of rules or to encoding of whole training exemplars - the hypothesis-testing group showed a clear effect of rule-based classification but no effect of similarity, whereas the memorisation group showed no evidence of learning at all.
What did Kinder & Shanks (2001) do?
Tested whether amnesic patients provided valid evidence for distinguishing between a rule-based and a memory-based system. They did this by simulating their behaviour using a simple similarity-based neural network.
What did Kinder & Shanks (2001) find?
In their model the difference between a recognition and a categorisation decision is simply one of difficulty.
What can be concluded from laboratory research into category membership?
- Artificial concepts (termed nominal kinds or artifacts) can be acquired in the lab.
- Some form of similarity-based categorisation model (either prototype or exemplar) best describes the categorisation process.
- Sometimes people behave as if they have more elaborate conceptual structures
- Stronger evidence for theory-based categorisation comes from the acquisition of concepts that cohere independently of human intervention (termed natural kinds)
Define nominal essence.
The combination of ideas we use to sort things into a particular class, corresponding to our abstract ideas of things' defining observable features, which are individually necessary and jointly sufficient. Nominal essences give rise to classical theories of conceptual structure.
What is classical theory in terms of conceptual structure?
It's assumed that conceptual classification is based on logical rules (derived from Linnaean taxonomy): the logical structure of most concepts is a conjunction of features, each feature is individually necessary and the set jointly sufficient, subordinates inherit the features of the superordinate, and all examples are equally representative and share all the features.
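On the classical view, classification reduces to checking a strict conjunction of features, so membership is all-or-none. A minimal sketch, with 'bachelor' as an assumed well-defined concept and made-up feature sets:

```python
# Classical theory: a concept is a set of defining features that are
# individually necessary and jointly sufficient. An instance is a
# member iff it has every defining feature -- no graded typicality.

BACHELOR = {"male", "adult", "unmarried"}  # assumed defining features

def is_member(instance_features, defining_features):
    # Strict conjunction: every defining feature must be present.
    return defining_features <= set(instance_features)

is_member({"male", "adult", "unmarried", "tall"}, BACHELOR)  # True
is_member({"male", "adult"}, BACHELOR)                       # False
```

The critiques that follow (Hampton, Rosch, Ryle/Wittgenstein) all target exactly this all-or-none, conjunction-of-features structure.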
What critique of classical theories was put forward by Hampton (1982)?
That some features aren't inherited by subordinates - according to participants, chairs are a type of furniture, car seats are a type of chair, but car seats aren't a type of furniture.
What critique of classical theories was suggested by Rosch (1973)?
That some instances are better examples of a concept than others, for example a robin is rated as more typical of birds than a canary. There are also cultural differences.
What critique of classical theories was suggested by both Ryle (1951) and Wittgenstein (1958)?
That some concepts seem to have no defining features, for example 'game' - solitaire has nothing in common with football, and fun can't be the defining feature, because of Russian roulette! Therefore some concepts cannot be described with rules.
What did Warrington and McCarthy (1983) do?
Described two stroke patients who were relatively more affected on artefacts than natural kinds. Therefore natural kinds and artefacts are neuropsychologically distinct.
What did Warrington and Shallice (1987) do?
Described four herpes encephalitis patients who were reliably more impaired on natural kinds than artefacts, in contrast to Warrington and McCarthy (1983).
What brain areas are affected in people who have deficits with living things (natural kinds)?
The inferior and medial temporal cortex (often anterior), brain areas which are near visual object recognition areas and relay projections to the hippocampus.
What brain areas are affected in people who have deficits with man-made things (artefacts)?
Perhaps the left frontal and parietal areas are involved, but there's less specificity in localising these deficits.
What does Warrington propose regarding the distinction between artefacts and natural kinds?
That the distinction is consistent with an organisation based on visual features (natural kinds) and function (man-made things).
What did Kripke (1972) and Putnam (1975) argue about natural kinds?
That natural kinds cannot be defined in terms of clusters of features or on the basis of similarity. Nominal essences are neither necessary nor sufficient for natural kinds (e.g. on Titan, ammonia is a colourless liquid, on Earth water is a colourless liquid - same nominal essence, different natural kind).
What did Kripke (1972) and Putnam (1975) argue about real essences?
That kinds are defined by their underlying structures, e.g. water is composed of H2O and ammonia of NH3; different real essences make for different kinds.
What is Keil's (1989) continuum?
Natural kinds and artefacts differ in representational complexity along a continuum from well-definedness to richness, with one-criterion terms (pure nominal kinds) at the well-defined end and biological kinds etc. (pure natural kinds) at the rich end.
Define essentialism.
The view that people's representations of categories rely on some inferred essence.
What did Medin and Ortony (1989) suggest about essentialism?
That people often act as if things have essences or underlying natures that make them what they are. Concepts contain an essence placeholder composed of features and theories, which provides a causal link to superficial properties (being a tiger causes stripes). For example, when we judge whether someone is male or female we tend to judge on the basis of features such as hairstyle and clothing, but we still consider a male transvestite or transsexual to be a man despite predominantly female characteristic features - we categorise on the basis of some inferred underlying essence.
What are the defining features of living things?
Motion, reproduction, consumption, growth and stimulus response.
What is the problem with defining living things from certain features?
Fire has all of these properties, but we wouldn't want to describe it as living.
What did Rips (1989) do?
Told participants two versions of the 'sad story of the sorp', in which the sorp (originally a bird) became more insect-like in adulthood, either as part of a natural process or due to chemical waste. Participants were then asked to categorise the sorp, rate its typicality in the category, and rate its similarity to other category members.
What did Rips (1989) find?
In the sad (pollution) version, participants tended to categorise the sorp as a bird but rated it as more similar to an insect - its essence was avian despite its insect-like appearance.
In the not-sad (natural development) version, participants tended to categorise the sorp as an insect but overall rated it as most similar to a bird - its essence was insectoid despite its similarity to birds. This shows that when development is abnormal, participants stick to the essence of the original category.
What did Carey (1985) do?
Investigated the development of conceptual knowledge by showing children aged 4-10 years a mechanical toy monkey that could move its arms to bang a pair of cymbals.
What did Carey (1985) find?
Across the age group all the children rated the toy as being more similar to humans than any other animal, and all but the 4yr olds denied that the monkey could breathe, eat or have babies - they knew that despite the toy's similarity to humans it doesn't share characteristics that might describe the essence of living things.
What did Keil (1986; 1989) investigate?
Whether categories defined by their essences are stable - showed children pictures of a horse that's painted with stripes so it looks like a zebra.
What did Keil (1986; 1989) find?
Children aged eight and above said that the animal remained a horse, whereas younger children said the animal is now a zebra. The effect doesn't hold for artefacts e.g. coffee pot to bird feeder.
What did Funnell and Sheridan (1992) do?
Investigated whether there's a difference between living things and artefacts. In Snodgrass and Vanderwart's picture set, artefacts tend to be lower-frequency objects than living things, so in any random sample artefacts will be relatively harder than living things. However, this doesn't explain the double dissociation.
What did Farah (1994) argue?
That artefacts tend to be distinguished on the basis of function and living things on the basis of form. At least one of the patients studied by Warrington et al. was just as impaired at naming musical instruments (artefacts categorised by form, not function) as living things. However, this doesn't explain why natural kinds, but not artefacts, are resistant to transformations.
What is a criticism of essentialism?
Young children begin with similarity-based concepts, and as they develop, their conceptual structure becomes more theory-like. Although people appear to use theory-based essentialist beliefs to categorise the world, essences are by their very nature hard to define or articulate. It follows that in practice, especially with difficult cases, people seem to fall back on similarity: form for natural kinds, function for artefacts.