Exam 2 Flashcards

(30 cards)

0
Q

What did Sternberg find in his first experiment?

A

Both slopes were significantly different from zero. There was initially a significant difference between the present and absent slopes, but this difference was due to set size one: with set size one excluded, there was no significant difference between the present and absent slopes. Equal slopes suggest serial, non-self-terminating search.

1
Q

Give details of Sternberg’s first experiment.

A

Question: how do you retrieve info from STM?
Task: same as our experiment but with digits
Logic: if selection of response requires retrieval from STM, then response time should tell you about retrieval process
IV: set size (1-6), target present vs. absent
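The prediction being tested can be sketched numerically: under serial, non-self-terminating search, response time is linear in set size with the same slope on present and absent trials. The intercept and slope below are illustrative placeholders, not Sternberg's reported data.

```python
# Sketch of the serial, non-self-terminating (exhaustive) search prediction:
# RT is linear in set size, with the SAME slope for target-present and
# target-absent trials, because every item is compared regardless of a match.
# The intercept and slope values are illustrative, not Sternberg's data.

def predicted_rt(set_size, intercept_ms=400.0, slope_ms=38.0):
    """Predicted mean RT (ms) under exhaustive serial search."""
    return intercept_ms + slope_ms * set_size

for s in range(1, 7):                      # set sizes 1-6, as in the experiment
    rt = predicted_rt(s)
    print(f"set size {s}: {rt:.0f} ms")    # present and absent predictions coincide
```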

2
Q

What is a confound in Sternberg’s first experiment? What’s special about set size 1?

A

-Likelihood of a given item being the probe is dependent on set size
At set size one, the entropy of the probe is lower, so it’s easier to guess

3
Q

What’s entropy?

A

Entropy of a probe item is the amount of information in that item, the opposite of its predictability (so, its unpredictability)
As an event becomes more probable, it carries less info and becomes more predictable
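A minimal sketch of this idea, treating the probe as uniformly likely over the memory set (an assumption for illustration): with only one possible probe, entropy is zero and the probe is fully predictable.

```python
import math

def probe_entropy(set_size):
    """Shannon entropy (bits) of a probe drawn uniformly from the memory set."""
    p = 1.0 / set_size
    # H = -sum(p * log2(p)); for a uniform distribution this equals log2(n)
    return -sum(p * math.log2(p) for _ in range(set_size))

for n in (1, 2, 6):
    print(f"set size {n}: {probe_entropy(n):.2f} bits")
# Set size 1 gives 0 bits: the probe is fully predictable, hence the confound.
```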

4
Q

How did Sternberg set up Experiment 2? What were the results?

A

Goal: replicate Exp. 1 while controlling probe entropy
Method: use a fixed memory set over a whole block (so the probe's predictability no longer varies with set size)
Result: replicated Exp. 1 (linear search functions with equal present and absent slopes)

5
Q

Conclusions of Sternberg’s memory search study.

A

1) Memory search is serial and non-self-terminating
2) The intercept of the search function = non-search processes (e.g., perceptual encoding and response selection)
3) Rate of search is faster than subvocal rehearsal –> search is not the same thing as rehearsal

6
Q

What’s the horse race model?

A

Suppose the comparison of the probe to each item in memory takes a variable amount of time, all comparisons run in parallel, and you can’t respond until every comparison has finished
Then, as set size increases, so does the probability that some item takes a long time to compare –> parallel search can look like serial search
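A small simulation of this, assuming comparison times are i.i.d. exponential (an assumption for illustration): the expected finishing time of the slowest of n racers is the harmonic number H_n, which grows like ln(n) rather than linearly.

```python
import random
import math

random.seed(0)

def mean_finish_time(set_size, trials=20000):
    """Mean time for ALL of n parallel comparisons to finish,
    each comparison time drawn i.i.d. from Exponential(mean=1)."""
    total = 0.0
    for _ in range(trials):
        total += max(random.expovariate(1.0) for _ in range(set_size))
    return total / trials

# Expected max of n i.i.d. Exp(1) variables is the harmonic number
# H_n = 1 + 1/2 + ... + 1/n, which grows like ln(n): a log, not a line.
for n in (1, 2, 4, 8):
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    print(f"n={n}: simulated {mean_finish_time(n):.2f}, harmonic {harmonic:.2f}")
```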

7
Q

Why is the horse race model ruled out by Sternberg’s study?

A

The horse race model would give a roughly logarithmic relationship between set size and response time, but Sternberg found a linear relationship

8
Q

What’s the difference between a category and a concept?

A
  • category: set of items in the world
  • concept: mental representation of that set
    i.e., categories are extensions of concepts
9
Q

What’s the classical view? What are some issues with it?

A

-Concept is a definition of a category: a specification of necessary and sufficient conditions for category membership
Problem: some concepts have no definition (e.g., “game”: no single set of features is shared by all games)

10
Q

What does family resemblance mean for concepts? Define probabilistic and family resemblance.

A

-There is no single feature that all category members share, but there are some that are more common
SO, categories have a probabilistic family resemblance structure
-Probabilistic: any given feature may appear in some but not necessarily all category members
-Family resemblance: category members resemble one another like family members resemble one another

11
Q

What is a prototype of a category?

A

-Prototype: most typical, best, most central category member
Average (central tendency) of all exemplars
May or may not actually exist (e.g., most prototypical dog)

12
Q

What are prototype effects?

A

-prototypes are cognitively privileged
-more prototypical exemplars are…
categorized faster, listed first, share more features with other exemplars, learned earlier in childhood
-exposure to exemplars causes learning of a prototype

13
Q

Give details of study that shows exposure to exemplars causes learning of prototype.

A

Method: generate exemplars by distorting a prototype
Training: present exemplars but not prototype
Test: view exemplars and rate confidence that seen previously. Exemplars included: previously seen exemplars, new exemplars, prototype
Result: confidence that exemplar had been studied was related to similarity to prototype, not related to whether exemplar had actually been studied
Conclusion: exposure to exemplars causes learning of prototype

14
Q

What’s prototype theory?

A
  • Prototype IS the mental representation of the category – i.e., prototype=concept
  • Through exposure to exemplars, you compute their mean (prototype) and store this as the concept
  • Categorize new exemplars by comparing them to prototypes in memory
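A minimal sketch of this compute-the-mean-and-compare story; the feature vectors and category names below are made up for illustration.

```python
# Prototype theory sketch: the concept IS the mean of the exemplars'
# feature vectors; new items go to the nearest prototype.
# Features and category names are invented for illustration.

def prototype(exemplars):
    """Concept = mean (central tendency) of the exemplars' feature vectors."""
    n = len(exemplars)
    return [sum(x[i] for x in exemplars) / n for i in range(len(exemplars[0]))]

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def classify(item, prototypes):
    """Assign a new exemplar to the category with the nearest prototype."""
    return min(prototypes, key=lambda name: distance(item, prototypes[name]))

birds = [[1, 1, 0], [1, 1, 1], [1, 0, 1]]   # e.g., flies, sings, migrates
fish  = [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
protos = {"bird": prototype(birds), "fish": prototype(fish)}
print(classify([1, 1, 1], protos))           # nearest to the bird prototype
```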
15
Q

Strengths and limitations of prototype theory.

A

Strength: provides a natural account of prototype effects
Limitations:
-fails to specify variance (how much deviation from prototype is okay?)
-fails to specify relation among exemplar’s features
-incorrectly predicts that only linearly separable categories are learnable
-assumes categorization is based on feature-based similarity

16
Q

What is exemplar theory?

A
  • Store all category exemplars: exemplars are category representation
  • Classify new exemplars by matching to most similar exemplar in memory
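A minimal sketch of exemplar theory as nearest-neighbor matching over every stored exemplar; the categories and feature tuples are made up for illustration.

```python
# Exemplar theory sketch: store EVERY exemplar (the exemplars are the
# category representation); classify a new item by its single most
# similar stored exemplar. Data are invented for illustration.

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

memory = {
    ("A", (1, 1, 0, 1)), ("A", (1, 1, 1, 0)), ("A", (0, 1, 1, 1)),
    ("B", (0, 0, 1, 0)), ("B", (1, 0, 0, 0)), ("B", (0, 0, 0, 1)),
}

def classify(item):
    """Return the label of the nearest stored exemplar."""
    label, _ = min(memory, key=lambda pair: distance(item, pair[1]))
    return label

print(classify((1, 1, 1, 1)))  # nearest stored exemplar is in category A
```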
17
Q

Strengths and limitations of exemplar theory.

A

-Strengths:
can account for prototype effects (if you store all the data, you have access to the mean when needed)
captures variance (and max, min, etc.)
captures correlations among features
can learn non-linearly separable categories
-Limitations:
assumes categorization based on feature-based similarity (storing/matching features, deciding membership all based on similarity)

18
Q

What are some problems with similarity?

A

1) Similarity is intuitive but poorly defined
-similarity = shared features? any two objects share an infinite number of features
2) Similarity is context-sensitive
3) Similarity is bad at characterizing some kinds of categories
-e.g., superordinate, ad hoc
So, similarity may be an effect rather than a cause of categorization

19
Q

What is schema theory?

A

Concepts as schemas or theories describing categories of things
-a schema is a relational structure: it represents relations as explicit entities
Specifies relations between features (and other concepts) instead of just listing features
Provides explanatory framework for understanding properties of category members

20
Q

What’s psychological essentialism?

A

People assume objects (especially natural kinds) have an essence that makes them the way they are

  • visible features are merely a reflection of this essence
  • features do not define the concept so much as point to it
21
Q

What does ANOVA allow you to do?

A

Test differences among more than two means, handle more than one independent variable, and find interactions between those variables

22
Q

Explain the logic of ANOVA.

A

We have two estimates of variance:
-MSE, based on variance within samples: a good estimate whether or not the null is true
-MSM, based on variance between samples: a good estimate only if the null is true
If they agree (MSE ≈ MSM), the null is supported, because the samples plausibly come from the same underlying distribution
If they disagree (MSE < MSM), the null is rejected
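The MSE/MSM logic can be sketched on toy data (numbers made up; a one-way ANOVA with equal group sizes). The third group's mean is pulled away from the others, so MSM far exceeds MSE.

```python
# Toy one-way ANOVA sketch of the MSE vs. MSM logic (data are made up).

groups = [
    [4.0, 5.0, 6.0],
    [5.0, 6.0, 7.0],
    [9.0, 10.0, 11.0],
]

k = len(groups)                       # number of groups
n = len(groups[0])                    # observations per group (equal here)
grand_mean = sum(sum(g) for g in groups) / (k * n)
group_means = [sum(g) / n for g in groups]

# MSM: variance estimate from BETWEEN-group spread (good only if null true)
ss_between = n * sum((m - grand_mean) ** 2 for m in group_means)
msm = ss_between / (k - 1)

# MSE: variance estimate from WITHIN-group spread (good either way)
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
mse = ss_within / (k * (n - 1))

print(f"MSM={msm:.2f}, MSE={mse:.2f}, F={msm / mse:.2f}")
```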

23
Q

How do we deal with unequal sample sizes in ANOVA?

A

Use sums of squared error because they are additive even when sample sizes are unequal

24
Q

What's the general rule for computing any sums of squares?

A

1) Square the relevant totals
2) Divide by the number of observations on which each one is based
3) Sum the results
4) Subtract the correction factor
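The four-step recipe can be sketched for a between-groups sum of squares (toy data, equal group sizes; the correction factor is the squared grand total over N).

```python
# Sketch of the "square the totals / divide / sum / subtract the correction
# factor" recipe, applied to a between-groups SS (data are made up).

groups = [
    [4.0, 5.0, 6.0],
    [5.0, 6.0, 7.0],
    [9.0, 10.0, 11.0],
]

all_scores = [x for g in groups for x in g]
N = len(all_scores)
correction = sum(all_scores) ** 2 / N          # (grand total)^2 / N

# 1) square each group total, 2) divide by the number of observations
# behind that total, 3) sum the results, 4) subtract the correction factor
ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - correction

print(ss_between)
```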
25
Q

What were the different conditions in the Kittur et al. study?

A

Feature-based and relational. Each divided into:
-Probabilistic: each exemplar shares 75% of features with the prototype, but none has all of them
-Deterministic: one feature is shared by all exemplars (achieved by eliminating one exemplar, which leaves one feature constant across exemplars)
26
Q

What are the two kinds of relational categories?

A

-defined by relations between the categorized thing and other things external to it
-defined by relations among features of the thing itself (the kind used in Kittur et al.'s experiment)
27
Q

What's schema induction?

A

Process of intersection discovery: keep what the examples have in common and throw away the details on which they differ
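Intersection discovery can be sketched directly as set intersection; the feature labels are made up for illustration. With a deterministic structure the shared feature survives; with a probabilistic structure the intersection is empty.

```python
# Schema induction as intersection discovery: keep only what ALL
# exemplars share, discard differing details (feature sets are invented).

from functools import reduce

def induce_schema(exemplars):
    """Intersect the exemplars' feature sets."""
    return reduce(set.intersection, exemplars)

deterministic = [{"a", "b", "c"}, {"a", "c", "d"}, {"a", "e"}]
probabilistic = [{"a", "b"}, {"b", "c"}, {"c", "a"}]  # no one feature in all

print(induce_schema(deterministic))   # {'a'}: the one constant feature
print(induce_schema(probabilistic))   # set(): empty schema -> learning fails
```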
28
Q

What were the results and conclusion from the Kittur et al. study? (training phase)

A

Results: probabilistic relational categories were much harder to learn
Conclusion: schema induction by intersection discovery fails when you're learning probabilistic relational categories
-the final schema is the empty set: it specifies nothing (because the exemplars don't all share any one relation)
29
Q

What were the results from the transfer phase of the Kittur et al. study?

A

Subjects categorized ambiguous exemplars:
-Deterministic conditions: subjects used the deterministic feature that had been used in learning
-Relational conditions: subjects responded randomly