Knowledge Flashcards

1
Q

knowledge

A

scientific research is looking to emulate, understand, and duplicate knowledge

  • can mimic our brain's knowledge in robots
2
Q

ChatGPT

A

knowledge as a robot

  • learns writing, implicitly learns grammar
  • can mimic semantic knowledge
  • large language model
  • can simulate knowledge or information about the world
3
Q

knowledge - going forward

A
  • looking forward, scientists are working on building computers with brain cells
  • the basis for computing information is thought to be superior if computers can mimic the brain

to improve computer processing, need to understand:
* how we understand the world
* how to structure information

– use human knowledge as a way to understand the organization of knowledge

4
Q

categories and concepts

A

concepts:
* mental representation of an object, event, or pattern
* decreases the amount of information we need to learn
* allow us to make predictions

category:
* class of things that share a similarity
* what are you matching incoming info to, and how do you decide what's what?

5
Q

theories of categorization

A
  • definitional approach
  • probabilistic views: prototype view and exemplar view
6
Q

definitional approach: forming concepts

A

form concepts by finding necessary and sufficient features
* these are defining features

what makes a square a square?
* something that doesn't meet the definition wouldn't fit in the category

a rigid way of forming categories
* all or none (either in the category or not)
* a good starting point, but doesn't reflect behavior

7
Q

definitional approach: creating categories

A

categories have rigid boundaries
* something either is a square or isn't

all members are equally good examples
* learning involves discovering defining features

8
Q

definitional approach

A
  • people do create and use categories based on a system of defining features and rules
  • many categories do not seem to follow this process
9
Q

problems with definitional approach

A

difficulty coming up with defining features
* Wittgenstein (1953): what is a game?

not all members are equally good examples

disagree on members of categories
* how would you categorize bookends?

10
Q

probabilistic theories - prototype theories

A

category decisions are made based on an idealized average – a prototype

prototype: an ideal of what each category is made of

11
Q

probabilistic theories - exemplar theories

A

an alternative to prototype theory

category decisions are made based on all of the exemplars (examples) stored in semantic memory

exemplar: categories are based on all the examples you have in your head

12
Q

prototype view

A
  • idealized representation

take all dogs and make a prototype for all of them together

13
Q

the prototype approach

A

Bird

high-prototypicality: robin

low-prototypicality: penguin

14
Q

typicality effects

A

Typical:
– is a robin a bird?
– is a dog a mammal?
– is a diamond a precious stone?

Atypical:
– is an ostrich a bird?
– is a whale a mammal?
– is turquoise a precious stone?

Atypical items have slower processing times because they are further from the prototype

15
Q

graded structure

A

categories and concepts do not have clear all or nothing boundaries. some members of a category are more central and some are more peripheral to the prototype.

typical items are similar to a prototype

typicality effects are naturally predicted

prototype lives at center
– measure the prototypicality of an object by looking at its distance to the prototype

** can quantify typicality within a category
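The distance-to-prototype idea above can be sketched in code; a minimal sketch, assuming made-up feature dimensions and values (not from the lecture):

```python
# Minimal sketch of quantifying typicality as distance to a prototype.
# Features (can_fly, body_size, sings) and their values are hypothetical.
import math

birds = {
    "robin":   (1.0, 0.2, 1.0),
    "sparrow": (1.0, 0.1, 1.0),
    "eagle":   (1.0, 0.8, 0.0),
    "penguin": (0.0, 0.6, 0.0),
}

# The prototype lives at the center: the average of all category members.
prototype = tuple(sum(dim) / len(birds) for dim in zip(*birds.values()))

def typicality(name):
    """Closer to the prototype = more typical (negated distance)."""
    return -math.dist(birds[name], prototype)

print(typicality("robin") > typicality("penguin"))  # True
```

Robin scores higher because it sits near the average of the stored members, matching the typicality effects on the earlier card.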

16
Q

problems with prototypes

A

prototypes organized around averages

information about individual exemplars is lost
* e.g. a Pomeranian is more similar to a cat than to a Great Dane

when all you store is the average, you lose nuances

17
Q

exemplar view

A
  • store all the instances (or exemplars) of category
  • prototype is generated/abstracted as needed (not stored)

– stores all experiences, not just prototype
– prototype is generated as needed
– could have a prototype for dog and also for specific types of dogs

18
Q

the exemplar approach

A
  • explains typicality effect
  • easily takes atypical cases into account
  • easily deals with variable categories
19
Q

characteristics of categories

A

graded membership: robin is a better bird than penguin

family resemblance: category members typically share a set of common features

related concepts:
* central tendency (prototype)
* typicality effects

20
Q

organization

A

show pic of bird and ask what it is

people say one of:
* bird (most common)
* parrot
* grey parrot
* animal

21
Q

levels of categorization (Rosch)

A

subordinate level: most specific level

Basic: mid level

superordinate: most broad level

22
Q

basic level

A

the level at which members share the most attributes of that category

“Bird” is the basic level for the picture – the most common answer

processing is faster at the basic level

23
Q

subordinate level

A

more specific than the basic level response

“parrot” is a subordinate level

24
Q

superordinate level

A

broad and more general than the basic level

“animal”

25
Q

changes in level with expertise

A

experts shift from using basic level to more subordinate levels
* dog shows

knowledge influences categorization

the subordinate level becomes their fastest response

26
Q

DRM Paradigm

A

show list of words centered around a common theme word
* people tend to insert the common theme word

27
Q

Collins and Quillian: semantic hierarchical theory

A

semantic (memory for facts)

semantic network: information that is related is linked together

semantic network theory:
– we have nodes that represent different concepts or units of knowledge
– between the nodes is how these things are connected

  • goes from more general to more specific units of knowledge
  • proposes that all knowledge is organized in a hierarchical structure
28
Q

semantic hierarchical theory

A

nodes and links for the hierarchy
* nodes = representations of concepts
* links = representations of relationships

activation of one node spreads to related nodes
* robin would lead to activation of other nodes directly connected to it
* these get an activation boost
* this is called spreading activation
* makes the related nodes easier to process

spreading activation: excitation spreads along the connections of nodes in the semantic network

priming: primes (facilitates the activation of) related concepts – when you think about cow, it's easier to process milk
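Spreading activation can be sketched as a toy graph traversal; the links and decay factor below are hypothetical, chosen only to illustrate the mechanism:

```python
# Minimal sketch of spreading activation in a semantic network.
# Links and the decay factor are hypothetical.
links = {
    "cow":  ["milk", "farm", "animal"],
    "milk": ["white", "drink"],
}

def spread(source, decay=0.5, steps=2):
    """Activation starts at the source node and spreads to linked nodes,
    weakening with each link traversed."""
    activation = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for neighbor in links.get(node, []):
                boost = act * decay
                activation[neighbor] = activation.get(neighbor, 0.0) + boost
                nxt[neighbor] = nxt.get(neighbor, 0.0) + boost
        frontier = nxt
    return activation

act = spread("cow")
# "milk" got an activation boost from "cow", so it is primed,
# while an unconnected node like "dog" received nothing.
print(act["milk"] > act.get("dog", 0.0))  # True
```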

29
Q

spreading activation and priming

A

priming is explained through the mechanism of spreading activation – it lets you think of everything related

the word “cat” activates the word “dog”: spreading activation reaches “dog” and primes it a bit

when asked to recognize “dog” you are faster because it is already partially activated

30
Q

sentence verification task

A

Collins and Quillian (1969)
* A _____ is/has a _____ statements

sentence verification task:
* a canary is an animal
* a canary is a chicken
* a canary has a yellow color

how quickly you can verify a category depends on how close the information is

properties are assigned different nodes
* Canary: sing, yellow, and also has properties of what’s above it like bird (fly, feathers)
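The traversal-distance prediction behind sentence verification can be sketched with a toy IS-A hierarchy (structure simplified from the example sentences; this is an illustration, not Collins and Quillian's actual model):

```python
# Minimal sketch of Collins & Quillian's prediction: verification time
# grows with the number of IS-A links traversed. Toy hierarchy only.
parent = {"canary": "bird", "ostrich": "bird", "bird": "animal"}

def links_to(category, member):
    """Count IS-A links from member up to category (None if not found)."""
    steps, node = 0, member
    while node is not None:
        if node == category:
            return steps
        node = parent.get(node)
        steps += 1
    return None

# "a canary is a bird" needs 1 link; "a canary is an animal" needs 2,
# so the second sentence should take longer to verify.
print(links_to("bird", "canary"), links_to("animal", "canary"))  # 1 2
```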

31
Q

Collins and Quillian's results

A

RT was longer when number of associative links was greater

the less relevant the property relationship, the longer the RT
(“a canary can breathe” takes longer than “a canary is yellow”)

property inheritance: the greater the distance and the more connections, the longer it takes

32
Q

generalizations

A

DRM paradigm

lexical decision task

** how does spreading activation explain the DRM paradigm? → it boosts activation of the theme word

33
Q

criticisms of Collins and Quillian

A

theory cannot explain typicality effects
* Canary is a bird
* ostrich is a bird

cognitive economy is not always true
* a horse is a mammal (longer RT)
* a horse is an animal (shorter RT)

^ those go against the premise of the hierarchy

due to these criticisms, other network models were then proposed
* connectionist models
* representations of concepts

34
Q

connectionist models

A

explain how the brain can process information as an algorithm
* the idea has been around since the 1940s, but it became a way to explain information processing in the brain later on

  • up until then, psychologists posited that information was processed serially; connectionist models proposed that information could be processed in parallel

connectionist models are also called parallel distributed processing models

35
Q

connectionist models notes

A

neural networks work similarly to the human brain
* connectionist models are powerful at labelling things in the world in the way that we label them
* offer more functionality
* not a new idea
* the improvement on the idea is the power of the computer processing we now have

the biggest shift from other models is in how categorization is viewed: not as a serial process – everything is processed in parallel

information is fed to the neural net along with the output we want, but the processing is learned through training; we don't specify anything

the computer figures out what the crucial parts of the image are – this is supervised training because people label the images

36
Q

connectionist models

A
  • units correspond to how neurons function in the brain
  • individual units may represent concepts, but this is not necessary

represents info in a more abstract way

think of these nodes as being individual units of knowledge
* knowledge is actually spread across nodes
* move away from thinking about each node and think about the collection of nodes

37
Q

connectionist models – parallel distributed processing

A
  • knowledge represented in the distributed activity of many units
  • weights determine at each connection how strongly an incoming signal will activate the next unit

this broader way of thinking makes it possible to represent information more mathematically and more easily

38
Q

connectionist models – units

A

output units: receive input from hidden units

hidden units: receive input from input units

input units: activated by stimulation from environment

39
Q

connectionist models – activation

A
  • at each point in time, each unit has a degree of activation
  • units feed their activation into other units
  • compute the activation of a unit using the amount of activation fed into it

the weight specifies the degree to which unit A contributes to unit B

excitatory connection: positive
inhibitory connection: negative

activation rule: specifies output activation based on input activation

the way info is spread across networks is the same way we think about knowledge and memory being spread across our brain
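The activation rule above can be sketched as a weighted sum passed through a squashing function; the weights and inputs below are hypothetical:

```python
# Minimal sketch of a single unit's activation rule: a weighted sum of
# incoming activations passed through a squashing function.
# Weights and inputs are hypothetical.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

inputs  = [1.0, 0.0, 1.0]    # activations of the units feeding in
weights = [0.8, -0.4, 0.3]   # positive = excitatory, negative = inhibitory

# The weight at each connection determines how strongly that incoming
# signal activates this unit.
net_input = sum(a * w for a, w in zip(inputs, weights))
activation = sigmoid(net_input)
print(round(activation, 2))  # 0.75
```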

40
Q

connectionist models – layers

A

activation determines the activation of the next layer

  • concepts are represented by the pattern of activation across units
  • hidden units act as abstract entities devoid of interpretation
    – although this is now changing

there could be redundancies such that multiple units could represent the same concept
* this makes the model robust to “breaking” the network

conceptual interpretation is one of the things people are testing with already established and trained models

if we can understand the hidden layers, it will help us understand how information is held across different levels of the brain

41
Q

what are the hidden units doing?

A

used to have no idea but more efforts are being made to uncover the processing

they can categorize objects but we don’t know what that process looks like

42
Q

connectionist models – storage

A

cut off connections between different layers – same as experiments with ablation
* have to break a lot of connections for the model to fail

based on parallel processing, but explains how the brain processes info as an algorithm

neural nets concepts are stored as distributed representations

43
Q

ChatGPT

A

doesn't know anything – just the frequent connections between words

same as other models, but its size is what makes it different

it just strings words together

can't make ideas, but can put ideas together

should be used as a launching pad for what to write or how to phrase things

44
Q

how are neural networks like the brain?

A
  • graceful degradation
  • generalization
  • distributed

neural networks mimic the brain quite closely

45
Q

graceful degradation

A
  • distributed representations within the network means that disrupting or breaking the system does not halt performance, only decreases it
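A toy illustration of graceful degradation, assuming knowledge spread across many small hypothetical weights (the numbers are made up):

```python
# Minimal sketch of graceful degradation: because the output is the sum of
# many small weighted contributions, severing one connection only nudges
# the result rather than destroying it. Numbers are hypothetical.
weights = [0.1] * 20          # knowledge distributed over 20 connections
inputs  = [1.0] * 20

intact = sum(w * x for w, x in zip(weights, inputs))

damaged_weights = [0.0] + weights[1:]   # cut a single connection
damaged = sum(w * x for w, x in zip(damaged_weights, inputs))

# Performance decreases slightly instead of halting.
print(intact > damaged and damaged > 0.9 * intact)  # True
```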
46
Q

generalizations

A

neural networks also have a capacity to generalize from particulars

47
Q

distributed

A

the idea that memory and identity are distributed and redundantly stored, rather than localized and unique

cryonics: means precise reconstruction of the brain may not be necessary to restore memory and identity

48
Q

concepts in the brain

A

neuropsychology offers insight into how concepts and categories are represented

some approaches
* sensory-functional hypothesis
* semantic category approach
* multi-factor approach

brain damage affects how some patients are able to categorize the world

different explanations trying to figure out what went wrong that produces these results

49
Q

sensory-functional hypothesis

A

separate semantic stores for
1. sensory or perceptual properties of objects
2. functional information related to object use

selective impairments for living and nonliving things are assumed to derive from an asymmetry in the representation

representations of living things are more heavily weighted in terms of visual sensory features

representations of nonliving things are assumed to be more heavily weighted in terms of functional features

50
Q

sensory-functional hypothesis cont’d

A

can't categorize different types of animals

thought of the semantic network as being interconnected

separates knowledge:
* a function of the semantics and overarching differences in organization
* or a problem of how the info is fed to the system

the hypothesis says that the thing that really matters is how that information is processed

51
Q

what matters about the sensory functional hypothesis

A

hypothesis says it is how the information is processed that matters

  • what is going wrong: information that largely requires visual and sensory processing is not reaching the semantic part of the brain, so access is interrupted

*knowledge is accessed differently depending on which thing is weighted more

kids can categorize when they turn 6 but not before

52
Q

semantic category approach

A
  • patients with category-specific semantic deficits may have selective impairments for naming items from one category of items compared to other categories
  • those patients may also have categorical impairments for answering questions about all types of object properties

trying to separate out what's going on – where is it that patients are truly having problems?

53
Q

semantic category approach – looked at differently

A

dividing up how people perceive things and whether they are living or not living

refuting that there's a separation

the living vs. non-living split is just an artifact

it's the sensory part that matters

if it were just about the inputs, the impairments would be divided across categories

  • because the light bars and dark bars aren't together, it shows that the sensory-functional hypothesis isn't right
54
Q

semantic category approach

A

Mahon & Caramazza
– little bit of sensory and semantic

  • certain categories are biologically relevant for our survival, so have dedicated brain regions specialized in their processing
  • Ex: faces
  • they argue for distributed domain specific representations of concepts

somewhere in between:
* primary sensory and motor areas that have a physical organization in the brain that projects topographically onto a physical dimension
* distributed representation of human cognition
— abstract systems that make human thought and metacognition possible

certain neural networks are involved in responding to specific categories of stimuli

55
Q

multiple-factor approach

A

argues that the S-F approach is incorrect; we need to take into account that categories can overlap in different features/factors

looks at how concepts are divided up within a category rather than identifying specific brain areas or networks for different concepts

proposes that each category is defined by a combination of a large number of factors (in addition to the sensory/functional divide)
* Ex: color, motion, and action-performed could all inform non-living artifacts

middle point between the two extremes

expression of living vs. non-living – this result could arise for many different reasons

56
Q

multiple-factor approach example

A

crowding: when different concepts within a category share many properties

ex: “animals” share “eyes”, “legs”, and “the ability to move”

started with brain damage leading to weird results

can create living vs. non-living dissociations simply from the feature space and how much it overlaps

more common features between items = more confusing

almost the opposite of priming