Lecture 22 - EE and Cognitive Architectures Flashcards

1
Q

Why cognitive architectures?

A

Behaviour = architecture + content

*see slide

2
Q

Cognitive Architecture Features (*from the SOAR people)

A

● Goal-oriented
● Based in environment
● Large amounts of knowledge*
● Symbols + abstractions*
● Flexible
● Learning

Simple reflex agent:
environment → percepts → sensors → "what the world is like now" → "action to be done"
-influenced by production (if-then) rules-
→ actuators → actions → to environment

Goal-driven agent:
environment → percepts → sensors → "what the world is like now" → "what it will be like if I do action A"
-influenced by states, how the world evolves, what my actions do, goals-
→ "what action should I do now" → effectors → to environment
(a toy sketch of both agent types follows below)
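A minimal sketch of the two agent types above, assuming nothing beyond the boxes in the slide diagram; the percept strings, rules, and one-step world model are made-up illustrations, not lecture code:

```python
# Toy sketch: simple reflex agent vs. goal-driven agent (illustrative names only).

# --- Simple reflex agent: condition -> action production rules ---
REFLEX_RULES = {
    "light_is_red": "brake",
    "light_is_green": "drive",
}

def simple_reflex_agent(percept):
    """Pick an action directly from what the world is like now."""
    return REFLEX_RULES.get(percept, "do_nothing")

# --- Goal-driven agent: predict "what it will be like if I do action A" ---
def goal_driven_agent(state, actions, transition_model, goal_test):
    """Choose an action whose predicted successor state satisfies the goal."""
    for action in actions:
        predicted = transition_model(state, action)   # what my actions do
        if goal_test(predicted):                      # compare against goals
            return action
    return "do_nothing"

if __name__ == "__main__":
    print(simple_reflex_agent("light_is_red"))        # -> "brake"

    # Hypothetical one-step world model: eating turns "hungry" into "full_stomach"
    transition = lambda s, a: "full_stomach" if (s == "hungry" and a == "eat") else s
    print(goal_driven_agent("hungry", ["wait", "eat"], transition,
                            goal_test=lambda s: s == "full_stomach"))  # -> "eat"
```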

3
Q

Cognitive architectures: SOAR

A

Simon’s symbolic model of emotion = top-down
Recap:
Goal Completion System:
● When complete?
● When satisficed?
● Timeout

SOAR = states with their attributes (ex: hunger… stomach (feature) = empty (value)) + transitions (operators) (ex: eat) + new goal states with their attributes (ex: non-hunger… stomach = full)

Acquire apple (operator)… an operator is like the task to do
Operators can be programmed, or the agent can learn them via exploration

Everything together = problem space

SOAR Components
● States. Ex: hunger
○ Capture … states of agent and environment
○ Symbols
-Discrete symbols (apple or not apple)
○ Internal/external “features” and their “values” (c.f. key-value pairs), ex: stomach (key) = empty (value)
● Operators. Ex: acquire apple.
○ Latch onto symbols or features or values and transform them
○ …is where all the magic happens
○ Can be learned
● Problem Spaces
○ Define start + goal
○ Define what the allowed states/operators are
(a minimal problem-space sketch follows after this list)
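A minimal problem-space sketch for the hunger/apple example above, assuming a hand-rolled state representation (dicts of feature/value pairs) rather than actual SOAR syntax; the operator names and the blind search are illustrative only:

```python
# Toy SOAR-style problem space: states are discrete feature/value pairs,
# operators transform states, the problem space fixes start, goal, and
# which operators are allowed.

START = {"stomach": "empty", "has_apple": False}
GOAL = lambda s: s["stomach"] == "full"

def acquire_apple(state):
    if not state["has_apple"]:
        return {**state, "has_apple": True}
    return None  # operator does not apply in this state

def eat(state):
    if state["has_apple"] and state["stomach"] == "empty":
        return {**state, "has_apple": False, "stomach": "full"}
    return None

OPERATORS = [acquire_apple, eat]

def solve(state, goal, operators, depth=5):
    """Blind depth-limited search over the problem space until the goal is reached."""
    if goal(state):
        return []
    if depth == 0:
        return None
    for op in operators:
        nxt = op(state)
        if nxt is not None:
            rest = solve(nxt, goal, operators, depth - 1)
            if rest is not None:
                return [op.__name__] + rest
    return None

print(solve(START, GOAL, OPERATORS))   # -> ['acquire_apple', 'eat']
```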

Cognitive Architecture Features applied
● Goal-oriented
● Based in environment
● Large amounts of knowledge
● Symbols + (abstractions)
● Flexible NO!!!
● (Learning)

SOAR Criticism
● Rigid symbolic system, doesnʼt deal well with real world
● No goal-finding
-Doesn't have its own drives or personal goals
● Needs large knowledge base
● Basically a programming language
● Non-discrete action/perception hard
-Doesn't deal well with anything non-discrete (ex: "move arm 36.7 degrees" is impossible unless it is specifically programmed in as a symbol)

4
Q

Cognitive architectures: ACT-R

A

ACT-R Memory
Memory updates over time: the more instances of a symbol (ex: cat) it encounters, the better it becomes at recognizing that symbol

See overview slide 26
Learning:
● Update declarative memory
● Update the weights of procedures (ex: with cat, loud 95% of the time)
(a toy sketch of both kinds of update follows below)
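A rough, simplified sketch in the spirit of these two updates, not ACT-R's actual equations or API: declarative chunks gain activation from repeated encounters, and procedure weights drift toward the rewards they earn. All names, numbers, and parameters below are made up:

```python
import math

class Chunk:
    """A declarative memory chunk, e.g. the symbol 'cat'."""
    def __init__(self, name, decay=0.5):
        self.name = name
        self.decay = decay
        self.encounter_times = []

    def encounter(self, t):
        self.encounter_times.append(t)

    def activation(self, now):
        # Recent/frequent encounters -> higher activation, so the chunk
        # is recognized and retrieved more easily.
        return math.log(sum((now - t) ** -self.decay
                            for t in self.encounter_times if now > t))

class Production:
    """An if-then rule whose weight is nudged toward the rewards it earns."""
    def __init__(self, name, utility=0.0, learning_rate=0.2):
        self.name = name
        self.utility = utility
        self.learning_rate = learning_rate

    def reward(self, r):
        self.utility += self.learning_rate * (r - self.utility)

cat = Chunk("cat")
for t in [1, 3, 7, 9]:           # the agent keeps running into cats
    cat.encounter(t)
print(round(cat.activation(now=10), 3))

meow_rule = Production("if cat then expect loud noise")
for outcome in [1, 1, 1, 0, 1]:  # the rule pays off most of the time
    meow_rule.reward(outcome)
print(round(meow_rule.utility, 3))
```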

ACT-R Criticism
● (Slightly less) rigid symbolic system
○ “Subsymbolic” operation
-ACT-R still assumes everything is made of discrete symbols
-But it is less rigid than SOAR because it updates over time
○ Still heavy load on perception system
○ (No top-down pipeline)
● No goal-finding
● (Needs large knowledge base)

5
Q

Cognitive Architectures: PSI

A

PSI = the psychological framework
MicroPsi = the actual software implementation

See overview (slide 29)
Urges (drives)…..

“Inner Screen” / “Hypothetical World Model”

Current world model (situation image): ex: cat
Expectation horizon:
-pet cat → expected: angry cat
-feed fries → expected: happy cat
-throw yarn → expected: excited cat
Observed: pet cat → happy cat :)

(I don't really get this part, so rewatch. A toy sketch of the expectation-horizon idea follows below.)
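A toy sketch of the situation image / expectation horizon / observed-outcome check, assuming plain strings and a dict of predictions; this is illustrative only, not MicroPsi code:

```python
# Three world models: the current situation image, the expectation horizon
# (predicted outcomes of candidate actions), and a check of what was
# actually observed against what was expected.

situation_image = "cat"                     # current world model

expectation_horizon = {                     # predicted result of each action
    "pet cat": "angry cat",
    "feed fries": "happy cat",
    "throw yarn": "excited cat",
}

def check_expectation(action, observed):
    """Compare what actually happened with what was expected.
    A mismatch is a learning signal: revise the hypothetical world model."""
    expected = expectation_horizon.get(action)
    if observed == expected:
        return f"expectation confirmed: {action} -> {observed}"
    expectation_horizon[action] = observed
    return f"surprise! expected {expected}, observed {observed}; model updated"

print(check_expectation("pet cat", "happy cat"))
```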

Hypothesis-directed Perception (HyPercept)
Ex: see a cat foot → hypothesis: potatochild → then look at the tummy → then look at the face
-Bottom-up: hypothesis queuing
-Top-down: verification
(a toy sketch follows below)
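A toy sketch of HyPercept's two directions, assuming a made-up hypothesis table and a sense() callback that checks whether a feature is present in the scene; none of this is MicroPsi's real API:

```python
# Bottom-up: a low-level cue queues candidate hypotheses.
# Top-down: each queued hypothesis drives active checks of the further
# features it predicts, until one hypothesis is verified or all are rejected.

HYPOTHESES = {
    # hypothesis: features it predicts, checked top-down in this order
    "cat": ["paw", "furry tummy", "whiskered face"],
    "potato": ["paw-shaped lump", "no tummy", "no face"],
}

def hypercept(initial_cue, sense):
    # Bottom-up: queue every hypothesis whose predictions include the cue
    queued = [h for h, feats in HYPOTHESES.items() if initial_cue in feats]
    # Top-down: verify each queued hypothesis by actively checking its features
    for hypothesis in queued:
        if all(sense(feature) for feature in HYPOTHESES[hypothesis]):
            return hypothesis
    return "unknown"

# Hypothetical sensor: the scene actually contains a cat
scene = {"paw", "furry tummy", "whiskered face"}
print(hypercept("paw", sense=lambda f: f in scene))   # -> "cat"
```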

(Micro)PSI Criticism
● (Even less) rigid symbolic system
○ Still heavy load on perception system
○ (No top-down pipeline) (crossed out: no longer applies)
● No goal-finding (crossed out: no longer applies)
● (Needs large knowledge base) (crossed out: no longer applies)

PSI is the most playful agent (reduction of uncertainty, curiosity, learning about the world)

6
Q

Ecological embodied cognition (EE)

A

List features of a watermelon!
● Green
● Hard
● Sphere
● Bitter

List features of half a watermelon!
● Red
● Squishy
● Seeds
● Sweet
→ perceptual simulation

→ relation between what we're doing and how we're doing it = affordances

Tell me about a childhood memory of a dentist visit!
-If sitting in an inclined chair, recall is faster!
Tell me about a childhood memory of a sports event!
-If standing, recall is faster! (I think that's what he said?)

Affordances
Action-based relationships between
organisms and objects.
“Things you can do with some object”
Significantly faster recall when the hand posture and the thing on screen matched (wrist vs. pinch; man-made, like a hammer, vs. natural, like a cherry tomato)

Knot example:
a) analytical: think through how you would solve it
b) pick it up and manipulate it until you figure it out = ecological embodied cognition
Your body can do the thing for you without much consciousness (cf. sleepwalking, riding a bicycle)

7
Q

Classical CogSci vs Dynamical CogSci

A

Classical:
● Modularity
● Component-dominant (information processed serially, stop when stable)
● Internalism (mind mostly isolated from the environment)
● Amodal (abstract representations)
● Feed-forward (stimulus triggers cognition, which triggers action)

Dynamical:
● Distribularity (interdependent systems)
● Interaction-dominant (modules share info, work in parallel, never reach a stable state)
● Externalism (mind in a body, in an environment)
● Modal (grounded representations)
● Recurrent (information flows forward/backward, large feedback loops, time-dependence)
(a toy contrast of feed-forward vs. recurrent flow follows below)
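A toy contrast of the two information-flow styles, assuming nothing from the lecture beyond the feed-forward vs. recurrent distinction; the function names and the fake "world" are purely illustrative:

```python
# Feed-forward: stimulus -> cognition -> action, one pass, then stop (stable).
# Recurrent: perception and action keep feeding back into each other over time.

def feed_forward(stimulus):
    percept = f"percept({stimulus})"          # perception module
    decision = f"decide({percept})"           # central cognition
    return f"act({decision})"                 # action module; done, stable state

def recurrent(stimulus, steps=3):
    state = stimulus
    trace = []
    for _ in range(steps):                    # never "done": a continuous loop
        percept = f"percept({state})"
        action = f"act({percept})"
        state = f"world_after({action})"      # acting changes what is perceived next
        trace.append(action)
    return trace

print(feed_forward("red light"))
print(recurrent("red light"))
```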

8
Q

Summary

A

● SOAR
○ States, Operators, Problem Spaces
○ Very rigid, basically a programming language
● ACT-R
○ Fundamentally similar to SOAR but allows for subsymbolic update
○ 3 types of memory: working, procedural, declarative
● PSI
○ New and improved: internal goals, urges
○ 3 world models: current world model (“situation image”), expected future (“expectation horizon”), observed future (“inner screen”/“hypothetical world model”)
○ HyPercept: Bottom-up hypothesis queuing → top-down verification
● EE Cognition:
○ Affordances: action-based relationships between organisms and objs.
○ Tons of evidence for embodied, modal cognition
○ Dynamical CogSci: Distribularity, Interaction-Dom, Externalism, Modal, Recurrent
