Midterm 1 Flashcards

1
Q

Wilhelm Wundt

A

Start of experimental psychology.
Separation of psychology from philosophy.
Measured reaction times and word associations.
Developed procedures to describe sensations.

2
Q

John Watson

A

Behaviorist Manifesto

3
Q

Behaviorism

A

Replaces subjective introspection with objective observation of behaviours.
People were thought of as “responders”, responding to conditioning through experience.
Learning occurs as a result of the consequences of behaviour.
Behaviourists do not believe in mental phenomena such as knowledge, thinking, intention, etc.

4
Q

Tolman: Cognitive maps

A

Organisms take in information from their environment and build up cognitive maps as they learn (latent learning)

5
Q

Latent Learning

A

Tolman.
Experiment with 3 groups of rats in a maze: no reinforcement, regular reinforcement every day, or reinforcement only after day 11.
Group 1 doesn’t learn anything.
Group 2 shows steady improvement.
Group 3 shows sudden improvement on day 11: they had learnt during the first 10 days but only started using that knowledge once the reward arrived.
The cognitive maps are inferred from the rats' behaviour, as they cannot be directly observed.

6
Q

Minsky

A

Believed that you need hierarchical systems to represent the world.
Wrote a computer program that could use relational terms such as inside, to the left of, etc.
The most important thing is the idea of representation.

7
Q

Skinner’s verbal behaviour

A

A behaviourist explanation of language development.
Language is a way to get other individuals to do something. All other functions (logic, communication) are only derivatives.
We should only think of language behaviours in terms of the measurable effects they have on human interaction.

8
Q

Noam Chomsky

A

Negative review of Skinner’s “Verbal Behaviour”
He argues language cannot be learnt through reinforcement alone; there must be some innate structure, because children are not exposed to every possible combination of words by the time they can speak (poverty of the stimulus).

9
Q

Poverty of the stimulus

A

Children are able to produce an infinite number of sentences even though they have only ever heard a finite number of them.

10
Q

Constraints and principles cannot be learnt

A

Children do not explicitly “know” anything about grammar or syntax, yet they can still produce grammatical sentences.

11
Q

Chomsky’s LAD

A

Children are born with a Language Acquisition Device.
Unconscious process inside the child’s mind.
This is how children produce sentences they have never heard before (e.g. “no eat cake”, “he hitted”).

Criticism: it is a black box; how do we know how it works? The languages of the world are so diverse that such universality is questionable. It only addresses syntax and not semantics (the meaning of words).

12
Q

Bottom-Up processing

A

Starts with sensory data (distal stimulus) and builds up representation.
STIMULUS -> ATTENTION -> PERCEPTION -> THOUGHT PROCESS -> DECISION -> RESPONSE
e.g. the letter A starts as a black blotch, is broken down into features by the brain, and is then perceived as the letter A

13
Q

Top-down Processing

A

Starts with expectations and context to help interpret incoming data.
Key difference: later stages of processing affect earlier stages (i.e. PERCEPTION affects ATTENTION, and THOUGHT PROCESS affects PERCEPTION)

14
Q

Top-Down 3D perception

A

We use light sources to infer depth.
Single Light Source Assumption: our visual system's interpretation is constrained by the rule that there can only be one light source. We also have a built-in assumption that light is coming from above.

15
Q

Parallel vs Serial Processing

A

There is a big speed advantage in parallel.
In serial processing, the mind can only deal with one piece of information at a time, therefore when a lot of information is received, a bottleneck forms and slows down decision making.

16
Q

Sternberg Paradigm

A

Give subjects a short list of letters to memorize. They then have to quickly decide whether a probe item is new or part of the list.
Serial process: items are tested one at a time, predicts a linear increase in response time as set size increases
Parallel process: items tested simultaneously, time would not increase as set size increases according to Sternberg.
Results: reaction time increases linearly with set size, so Sternberg concluded that people process in a serial manner.
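A minimal sketch of the two predictions; the millisecond values are made-up assumptions, not Sternberg's data:

```python
# Serial scanning predicts RT that grows linearly with set size;
# a simple parallel model predicts RT that is flat across set sizes.

def serial_rt(set_size, base_ms=400, per_item_ms=40):
    # One comparison per memorized item, each adding a fixed cost.
    return base_ms + per_item_ms * set_size

def parallel_rt(set_size, base_ms=400):
    # All items are compared at once, so set size adds no extra time.
    return base_ms

for n in (1, 2, 4, 6):
    print(n, serial_rt(n), parallel_rt(n))
# Serial: 440, 480, 560, 640 ms (linear increase) -- the pattern Sternberg observed.
# Parallel: 400 ms at every set size.
```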

17
Q

Double-Stimulation Paradigm

A

You receive a first input X and are still processing it when you receive a second one Y. You can’t deal with Y until you are done with X. The separation between the onsets of the two stimuli is called SOA (stimulus-onset asynchrony).
As SOA increases, the reaction time for Y decreases because you have more time to process X. This is known as the PRP (psychological refractory period): you need this refractory period to clear out X.
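A toy version of this bottleneck account; the processing durations below are made-up assumptions:

```python
# Y cannot be processed until X is finished, so at short SOAs Y's reaction
# time includes a waiting period.

def rt_y(soa_ms, t_x_ms=500, t_y_ms=350):
    wait = max(0, t_x_ms - soa_ms)   # time left before X clears the bottleneck
    return wait + t_y_ms

for soa in (0, 100, 300, 500, 700):
    print(soa, rt_y(soa))
# 0 -> 850, 100 -> 750, 300 -> 550, 500 -> 350, 700 -> 350:
# RT to Y shrinks as SOA grows, then levels off once X no longer blocks it.
```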

18
Q

Parallel and Interactive processing

A

Perception of a visual event is affected by the perception of auditory events (e.g. hearing two beeps while a circle flashes once makes you think it flashed twice)

19
Q

McGurk effect

A

The lip movement says “ga”, the speech sound says “ba”, and we perceive “da”, which is a mixture of both

20
Q

Identifiability

A

The ability to specify the correct combination of representations and processes used to accomplish a task

21
Q

Marr’s approach to how the mind performs information processing

A

Different explanatory tasks at different levels:
- Computational level (WHY, goals): identify the specific information-processing problem that the system is configured to solve, and identify general constraints to solutions to that problem
- Algorithmic level (WHAT): explains how the system actually performs the task (identify inputs/outputs, algorithm for transforming input to output, specifies how information is represented). Also known as information processing level.
- Physical Implementation level (WHERE, HOW): physical realization of the system, identify neural structures realizing the basic representational states to which the algorithm applies, identify neural mechanisms that transform those representational states according to the algorithm

A complete understanding at one level may not be enough to fully explain an information processor of interest.

22
Q

Calculator example

A

Physical Implementation level: explain the physical operations through transistors, resistors, etc. -> you now know the physical processes but you don’t know what it is actually doing
Computational level: specify the laws of arithmetic and how the calculator conforms to these abstract laws -> you are missing how it is doing it at the most basic level
Algorithmic level: explain the information processing steps carried out by the addition algorithm

23
Q

Marr’s Analysis of visual system

A

Identified two key jobs for the visual system:
- provide a 3D representation of the visual environment
- provide object-centered rather than viewer-centered frame of reference.

Computational level: input is light arriving in the retina, output is a 3D representation of environment. Goal is to infer surface boundaries from 2D image.
Algorithmic analysis: compute zero crossings over image
Physical Implementation: wiring of P, Q, and AND cells in early visual cortex

24
Q

Marr’s stages of visual processing

A

IMAGE BASED PROCESSING (some pattern of light, pixels) -> SURFACE BASED PROCESSING (differentiate surfaces) -> OBJECT BASED PROCESSING (recognize object) -> CATEGORY BASED PROCESSING (what category it fits in)

25
Q

Representational primitives

A

The building blocks of the image representation at each level of processing (they allow structure to be imposed at the next level of processing). At the image level, the primitives are the intensity values of light at each point.

26
Q

Zero crossings

A

Sudden intensity changes along a horizontal line (the horizontal intensity profile) signal edges.
The magnitude of the 1st derivative can be used to detect the presence of an edge.
The second derivative produces two values of opposite sign at every edge.
The zero-crossing point indicates where the light intensity change happened: the location of the surface boundary.
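A minimal sketch of zero-crossing detection on a made-up 1D intensity profile (assumes NumPy is available):

```python
import numpy as np

# Toy intensity profile: dark region followed by a bright region.
profile = np.array([10, 10, 10, 10, 80, 80, 80, 80], dtype=float)

d1 = np.diff(profile)   # 1st derivative: large magnitude at the edge
d2 = np.diff(d1)        # 2nd derivative: one positive and one negative value around the edge

# A zero crossing is where the 2nd derivative changes sign; that position
# marks the location of the intensity change (the surface boundary).
crossings = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0]
print(d1)          # [ 0.  0.  0. 70.  0.  0.  0.]
print(d2)          # [ 0.  0. 70. -70.  0.  0.]
print(crossings)   # index where the sign flips, i.e. where the edge sits
```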

27
Q

P, Q, AND

A

P-type cells fire when the center of their receptive fields is stimulated.
Q-type cells fire if the center is not stimulated
AND-type cells respond to the firing of adjacent P and Q cells when they are firing at the same time. It detects a zero-crossing when P and Q are firing at the same time.

28
Q

Criticism of Marr

A

Underdetermination
Insufficient information in the image to invert the process and recover a full description of the scene
Necessitates auxiliary assumptions about the world to solve the problem
His assumptions did not actually reflect natural constraints; if we used strictly his algorithm we wouldn’t be able to tell stripes on a cup apart from shadows.

29
Q

Representation

A

a symbol or thing which represents something else.
Tolman thought rats used a mental representation of the maze in their head to navigate.
Necessary elements:
1. A represented world: the domain that the representations are about. One set of representations can be about another set of representations.
2. A representing world: the domain that contains the representations.
3. Representing rules: map elements of the represented world to elements in the representing world
4. A process that uses the representation

The relation among elements in the representing world is arbitrary and could have occurred in some other way had the representing rules been differently constructed (Greek symbols mean different things in Greece than in math equations)

30
Q

Representing rules

A

There is an Isomorphism between the represented and representing worlds if every element in the represented world is represented by a unique element in the representing world.
There is a Homomorphism if two or more elements in the represented world are represented by one element in the representing world.

31
Q

Temperature example

A

Water at different temperatures is the represented world.
Possible representations are: the level of red liquid in a thermometer, the digital number on a thermometer, or the darkness of shaded squares.

Analog: the height of the liquid column of mercury is an easy representation for direct comparisons between two temperatures.
Symbolic/Numeric: you need to understand how numbers work in order to compare two different temperatures

32
Q

How do representational formats differ from one another?

A
  1. Duration of representational states: focus on either transient states (do not last, keep changing, e.g. the current temperature) or enduring states (e.g. a childhood memory)
  2. Abstractness of representations
33
Q

MDS, Multidimensional Scaling

A

A technique that takes as input the distances between each pair of points and produces a map that locates all the points in space.
This map can be used to predict how fast people will link an item to a category: the closer the points on the map, the faster.
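A minimal sketch, assuming scikit-learn is available; the pairwise distances are made up for illustration:

```python
import numpy as np
from sklearn.manifold import MDS   # assumes scikit-learn is installed

# Toy pairwise-distance matrix for four items. MDS takes these distances as
# input and places the items in a 2D map so that nearby points correspond
# to small input distances.
distances = np.array([
    [0.0, 1.0, 4.0, 5.0],
    [1.0, 0.0, 3.5, 4.5],
    [4.0, 3.5, 0.0, 1.0],
    [5.0, 4.5, 1.0, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)   # one (x, y) location per item
print(coords)
```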

34
Q

What is information?

A

The resolution of uncertainty.
As long as a signal resolves uncertainty then it has information.
Surprising events contain more information than ordinary or expected events. Less frequent events contain more information than more frequent ones.

35
Q

Claude Shannon

A

Revolutionized the way we think about information.
Coined the term “bit”: a way of quantifying information regardless of its form.
Showed information is measurable

36
Q

Information from a coin flip

A

Fair coin (50/50): 1 bit of information
Heads only (1/0): zero information
Biased coin (90/10): between zero and 1 bit of information

37
Q

Self Information

A

I(ai) = -log2(pi)

(the lower the probability, the larger -log2(p), so the more information)

Information you get from one specific outcome

38
Q

Information Entropy

A

The information contained in a source X is a “weighted” average of the self-information of each of the possible symbols:
H(X) = -∑ pi log2(pi)
The Information Entropy, H(X), quantifies the amount of information in the source X.

Information of the whole source (different from self information)
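A small sketch of both formulas; the probabilities are the coin-flip cases from the earlier card:

```python
import math

def self_information(p):
    # I(a_i) = -log2(p_i): the rarer the outcome, the more information it carries.
    return -math.log2(p)

def entropy(probs):
    # H(X) = -sum_i p_i * log2(p_i): the weighted average self-information.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(self_information(0.5))          # 1.0 bit for one outcome of a fair coin
print(entropy([0.5, 0.5]))            # fair coin: 1.0 bit
print(entropy([1.0]))                 # heads only: zero bits (prints -0.0)
print(round(entropy([0.9, 0.1]), 2))  # biased coin: ~0.47 bits, between 0 and 1
```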

39
Q

Hick-Hyman Law

A

The more possible responses there are, the longer it will take to choose the correct response.
Humans have a non-zero reaction time, meaning that the stimulus is first perceived and then a decision is made in response.

RT = a + bH(T)
* H(T) is the transmitted information (Entropy!)
* b is the rate of gain of information (lower is worse)
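A minimal sketch for the common case of n equally likely alternatives (so H = log2 n); a and b are made-up constants:

```python
import math

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    # RT = a + b * H(T); with n equally likely alternatives, H = log2(n) bits.
    # a and b are made-up constants, for illustration only.
    h = math.log2(n_alternatives)
    return a + b * h

for n in (2, 4, 8):
    print(n, round(hick_hyman_rt(n), 3))
# 2 -> 0.35, 4 -> 0.5, 8 -> 0.65: every doubling of the number of possible
# responses adds the same fixed increment to reaction time.
```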

40
Q

Fitt’s Law

A

Movement time depends on relative precision (ratio of Distance/Width of target)
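The card gives only the qualitative statement; a minimal sketch using one common formulation, MT = a + b * log2(D/W + 1), with made-up constants:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    # One common formulation: MT = a + b * log2(D/W + 1).
    # a and b are made-up constants; the key point is that movement time
    # grows with the distance-to-width ratio (relative precision).
    return a + b * math.log2(distance / width + 1)

print(round(fitts_mt(distance=200, width=20), 3))  # far, small target -> longer movement time
print(round(fitts_mt(distance=50, width=40), 3))   # near, wide target -> shorter movement time
```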

41
Q

Analytical engine

A

By Babbage, a device capable of calculating any mathematical function, a “general purpose computer”
It was never completed.
Storage (the store) and calculation (the mill) were separated
Used punch cards

42
Q

The first Program

A

Ada Lovelace, her program turned a complex formula into simple calculations represented as punched cards fed into the analytical engine

43
Q

Alan Turing

A

Broke the Enigma machine in the war
The Turing machine is an idealized mathematical abstraction of a computer. It consists of a one-dimensional tape of cells, a read/write head, and a control program.
In a modern computer, the control program would be the program, the head would be the CPU, and the tape would be the hard drive.

Turing was the first to see that the physical construction of the machine is not important. He was also the first to propose that mental activity is computation

44
Q

Time Complexity

A

The worst-case number of steps required to complete a computation on an input of length n (one step is the head moving one position on the tape). Equivalently: how much more time the program takes when you increase the length of the input.
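A toy illustration (not course code) of how step counts scale with input length n:

```python
def linear_scan_steps(n):
    # Visiting each of n items once: steps grow in proportion to n (O(n)).
    return n

def all_pairs_steps(n):
    # Comparing every item with every other item: steps grow with n squared (O(n^2)).
    return n * n

for n in (10, 100, 1000):
    print(n, linear_scan_steps(n), all_pairs_steps(n))
# Increasing n tenfold multiplies the linear count by 10 but the quadratic
# count by 100 -- this scaling behaviour is what time complexity describes.
```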

45
Q

NP Hard

A

NP stands for Nondeterministic Polynomial time.
A problem for which the time required to solve grows exponentially as the problem size grows, because its time complexity cannot be expressed as a polynomial.

46
Q

The Turing Test

A

If a computer can pass as human in an online chat, we should grant that it is intelligent.

Turing defines intelligent behaviour as the ability to exhibit human-like performance, sufficient to fool a judge.

Eugene Goostman, a chatbot tested at the Royal Society, was portrayed as a 13-year-old Ukrainian boy and fooled 33% of the judges.

47
Q

Criticism of Turing Test

A
  • A computer may pass the test without real understanding of the conversation
  • Many real people might fail the test
  • A computer can exhibit intelligence in other contexts
48
Q

Searle’s Chinese Room argument

A

Criticism of Turing Test.
It is possible to create a system that exhibits intelligent output without understanding (a person following rules to manipulate Chinese symbols they do not understand can produce convincing replies and pass as a Chinese speaker).
He argued that mere symbol manipulation could not generate intentionality.

49
Q

Intentionality

A

The power of minds to be about, to represent, or to stand for things, properties, and states of affairs.

50
Q

Functionalism

A

Mental states are functional states, we can analyze them in terms of inputs and outputs.
Inputs include perceptual stimuli and other mental states
Outputs include behaviour and other mental states.
The Chinese Room Argument is intended as a refutation of this.

51
Q

Absent qualia

A

It is possible for something to be functionally equivalent to a human being and yet have no conscious experience

52
Q

Qualia

A

“what is it like?”

Qualia are intrinsic, non-representational properties of experience.

53
Q

The Hard Problem of consciousness

A

The question of how physical systems give rise to subjective experience
Not resolved.

54
Q

General Approach to Vision

A

Low Level: visual feature extraction. Considers local properties of an image
Mid Level: finding edges and grouping features, segmentation
High Level: recognizing objects

55
Q

Visual Acuity

A

The spatial resolving capacity of the visual system: the ability to discern black and white bars from each other as they get smaller (which is how we test it).
Expressed in cycles per degree (humans have about 70 cycles per degree, hummingbird 5, butterfly 0.7, bee 0.5, hawk 140).

56
Q

Angular separation of photoreceptors, Δφ

A

Determined by the density of photoreceptors per degree of visual angle.
The more photoreceptors that view a scene per unit angle, the higher acuity can potentially be
Smaller Δφ means higher visual acuity!
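A small sketch assuming the standard sampling (Nyquist) argument relating Δφ to the finest resolvable grating; the cone-spacing figure is an assumed ballpark, not from the card:

```python
# You need at least two receptors per cycle of a grating, so the finest
# resolvable grating is about 1 / (2 * delta_phi) cycles per degree.

def max_acuity_cycles_per_degree(delta_phi_deg):
    return 1.0 / (2.0 * delta_phi_deg)

# Human foveal cone spacing is roughly 0.5 arcmin (assumed ballpark figure).
delta_phi = 0.5 / 60.0   # convert arcminutes to degrees
print(round(max_acuity_cycles_per_degree(delta_phi)))   # ~60 cycles/degree,
# in the same ballpark as the ~70 cycles/degree quoted above.
```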

57
Q

Early visual processing

A

Area V1 contains a complete map of the visual field covered by the eyes.
Receives its main visual input from the lateral geniculate nucleus (LGN), and sends its main output to subsequent cortical visual areas…
Hubel & Wiesel discovered key properties of V1 cells by recording electrical activity in animals.

58
Q

Simple/Complex cells

A

Complex cells don’t care about location within the receptive field, but they care about orientation.
Most cells in V1 are complex.
They are like simple cells in that they respond best to straight-line stimuli of a particular orientation and width, but location is less specific.

59
Q

What happens to the orientation tuning curve if we change the contrast of the stimulus?

A

Width of the orientation tuning curve varies little as the contrast (strength) of the stimulus is varied.
- Only the height of the tuning curve increases with contrast.
-> Contrast Invariance
The curves at different contrasts all peak at the same orientation, so the tuning width does not change.

60
Q

The Efficient coding hypothesis (Barlow)

A

Neurons should encode as much information as possible in order to most effectively utilize computing resources.
Maximizing a neuron’s information capacity ensures that all response levels (i.e. firing rates) are used with equal frequency.
According to Efficient Coding, firing rates of neurons are optimized to efficiently represent naturally occurring stimuli.
(The resulting response function follows the cumulative distribution function of the stimulus, which is an S shape.)
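A minimal sketch of the idea with a made-up Gaussian stimulus distribution (assumes NumPy): mapping each stimulus through its rank (the empirical CDF, the S-shaped curve) makes every response level occur equally often.

```python
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.normal(loc=0.0, scale=1.0, size=10_000)   # "naturally occurring" inputs

# Response = empirical CDF of the stimulus value, scaled to a 0-100 Hz range.
responses = 100 * (np.argsort(np.argsort(stimuli)) / (len(stimuli) - 1))

# Bin the responses: the counts are roughly equal, i.e. all response levels
# are used with about the same frequency.
counts, _ = np.histogram(responses, bins=10)
print(counts)
```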

61
Q

Divisive Normalization

A

As S goes up, R goes down.
S = the pooled activity of multiple neurons, including the neuron being normalized.
R = the neuron’s response.

Without divisive normalization, the neuron’s response increases monotonically with stimulus size and saturates.
With divisive normalization, the response first increases with stimulus size but then decreases, resulting in a “preferred” size.
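A toy sketch of one common form of the computation, R = drive / (σ + S); both the form and the numbers are illustrative assumptions rather than the course's exact model:

```python
def normalized_response(drive, pooled_s, sigma=1.0):
    return drive / (sigma + pooled_s)   # larger pooled activity S -> smaller R

# As stimulus size grows, the neuron's own drive saturates but the pool keeps
# growing, so the normalized response rises and then falls ("preferred" size).
for size, drive, pooled in [(1, 2.0, 0.5), (2, 4.0, 2.0), (4, 5.0, 4.0), (8, 5.0, 12.0)]:
    print(size, round(normalized_response(drive, pooled), 2))
# 1 -> 1.33, 2 -> 2.0, 4 -> 1.0, 8 -> 0.38: rises, peaks at a preferred size, then falls.
```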

62
Q

Two Approaches to Recognition

A
  • Template Matching (view-based recognition): templates are holistic entities that are compared to input patterns to determine the amount of overlap.
    Template matching works well in standardized, constrained contexts; the difficulty is that contexts are rarely constrained. Templates are not inherently viewpoint invariant: for every different possible view there would have to be a different template (replication)! (A minimal correlation sketch follows this list.)
  • Recognition by Components (RBC): proposes that objects are represented by a finite number (about 36) of shape primitives called geons. These can be combined in different ways (different structural descriptions) to yield an infinite number of objects.
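A minimal template-matching sketch on toy arrays (assumes NumPy; not course material): slide a template over an image and score the overlap at each position.

```python
import numpy as np

image = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
])
template = np.array([
    [1, 1],
    [1, 1],
])

th, tw = template.shape
best_score, best_pos = -1, None
for r in range(image.shape[0] - th + 1):
    for c in range(image.shape[1] - tw + 1):
        patch = image[r:r + th, c:c + tw]
        score = np.sum(patch * template)   # amount of overlap with the template
        if score > best_score:
            best_score, best_pos = score, (r, c)

print(best_pos, best_score)   # (1, 1) with the maximum possible overlap of 4
```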
63
Q

“Recoverable” vs “Non-recoverable” Objects

A

Vertices contain information about the relations between geons. Removing those line segments would be particularly harmful to recognition (vs removing midsegments, which don’t affect recognition as much).

64
Q

Experiment RBC

A

Error rates are low for partial objects that consisted of as few as 4 (of 9) components.
There was some improvement as the number of geons increased, consistent with RBC.

65
Q

Recognizing faces vs objects

A

M shape function when recognizing objects (as they rotate), less affected by illumination -> recognition by components
Inverted U shape for faces, more affected by illumination -> recognition by templates

66
Q

Functions of categories

A
  • Classification - allows us to treat different things as the same
  • Communication - we communicate using words that refer to more abstract ideas/concepts
  • Prediction and reasoning - we can use categories to make predictions about unknown or unseen parts of the world
  • Conserve “mental space”
67
Q

Theories of category learning

A
  • Stimulus-response association
  • “Classical view”
  • Prototype model
  • Exemplar model
68
Q

Stimulus-response learning (Hull, 1920)

A

Passive (unconscious) learning to associate physical stimulus with a category label response

69
Q

The Classical view (Bruner, 1956)

A

Learning a category means finding the rule for determining whether something belongs in the category
Category learning involves active hypothesis formation and testing
Categories represented by rules:
* Rules define necessary and sufficient features
* Necessary feature: If something is a member of Concept C, then it must have Feature F
* “Yellow” is necessary for the concept Canary, “smelly” for Skunk
* Sufficient feature: if something has Feature F, then it must belong to Concept C
* “Eyes that see” is sufficient for the concept Animal

70
Q

Problems with the classical view

A

Can’t specify defining features
People disagree with each other about categories
People also disagree with themselves
Typicality is graded

71
Q

Prototype Theory

A

According to prototype theory, the mental representation of a category consists of a prototype or central tendency of the examples
Learning is about abstracting this schema or prototype across all the examples you have seen so far.

72
Q

Problems with prototypes

A

Central tendency is inappropriate sometimes
Category variability information is important
Prototypes discard information about specific instances

73
Q

Exemplar theory

A

A category is simply represented by all of the members (exemplars) that are in the concept
Uses the total similarity of an object to all members of the category to determine if the object belongs in the category
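A toy comparison of the prototype and exemplar computations on made-up feature vectors; the exponential similarity function is an assumed choice:

```python
import numpy as np

# Each category member is a 2D feature vector; classify a new item either by
# its distance to the category prototype (the mean) or by its summed
# similarity to every stored exemplar.
category_a = np.array([[1.0, 1.0], [1.2, 0.8], [0.8, 1.2]])
category_b = np.array([[4.0, 4.0], [4.2, 3.8], [3.8, 4.2]])
new_item = np.array([1.5, 1.5])

def prototype_score(item, members):
    # Smaller distance to the central tendency = better fit (negated so higher is better).
    return -np.linalg.norm(item - members.mean(axis=0))

def exemplar_score(item, members, c=1.0):
    # Total similarity to all stored exemplars (exponential-decay similarity).
    return np.sum(np.exp(-c * np.linalg.norm(members - item, axis=1)))

for name, members in [("A", category_a), ("B", category_b)]:
    print(name, round(prototype_score(new_item, members), 2),
          round(exemplar_score(new_item, members), 2))
# Both models prefer category A here; they come apart when variability or
# specific instances matter, as the "problems with prototypes" card notes.
```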

74
Q

House 2019: Language Robot

A
  • The greatest potential loss in our relations to machines is not runaway GDP or disinformation, but rather the existential right to enjoy the surprise and uniqueness of human effort
  • What, if anything, precedes the kernel of that first word somewhere between a writer typing it and the invention of the universe? (What, if anything, made it so likely that the final word of the previous sentence — and, as you may by now have intuited, also this one — was almost certainly going to be “universe”?)
    The short answer, for robots and humans alike, is training.
  • For dynamic motor tasks, where the goal is also moving as the arm does, the brain must be able to update its reach as it reaches. One need only apply this same motoric prediction to the statistics of a sentence to see how the brain might coordinate both.
  • machines as they exist today cannot be conscious. As weather simulators do not contain in them anything that is “wet,” computers that simulate behavior or intelligence likewise do not contain in them anything that has the causal properties of being “conscious.”
  • the ability to passably imitate a human in conversation should be, for a practical definition of “thinking,” good enough.
  • As saltwater looks like lake water but provides its opposite in hydration, the robot’s prose can appear at first glance to have meaning but is, in these cases at least, devoid of clarity, philosophical structure, or fact.
75
Q

Silverman: Exploring inner space

A
  • This interdisciplinary approach has since become known as cognitive science. Unlike the science that came before, which was focused on the world of external, observable phenomena, or “outer space,” this new endeavor turns its full attention now to the discovery of our fascinating mental world, or “inner space.”
  • The term cognitive science refers not so much to the sum of all these disciplines but to their intersection or converging work on specific problems
  • Information is “input” into our minds through perception—what we see or hear. It is stored in our memories and processed in the form of thought. Our thoughts can then serve as the basis of “outputs,” such as language or physical behavior.
  • four categories of representation: single words, propositions, rules, and analogy
  • There are four crucial aspects of any representation: a representation bearer, content, grounding, and an interpreter
  • Intentionality is considered to have at least two properties. The first is isomorphism, or similarity of structure between a representation and its referent. A second characteristic of intentionality has to do with the relationship between inputs and outputs to the world. An intentional representation must be triggered by its referent or things related to it. Consequently, activation of a representation (i.e., thinking about it) should cause behaviors or actions that are somehow related to the referent.
  • This relation between inputs and outputs is known as an appropriate causal relation.
  • The use of both digital/symbolic and image representations collectively has been referred to as the dual-code hypothesis
  • According to the propositional hypothesis, mental representations take the form of abstract sentence-like structures.
  • A predicate calculus is a general system of logic that accurately expresses a large variety of assertions and modes of reasoning. The proposition “Mary looked at John” can be represented by a predicate calculus such as: [Relationship between elements] ([Subject element], [Object element]), where “Mary” is the subject element, “John” is the object element, and “looking” is the relationship between elements.
  • According to the tri-level hypothesis, mental or artificial information-processing events can be evaluated on at least three different levels (Marr, 1982). The highest or most abstract level of analysis is the computational level. At this level, one is concerned with two tasks. The first is a clear specification of what the problem is. Taking the problem as it may have originally been posed, in a vague manner perhaps, and breaking it down into its main constituents or parts can bring about this clarity. The second task one encounters at the computational level concerns the purpose or reason for the process.
  • inductive reasoning. They make observations about specific instances in the world, notice commonalities among them, and draw conclusions
  • One logical inference is called a syllogism. A syllogism consists of three propositions. The first two are premises and the last is a conclusion.
  • A production rule is a conditional statement of the form: “If x, then y,” where x and y are propositions. The “if” part of the rule is called the condition. The “then” part is called the action. If the proposition that is contained in the condition (x) is true, then the action that is specified by the second proposition (y) should be carried out, according to the rule.
  • Declarative knowledge is used to represent facts. It tells us what is and is demonstrated by verbal communication. Procedural knowledge, on the other hand, represents skill. It tells us how to do something and is demonstrated by action.
  • Thinking analogically involves applying one’s familiarity with an old situation to a new situation
76
Q

Appropriate causal relation

A

Relation between inputs and outputs

77
Q

Dual Code Hypothesis

A

The use of both symbolic and image representation

78
Q

Propositional hypothesis

A

Mental representations take the form of abstract sentence-like structures.

79
Q

Syllogism

A

A syllogism consists of three propositions. The first two are premises and the last is a conclusion.

80
Q

Declarative vs Procedural knowledge

A

Declarative knowledge is used to represent facts. It tells us what is and is demonstrated by verbal communication. Procedural knowledge, on the other hand, represents skill. It tells us how to do something and is demonstrated by action.