Lecture Notes | Second Midterm Flashcards Preview

PSY100H1: Introduction to Psychology (Winter 2016) with J. Vervaeke > Lecture Notes | Second Midterm > Flashcards


John Locke: Blank Slate

  • John Locke, empiricism, and tabula rasa (blank slate); learning is completely bottom-up from experience rather than top-down from concepts and expectations
  • problem with the blank slate: the paradox of learning, i.e. something must be innate; how can you learn anything if you start with absolutely no knowledge?
  • Locke argued that there are four principles of association:
    • similarity; e.g. apples and oranges
    • contrast; e.g. black and white
    • contiguity; e.g. students and seats
    • frequency; e.g. salt and pepper
  • events occur to an organism that starts out (innately) with the four principles of association
  • by associating events together the organism organizes experience into knowledge
  • this completely bottom-up view of learning was taken into psychology by the behaviourists
  • they wanted to explain behaviour only using stimulus and response, i.e. no inner (non-scientific) processes; so no top down processes
  • but something must be innate—four principles of association identified with what Pavlov and Thorndike had found


Classical Conditioning: Similarity, Contrast, Contiguity, and Frequency

  • work done by Pavlov, so also known as Pavlovian conditioning
  • what was the nature of the pairing?
    • similarity, contrast, contiguity, and frequency
  • stimulus generalization: similarity between two stimuli
    • e.g. a bell with a similar tone to the first will also trigger the conditioned response.
    • but if we gave a shock every time the word “car” was said, would you generalize to “tar” or “truck”? you’d actually generalize to “truck” because you associate it with the meaning
  • stimulus discrimination: contrast between two stimuli
    • e.g. a bell with a very different tone won’t trigger the CR
  • contiguity; the presentation of the neutral stimulus and the unconditioned stimulus occur close together in time and space
  • frequency; Pavlov repeatedly paired the bell with the meat powder
  • these four principles explain some types of conditioning, but not all; association is not enough to explain learning


Logical Problems with Classical Conditioning

  • similarity is not in the world, i.e. not in the stimulus
  • any two things can be very similar, if “similar” means sharing properties
    • e.g. “blue” and “shoe”, “blue” and “red”, “blue” and “blend”, etc.
  • there must be a top-down process that selects which properties to pay attention to when determining the relevant similarity.
  • associations are symmetrical but a lot of your thoughts are not; association = co-activation
    • e.g. Mary; Bill; loves—“Mary loves Bill” not the same as “Bill loves Mary”
    • your thoughts have a specific direction to them that associations do not
    • this direction is important since it affects the truth or falsity of your thoughts
  • so there seems to be a top-down process that imposes a logical structure on the stimuli in order to turn it into a thought (something that could be true or false, unlike an association such as salt and pepper)
  • Perceptrons: machines that simulated stimulus-response networks.
    • Minsky and Papert (1969) mathematically proved that there were many functions Perceptrons could not learn, such as exclusive “or”, yet even rats are capable of exclusive “or”, so something’s wrong
    • looks like behaviourist machines are too simple
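The exclusive-or limitation can be shown directly. The sketch below (my illustration, not Minsky and Papert’s actual proof) brute-forces a grid of weights and thresholds to show that no single-layer unit, i.e. one weighted sum plus a threshold, reproduces XOR:

```python
# XOR is not linearly separable, so a single-layer perceptron
# (one weighted sum plus a threshold) cannot compute it.
# Brute-force sketch: scan a grid of weights and biases and check
# that no linear threshold unit matches XOR on all four inputs.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def predicts_xor(w1, w2, b):
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in xor.items())

grid = [i / 2 for i in range(-10, 11)]          # weights and biases in [-5, 5]
found = any(predicts_xor(w1, w2, b)
            for w1 in grid for w2 in grid for b in grid)
print(found)  # False: no linear threshold unit on this grid computes XOR
```

A hidden layer (i.e. top-down recombination of the inputs) is what later networks added to get around this; the grid here is an assumption for illustration, but the underlying result holds for all real-valued weights.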


Experimental Problems with Classical Conditioning

  • issues with frequency as a driver of conditioning
  • appetitive conditioning: when the US is an event that is usually considered pleasant, and that the organism seeks out
  • aversive conditioning: when the US is an event that the organism considers unpleasant and seeks to avoid
    • occurs much more rapidly than appetitive conditioning
    • often only one or two pairings of the NS and the US are enough to turn the NS into a CS
  • it’s not simple frequency that’s at work; instead of association between the NS and US, the organism is using the NS as a signal for the US, i.e. using the NS to predict that the US will appear
  • prediction is not symmetrical like association
    • A predicting B doesn’t mean that B predicts A; associations have no direction, but predictions do
  • if organisms use the NS as a predictive signal (predictive pattern detection), then signal detection theory applies
    • e.g. remember the gazelle and the noise in the bush
  • it’s not just simple frequency; the brain is doing signal detection criterion setting in order to deal with risk in prediction
  • there’s a lot of internal processing
  • the temporal arrangement of stimuli: delay, trace, simultaneous, and backward
    • if contiguity drove conditioning, simultaneous presentation (maximum contiguity between the CS and US) should produce the strongest association
    • trace is often the best type of conditioning; it gives you time to predict and prepare for the US
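The criterion-setting idea can be sketched numerically. All the distributions and numbers below are illustrative assumptions, not from the lecture; the point is only that moving the criterion trades false alarms against misses:

```python
# Signal detection sketch: a predator cue produces a "signal"
# distribution shifted above the "noise" distribution.
# Lowering the criterion trades more false alarms for fewer misses,
# the right bet when a miss (ignoring a real tiger) is very costly.
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution of a normal via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

noise_mu, signal_mu = 0.0, 1.0        # assumed distributions
for criterion in (1.5, 0.0):          # conservative vs. liberal
    hit_rate = 1 - norm_cdf(criterion, mu=signal_mu)
    false_alarm = 1 - norm_cdf(criterion, mu=noise_mu)
    print(f"criterion={criterion:+.1f}  hits={hit_rate:.2f}  false alarms={false_alarm:.2f}")
```

With these assumed numbers, dropping the criterion from 1.5 to 0.0 raises the hit rate from about 0.31 to 0.84 while the false-alarm rate rises from about 0.07 to 0.50: the gazelle bolts at many harmless noises in order never to miss a real predator.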


Blocking | Experimental Problems with Classical Conditioning

  • an experiment by Kamin (1968) demonstrated blocking
    • 1st group: light signals US; light and tone signal US; tone does not become a CS
    • 2nd group: no light signal training; light and tone signal US; tone becomes a CS
    • for the first group, the light and the tone are redundant; the light already predicts the US, so they don't need the tone
  • latent inhibition: an unfamiliar stimulus conditions more readily than a familiar stimulus
  • overshadowing: the more salient member of a compound stimulus is more readily conditioned as the CS and interferes with conditioning of the less salient member
  • however, not just prediction, but also preparation
  • Rescorla (1988) and Hollis (1997): argued that the CR is not a copy of the UR
    • e.g. rat sees a snake and it runs, but if it’s conditioned with a bell noise for snakes it freezes when it hears the bell
    • prediction + preparation = anticipation; the bell is being used to predict the appearance of a snake
    • the CR can even be the exact opposite of the UR
  • classical conditioning is not association but anticipation, it’s a much more intelligent process


Beginnings of Operant Conditioning

  • Thorndike (1911): did experiments where he put a cat in a puzzle box that it could escape by pulling a lever; this led to the discovery of:
    • law of effect: the idea that responses followed by satisfaction will occur again and those that are not followed by satisfaction become less likely
  • Köhler and Sultan: the gestalt tradition, which emphasizes top-down processing and the importance of insight learning
    • Sultan the chimpanzee learned how to pile up boxes in order to get to the bananas
    • he demonstrates an S-curve with his learning, i.e. insight; once he learns how to do it, he keeps doing it
  • B.F. Skinner: was the most important of the behaviourists and greatly refined the puzzle boxes of Thorndike
    • these boxes were called operant conditioning chambers, or The Skinner Box


Rumbaugh et al.: Insight and Abstraction

  • the presence of both insight and abstraction in learning is also emphasized by recent work in operant conditioning, specifically the salience-based theory of learning of Rumbaugh et al. (2007)
  • conditioning is based on what the organism finds salient
    • e.g. rats trained to run a three dimensional maze will spontaneously find short cuts by jumping between levels, for which they were never trained
    • the rats restructure what information is salient to them in order to produce insightful solutions beyond what they were trained/conditioned to do
  • so again, what we have is the intelligent, even insightful, anticipation of the world in which internal processes of information selection (salience) are interacting with internal processes of abstraction and insight in order to produce behaviour
  • Rumbaugh et al.: “What is trained does not necessarily equal what is learned.”
  • all of this points to conditioning not fitting the behaviourist model of learning well
  • instead learning seems to be a process involving the interaction of top down and bottom up processes


Bandura: Observational Learning and Modelling

mirror neurons and imitation

  • when human children and chimps are given boxes that contain candy, the children will imitate every single step whereas the chimps skip to the last one to get to the candy faster
  • this is because the children assume that the adults know more than they do; over-imitating pays off in the long run, teaching them about culture and social norms, while the chimps take the short-term reward of candy

empathy, mindsight, and mindsight resonance

  • your ability to introspect what’s happening within yourself makes you better at telling what’s going on with other people
  • when two people do the same things together (e.g. nodding their heads, doing the wave) they tend to like and trust each other more


Harlow: Monkeys and Mindfulness

  • Harry Harlow (1949): monkeys capable of more abstract learning
  • +stimulus covers a raisin; -stimulus does not
  • the monkeys had to move the object to uncover a raisin
  • problems:
    • only six trials; too short for operant conditioning
    • also a previous +S can become a -S, and vice versa 


Atkinson and Shiffrin Model of Memory


Sensory Memory

  • sensory memory: a memory system that momentarily preserves extremely accurate representations of sensory information
  • information that is not quickly passed to short-term memory is gone forever
  • iconic memory lasts about 1/3 of a second
  • echoic memory lasts about 2 seconds
  • main function is to give us continuous scenes and sentences
    • iconic memory: when you look around, you only pick up on the few things you focus on, but iconic memory lets you put them into a big picture; an evolutionary advantage that lets us know what is in our surroundings (e.g. the tiger creeping up behind you)
    • echoic memory: when you’re listening, the sounds coming out of someone else’s mouth need to stay in your mind long enough for you to process them


Implicit Learning and "Psychic Powers" | Sensory Memory

Reber and artificial grammar learning, and ‘psychic’ powers

  • e.g. give people a random string of numbers and letters: e.g. 1aaddf3jkll
  • you ask people to tell the similarities between the sets of strings they’re given: they’ll say either (1) ‘I don’t know’ or (2) give you a rule that doesn’t actually do it (e.g. there’s always two vowels in a row)
  • if you then give them the rule, people do worse at telling if it works than if they’re doing it implicitly
  • ‘psychic’ phenomena; e.g. ‘I can feel when I’m being stared at’
    • experimenters brought in people to stare (or not stare) at participants, who scored well above chance at detecting it
    • in a replication, people were not given feedback (‘yes, you’re right’ or ‘no, you’re wrong’) and their performance dropped to chance level
    • this is because the experimenters were bringing people in using a complex pattern, and the participants were using implicit learning to pick up on the pattern
    • most instances of psychic experiences are a result of implicit learning


STM: Working Memory

  • at most, you can remember 4 ± 2 things
  • the number of things you can hold varies considerably based on the chunking phenomenon; chunking makes information co-relevant to one another
    • e.g. UNICEFFBICIANATO is harder to remember than if you broke it down into UNICEF, FBI, CIA, and NATO; it’s more relevant to you
  • chunking indicates that working memory can process a lot more information if it has been patterned
  • working memory is a relevance filter more than a holding space
  • main job of working memory is not so much to hold things for offline manipulation but more to select from all the information, what the relevant information is that is going to get into long term memory
  • this lines up with four things:
    • measures of working memory correlate very strongly with measures of general intelligence
    • working memory has a high overlap with consciousness
    • (many people argue the two points above have to do with filtering for relevant information)
    • learning is salience based
    • working memory selects what information, the ‘meaning’, gets into long term memory
  • with time and age, working memory begins to decline; older people find it harder to stay on topic because their relevance filters aren’t working as well as younger people’s
  • but keeping a lot of stuff that seems irrelevant can help older people see things more deeply than younger people do; they tend to seem wiser
  • there’s some good evidence that long-term mindfulness practice will improve your working memory
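The chunking example above can be made concrete: the same 16 letters collapse into 4 chunks (the chunk boundaries are simply the acronyms named in the bullet):

```python
# Chunking: the raw string is 16 items, but patterned into familiar
# acronyms it becomes 4 chunks carrying the same information.
letters = "UNICEFFBICIANATO"
chunks = ["UNICEF", "FBI", "CIA", "NATO"]

assert "".join(chunks) == letters      # same raw information
print(len(letters), len(chunks))       # 16 items vs. 4 chunks
```

Nothing about the information changes; only its organization does, which is why chunked material fits within working memory’s small capacity.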


Long Term Memory

  • it doesn’t seem that there’s a capacity limit; e.g. nobody says ‘Don’t tell me anything else, I won’t be able to remember it!’
  • your memory is not about an accurate remembering of the past, but an intelligent prediction of the future
    • you don’t want to be an accurate but stupid video recorder, rather a relatively inaccurate but highly intelligent problem solver
  • the ‘common-sense’ idea that memory is about recalling things perfectly is false
  • 49% of confident eye-witness testimonies are false
    • this is why we can’t have capital punishment; our memory isn’t good enough to be able to say that people are truly guilty


Sachs (1967): Sentence Recognition | Long Term Memory

  • Sachs, 1967: experiment with sentence recognition
  • you have people reading a text, stop them, and ask ‘Is this the sentence you just read?’
  • a change in meaning is noticed, but a change in form (the exact wording) is not
  • this is one way your memory misleads you: you keep the gist, not the verbatim sentence


Bransford and Franks (et al.) | Long Term Memory

  • Bransford and Franks (et al.): work shows that people often ‘remember’ a more meaningful thing that they never encountered
  • you show people a bunch of random dot patterns
  • then show them a pattern that represents the average of all the patterns they saw, but which they never saw
  • people will report strongly remembering having seen the average pattern
  • that pattern wasn’t seen, but is the best anticipation for what might come in the future; it’s the best generalization for the future


Loftus and Palmer | Long Term Memory

  • transfer appropriate processing means that the brain will adjust its processing to solve the problem at hand
  • Loftus and Palmer: participants viewed a video of a car crash
  • two conditions:
    • control question: How fast were the two cars going when they contacted each other?
    • leading question: How fast were the two cars going when they smashed into each other?
  • smashed = 65.7 km/h ; contacted = 51.2 km/h
  • the wording of questions can alter memories; children are particularly susceptible to this


Martensville, Saskatchewan (1992) | Long Term Memory

  • one woman accused another who ran a daycare of sexually abusing her child
  • no parents had actually seen evidence of mistreatment
  • none of the children had complained to their parents
  • police investigated, questioning the children who went to the daycare; nothing suspicious was reported
  • after repeated questioning by a rookie officer with only 7 months of experience, children began to report sexual abuse; allegations began to snowball
  • during the investigation, the children at the day-care center were repeatedly questioned about the abuse by the police, who used suggestive techniques to obtain testimony of abuse
  • rumors of child mutilation and other obscene things surfaced; rumors of a Devil Church and cult where police were abusing children undercover
  • the RCMP arrested 9 people, 5 of them police officers, and charged them with over 100 counts of sexual and physical abuse
  • in a related experiment on suggestibility, a young man visited children at their preschool, read them a story, handed out treats, and did nothing aggressive, inappropriate, or insulting
  • the experimenters came back a week later and asked the children two sets of leading questions: (1) whether or not something happened, (2) questions about what other children had said
  • the number of children who said “yes” to leading questions skyrocketed to 70-80% in the second condition


Craik and Lockhart: Levels of Processing | Long Term Memory

  • the more meaning oriented someone’s processing is, the more likely they will remember the information processed
    • the intention to remember does nothing to actually help you remember things
  • this was the central claim of Craik and Lockhart’s theory called Levels of Processing; consider these three sentences:
    • Is the word table in capital letters?
    • Does the word blue rhyme with the word shoe?
    • Does the word friend make sense in the phrase "he really likes his friend"?
  • then, unexpectedly, they were given a word recognition task; they’re asked if they saw the target words in the previous sentences: ‘Did you see the word table, friend, blue, etc.?’
  • people remembered words much better in the third type of sentence because they were processing meaning in a deeper fashion
  • problem is, what does the term ‘deeply’ mean?
    • it’s a spatial metaphor for being more meaningful, but what is the independent measure that something is more meaningful?
    • if meaningful just means remembered better, then we have a circular explanation — ‘things are remembered better because they’re remembered better’
  • is how meaningful something is a constant property of that thing?
    • e.g. there’s someone who ‘means the world to you’ but you drift apart and five years later you see a photo of them and you think, ‘What? What did I see in them?’
    • so, meaning is probably not a constant property


Transfer Appropriate Processing | Levels of Processing

  • Morris, Bransford, and Franks (1977): repeated the standard levels-of-processing test (‘did you see the words?’), but added another condition:
    • e.g. ‘did you see a word that rhymes with blue?’
    • now, participants remembered more of the words that had been processed in the second (rhyming) type of sentence
  • memory seems to work more in terms of transfer appropriate processing, i.e. the brain is trying to pick up on patterns that will transfer well to the future
    • i.e. the more similar how you’re remembering is to how you’ll have to recall it in the future, the better you’ll remember it
  • the brain’s trying to find information in the current context that will be relevant to its retrieval later on
  • the problem facing your brain is that it can’t always tell how it will have to process things in the future, so it grabs onto the most general thing it can
  • memory as reconstructive rather than reproductive
    • your brain doesn’t store all of the information, it just gets the key information and some rules/principles that allow it to reconstruct things later
    • this is adaptive; instead of carrying around all the materials for your shelter every time you move, you only need to carry the essential ones to reconstruct it so that it won’t weigh you down


Carmichael et al. (1932) | Levels of Processing

In this experiment, people were given pictures paired with one of two sets of verbal labels. They went away for a week, and when they came back were asked to draw the pictures they saw the week before, as accurately as possible. Their drawings were distorted toward the labels they had been given (e.g. the same ambiguous figure drawn to look like ‘eyeglasses’ or like a ‘dumbbell’), showing that meaning shapes what gets encoded.


State Dependent Memory: Bridging Between Concepts | Long Term Memory

places are retrieval cues

  • place characteristics like sound, visual features, room size, odors, etc. all get encoded with the material you’re remembering
  • e.g. memories when you go back to your childhood home, school, street, etc.
  • e.g. traumatic memories and triggers

Smith et al. (1978): people were given 80 words

  • you allow people to return to the same room they memorized them, and they remembered 49 words
  • you make people go to a different room after they’ve memorized these words, and they remembered 35 words

even state dependent memory effects have been found

  • be careful when you’re hiding something; the state you're in now may change, and you may not remember where you hid something later
  • e.g. if you lose your keys when you're drunk, just try getting drunk again to find them


Types of Long Term Memory


Different Kinds of Knowing

  • propositional knowing: something that’s the case, works in terms of truth conditions; great for learning rules
  • procedural knowing: how to do something, works in terms of appropriacy conditions; great for learning routines
  • perspectival knowing: what it’s like to be x, works in terms of identity assumption conditions; great for learning roles


Memory and Problem Solving

  • Weisberg and Alba (1981): told people to "think outside the box" in the nine dot problem, but it wasn’t effective
  • Casselman (1970): tried a different clue; simply drew a huge box around the nine dot problem on the page, which worked!
  • learning, memory, and problem solving are all interlinked in terms of the brain trying to find relevant information for achieving its goals
  • memory is very intertwined with learning, and problem solving
  • memory has a lot more to do with the future than the past


Newell and Simon: The Formalization of Problem Solving

a problem is made up of four components:

  • the goal state; e.g. I want to be not hungry
  • and the initial state; e.g. I’m hungry
  • (there’s a problem when the goal state and initial state aren’t the same)
  • search-space (or problem space): the number of different paths you could take in order to get from initial state to the goal state
    • the problem with this model is that it’s misleading; you’re not looking at things from a birds-eye view, you’re in one of those points
    • F^D: size of a search space, where F is the number of operators and D is the number of steps
    • e.g. average chess game: 4.239 x 10^88
    • compared with 10^10, which is the approximate number of neurons in the human brain, versus 5 x 10^14 which is the approximate number of synaptic connections in your brain, versus approximately 10^80 number of atoms in the universe
    • so there aren’t even enough atoms in the universe to represent the number of moves you could make in a chess game
  • operators: the actions you can perform to transform one state into another (e.g. the legal moves in chess)
  • path constraints: there’s always more than one problem
    • if your problem is cooking dinner, you could solve it by burning your house down; but you probably don’t want to do that because your house solves a lot of other problems for you
  • a problem solution is a sequence of operations that turns the initial state into the goal state while obeying the path constraints
  • a problem solving method is any technique that finds a problem solution
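The search-space arithmetic above can be checked directly. F = 30 and D = 60 are assumed values (roughly, 30 legal moves per position over 60 plies), chosen because they reproduce the figure quoted for chess:

```python
# Size of a search space: F^D, where F is the number of operators
# available at each step and D is the number of steps.
# F = 30, D = 60 are illustrative values for an average chess game.
F, D = 30, 60
size = F ** D
print(f"{size:.3e}")   # 4.239e+88, matching the lecture's figure
```

The exponential form is the whole point: adding one more step multiplies the space by F, which is the combinatorial explosion that rules out exhaustive search.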


Algorithms and Heuristics | Memory and Problem Solving

  • algorithm: a problem solving technique or method that is guaranteed to find a solution
    • so there’s an algorithm for determining the number of people in the room (e.g. counting)
  • heuristic: a problem solving technique/method that increases the chance of finding a solution
    • think of some heuristics in chess; they can increase your chances of winning, but there’s still a chance you can lose
  • for many problems we can’t use algorithms because we face what Cherniak called the finitary predicament
  • first related point: given combinatorial explosion, and that an algorithm requires an exhaustive search of the space, using an algorithm commits the problem solver to a solution attempt that is for all practical purposes impossible; we don’t have enough time or resources (think back to the size of the search space of a game of chess)
  • second related point: since most of the time we have to use heuristics, we face the consequences of their use
    • heuristics try to pre-specify what information is relevant; they pre-judge what information will be relevant, which makes them a source of prejudice or bias
    • e.g. you drive your friend to the airport and tell them to have a safe trip, that you hope they don’t crash; and then you get back into your car, which is the number one killer in America, but don’t worry about dying
  • deep fact: relevance is not in the world, so no piece of information is intrinsically, or always, relevant; think of the nine dot problem and how the shape was actually irrelevant
  • so something other than heuristics must be at work when we are problem solving
  • what seems to be at work is problem formulation: how the initial state, goal state, operators, and path constraints are represented; this seems to really affect the size and shape of the search space


Kaplan and Simon (1980): The Mutilated Chessboard Problem

  • problem formulation/framing needs to be recursively self-correcting; this may have connections to working memory as a higher order relevance filter and also to intelligence
  • humour is a quick marker of social awareness and insight problem solving, which are really adaptive, which makes people seem more attractive
  • what improves insight? insight is mostly a matter of attention and salience re-construal


Knoblich (1999, 2001): Chunk Decomposition | Insight

  • Knoblich and associates (1999, 2001): chunk decomposition and constraint relaxation
  • matchstick arithmetic: move a match to turn a false equation into a true one; people tend to move the matches associated with the numbers rather than the operations
  • if you’re good at chunk decomposition, you’re also probably good at insight


DeYoung, Flanders, and Peterson (2008): Frame Breaking | Insight

  • frame breaking: breaking up a problem formulation
  • the anomalous card sorting task: flash a few anomalous cards (e.g. black cards of the heart suit, which are supposed to be red) among regular cards and identify when an anomalous card goes by
    • the better you are at identifying anomalous cards (frame breaking), the better you are at solving the 9-dot or mutilated chess board problem
  • note how this isn’t related to how well you reason, but how well you re-construe what information is relevant
  • global processing of gestalts --> local processing of features
