Ch.2 Flashcards
(33 cards)
Experimental designs
When performed correctly, they permit cause-and-effect inferences. Researchers manipulate variables to see whether these manipulations produce differences in participants' behaviour. Put another way, in correlational designs the differences among participants are measured, whereas in experimental designs they're created.
What makes a study an experiment:
2 Components 
- Random Assignment of participants to conditions.
- Manipulation of an independent variable.
Both of these are necessary; if a study doesn't contain both, it's not an experiment.
Random Assignment
The experimenter randomly sorts participants into one of two groups. By doing so, we tend to cancel out pre-existing differences between the two groups, such as gender, race, and personality traits. One of these two groups is the experimental group, which receives the manipulation. The other is the control group, which doesn't receive the manipulation.
In some research designs, one group will be randomly assigned to receive some level of the independent variable, while the other will be assigned to the control condition. This is called a between-subjects design (because the experimental manipulation is made between groups). This is where the placebo effect can occur.
In other studies, participants act as their own control group. Researchers take a measure before the independent-variable manipulation, and then measure the same participant again after the manipulation. This is called a within-subjects design (because experimental manipulations are made within the same individual).
Difference between this and random selection: random selection deals with how we initially choose our participants, whereas random assignment deals with how we sort participants into groups after we've already chosen them.
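The logic of random assignment can be sketched in a few lines of code. This is a minimal illustration, not a real research procedure; the participant names, the even split, and the function name are hypothetical:

```python
import random

def random_assignment(participants, seed=None):
    """Randomly sort participants into experimental and control groups.

    Shuffling before splitting tends to cancel out pre-existing
    differences (gender, race, personality traits) between groups.
    """
    rng = random.Random(seed)
    shuffled = list(participants)          # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)

experimental, control = random_assignment(["Ana", "Ben", "Cy", "Dee"], seed=1)
```

Note that this is assignment, not selection: the code assumes the participants have already been chosen, and only decides which condition each one lands in.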
Manipulation of an independent variable
The independent variable is the variable that the experimenter manipulates. The dependent variable is the variable that the experimenter measures to see whether the manipulation has had an effect.
Operational definition— A working definition of what we are measuring. It’s important to specify how we are measuring our variables because different researchers may define the same variables in different ways and end up with different conclusions as a result.
Confounds: a source of false conclusions
For an experiment to possess adequate internal validity (the ability to draw cause-and-effect conclusions), the level of the independent variable must be the only difference between the experimental and control groups. A confounding variable, or confound, is any variable that differs between the experimental and control groups other than the independent variable. If there's some other difference between these groups, there's no way of knowing whether the independent variable itself exerted an effect on the dependent variable.
Cause and effect
permission to infer
To decide whether you have permission to infer cause-and-effect relations from a study:
- Using the criteria outlined above, ask yourself whether the study is an experiment.
- If it isn't, don't draw causal conclusions from it, no matter how tempting it might be to do so.
Pitfalls in experimental design
Placebo effect
The placebo effect is improvement resulting from the mere expectation of improvement. Participants who receive the drug may have improved mainly because they knew they were receiving treatment. This knowledge could have instilled confidence and hope or exerted a calming influence. The placebo effect is a powerful reminder that expectations can create reality. To avoid placebo effects, it's critical that participants not know whether they're receiving the real medication or a placebo. Patients must remain blind to the condition to which they've been assigned.
The Nocebo effect
Is harm resulting from the mere expectation of harm. The practice of voodoo capitalizes on the nocebo effect.
Experimenter expectancy effect
In some cases, the participant doesn't know the condition assignment, but the experimenter does. The experimenter expectancy effect occurs when researchers' hypotheses lead them to unintentionally bias the outcome of a study. Researchers' biases affect the results in subtle ways, almost always outside of their awareness; in effect, the researchers fall prey to confirmation bias. Because of this, psychological investigators now almost always try to conduct their experiments in a double-blind fashion: neither researchers nor participants know who is in the experimental or control group.
Demand characteristics
Research participants can pick up cues from an experiment that allow them to generate guesses regarding the experimenter's hypothesis. The problem is that when participants think they know how the experimenter wants them to act, they may alter their behaviour accordingly. So whether the guess is right or wrong, their beliefs prevent researchers from getting an unbiased view of participants' thoughts and behaviours. To combat this, researchers may disguise the purpose of the study, or may include "distractor tasks" or "filler" items: measures unrelated to the question of interest.
Advantages and disadvantages of research designs (table)
Naturalistic:
Advantages- High in external validity
Disadvantages- Low in internal validity. Doesn’t allow us to infer causation.
Case studies:
Advantages- Can provide existence proofs. Allow us to study rare or unusual phenomena. Can offer insights for later systematic testing.
Disadvantages- Are typically anecdotal. Don't allow us to infer causation.
Correlational designs:
Advantages- Can help us to predict behaviour.
Disadvantages- Don’t allow us to infer causation.
Experimental designs:
Advantages- Allow us to infer causation. High in internal validity.
Disadvantages- Can sometimes be low in external validity.
Ethical guidelines:
Informed consent
REBs insist on a procedure called informed consent: researchers must tell subjects what they're getting into before asking them to participate. During the informed consent process, participants can ask questions about the study and learn more about what will be involved. REBs may sometimes allow researchers to forgo at least some elements of informed consent. Deception is justified only when:
(A) Researchers couldn't perform the study without deception.
(B) The use of deception or withholding of the hypothesis doesn't negatively affect the rights of participants.
(C) The research doesn't involve a medical or therapeutic intervention.
Debriefing: educating participants
REBs also request that a full debriefing be performed at the conclusion of the research session. It's a process whereby researchers inform participants what the study was about. In some cases, researchers even use it to explain the hypothesis in non-technical language. All studies that involve deception or withholding of information require a debriefing.
How culture influences ethics
Ethical research relies on participants giving free consent based on knowing the purpose of the research, balancing potential harms with potential benefits to participants, and minimizing deception. These guidelines were updated in 2010 to provide more guidance on how to conduct research in a culturally sensitive manner.
For example: researchers suggest that research with Indigenous communities should always be grounded in local traditional teachings and community norms, to ensure that Indigenous worldviews and realities are respected.
Ethical issues in animal research
The goal of such research is to generate ideas about how the brain relates to behaviour in humans without having to inflict harm on people. Many animal rights protesters have raised useful concerns regarding the ethical treatment of animals. In contrast, others have gone to extremes that many critics would describe as unethical in themselves, such as ransacking labs and "liberating" animals. Incidentally, most people on both sides of the animal rights debate agree that liberating animals is a dreadful idea, because many or most die shortly after being released. Rigid guidelines are in place to make certain that animals used in research are treated humanely.
In Canada, researchers follow the guidelines of the Canadian Council on Animal Care (CCAC). The guidelines state that research involving animals must first be reviewed by animal care and use committees. These committees are made up of community members and members with expertise in animal research from the institutions where the research will take place. Certified veterinarians are integral parts of these committees, as they oversee the care and use of all animals used in teaching and research. This ensures that humane care is provided for the animals while they are in the institution, and that there is a clear research goal that greatly outweighs any stress or harm that could come to the animals.
Critics argue that knowledge gleaned from animal research on aggression, fear, learning, memory, etc. is of doubtful external validity, and therefore useless. However, some animal research has led to direct benefits to humans and immensely useful knowledge.
Ex: Principles of learning are derived from animal research. Without animal research we'd know relatively little about the physiology of the brain. Moreover, there are simply no good alternatives to using animals; without them, we'd be unable to test the safety and effectiveness of many drugs.
Descriptive statistics
Describe data. Two major types of descriptive statistics.
- Central tendency, which gives us a sense of the central score in our data set, or where the group tends to cluster. 3 measures: mean, median, mode.
Mean (average)- Total score divided by the number of people.
Median- Is the middle score in our data set. Line scores up in order and find the middle one.
Mode- Most frequent score in our data set.
- Variability (dispersion), which gives us a sense of how loosely or tightly bunched the scores are. The simplest measure is the range: the difference between the highest and lowest scores. Standard deviation is the average amount that each data point differs from the mean. It's less likely to be deceptive than the range because it takes into account how far each data point is from the mean, rather than how widely scattered the most extreme scores are.
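All of these descriptive statistics can be computed with Python's standard library. A quick sketch (the five scores are made up for illustration):

```python
import statistics

scores = [72, 85, 85, 90, 68]   # hypothetical scores for five people

mean   = statistics.mean(scores)     # total divided by number of people -> 80
median = statistics.median(scores)   # middle score once sorted -> 85
mode   = statistics.mode(scores)     # most frequent score -> 85
rnge   = max(scores) - min(scores)   # range: highest minus lowest -> 22
sd     = statistics.pstdev(scores)   # typical distance of a score from the mean
```

Note how the range depends only on the two most extreme scores (90 and 68), while the standard deviation uses every score's distance from the mean, which is why it's the less deceptive measure of variability.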
Inferential statistics
Allow us to determine how confident we are that we can generalize findings from our sample to the full population. We're asking whether we can draw inferences (conclusions) regarding whether the differences we observed in our sample apply to other, similar samples. To figure out whether a difference observed in a sample is believable, we need to conduct statistical tests. These are based on the probability that an outcome in the test sample reflects a reliable pattern in the general population. When findings would have occurred by chance less than 5 in 100 times, we say they are statistically significant.
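One simple way to see the "less than 5 in 100 times" logic is a permutation test: re-shuffle the group labels many times and count how often chance alone produces a difference as large as the one observed. A sketch under invented data (the scores and the function name are hypothetical, and this is only one of many possible statistical tests):

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_iter=10_000, seed=0):
    """Estimate how often a mean difference this large arises by chance alone."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)               # pretend the group labels are arbitrary
        fake_a = pooled[:len(group_a)]
        fake_b = pooled[len(group_a):]
        if abs(statistics.mean(fake_a) - statistics.mean(fake_b)) >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical scores for an experimental and a control group.
p = permutation_p_value([12, 14, 15, 16, 18], [8, 9, 10, 11, 12])
significant = p < 0.05   # "would occur by chance less than 5 in 100 times"
```

Here only a tiny fraction of random re-shufflings reproduce a mean difference as big as the observed one, so the difference counts as statistically significant.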
Statistics take into account sample size, the strength or size of the observed effect, and the amount of variability in the data; that is an important part of avoiding confirmation bias. Another step is meta-analysis: this approach takes sample size and effect size into account, weighing evidence from bigger studies, and those with stronger effects, more heavily. Practical significance is real-world significance: findings can be statistically significant yet not make much, if any, difference in the real world. Larger sample sizes are valuable for increasing the likelihood that a result will be replicable; we don't want to make a big deal out of findings that were a fluke. But by the same token, the larger the sample size, the greater the odds that a result will be statistically significant. With huge sample sizes, virtually all findings, even tiny and truly unimportant ones, will become statistically significant.
Evaluating accuracy of psychological reports in media
1. Consider the source. Generally place more confidence in findings reported in a reputable science magazine. Moreover, place more trust in findings from primary sources, such as original journal articles, than in secondary sources that merely report findings from primary sources.
2. Look out for excessive sharpening and levelling. Sharpening refers to the tendency to exaggerate the gist, or central message, of a study, whereas levelling refers to the tendency to minimize the less central ideas of a study.
Facilitated communication
A technique created to assist severely autistic or otherwise communication-impaired individuals in sharing their thoughts, feelings, or ideas. Proponents of this technique believe that these individuals are not actually mentally disabled but have a rich inner life. They claim severely autistic individuals are cognitively the same as any typically developing person, and that their disability stems from an inability to make the physical movements required for communication (e.g., controlled movement of the vocal cords or hands).
Without proper research designs, even intelligent and well-trained people can be fooled. Their naïve realism led them to see these children's abuse allegations "with their own eyes," and their confirmation bias created a self-fulfilling prophecy, making them see what they wanted to see.

Another example is the prefrontal lobotomy, in which surgeons severed the neural fibres that connect the brain's frontal lobes to the underlying thalamus. Stunning reports of the effectiveness of prefrontal lobotomy were based almost exclusively on subjective clinical reports; its proponents didn't conduct systematic research. They were mistaken: when researchers performed systematic studies, the operation proved ineffective. The operation certainly produced radical changes in behaviour, but it didn't target the specific problem behaviours, and it created a host of other problems, including extreme apathy.
A key finding emerging from the past few decades of research
The same psychological processes that serve us well in most situations also predispose us to errors in thinking. That is, most mistaken thinking is cut from the same cloth as useful thinking. To understand how and why we can all be fooled, it's helpful to introduce the distinction between two modes of thinking: the first is System 1, or intuitive thinking; the second is System 2, or analytical thinking.
System 1 thinking, or intuitive thinking
It's been pointed out that our first impressions are at times surprisingly accurate. This type of thinking is quick and reflexive, and its output consists mostly of "gut hunches." When in intuitive thinking mode, our brains are largely on autopilot. We engage in intuitive thinking when we meet someone new and form an immediate first impression of them, or when we see an oncoming car rushing towards us as we're crossing the street and decide that we need to get out of the way. Without intuitive thinking we'd all be in serious trouble, because much of our everyday life requires snap decisions. It often involves the use of heuristics (mental shortcuts that can produce biases).
System 2 or analytical thinking
Is slow and reflective; it takes mental effort. We engage in analytical thinking whenever we're trying to reason through a problem or figure out a complicated concept in an introductory psychology textbook. In some cases, analytical thinking allows us to override intuitive thinking and reject our gut hunches when they seem to be wrong. You've engaged in this process when you've met someone at a party whom you initially disliked because of a negative expression on their face, only to change your mind after talking to them and realizing that they're not so bad. When we acquire complex habits or skills, we often start off with analytical thinking and gradually progress to intuitive thinking.
Guiding principles for applying the scientific method to psychology
Random selection: key to generalizability
In random selection, every person in the population has an equal chance of being chosen to participate. Random selection is crucial if we want to generalize our results to the broader population. Although surveying more rather than fewer people seems like it would be more generalizable, obtaining a smaller random sample actually tends to be more accurate than obtaining a larger non-random one.
Ex: If we want to find out how the average person feels about Billie Eilish, it's better to ask 100 randomly sampled people in North America than to ask 100,000 people in Nashville, Tennessee, which is one of the world's centres for country music. The Nashville sample would likely be hopelessly skewed towards country music fans. So, non-random selection can lead to wildly misleading conclusions.
Ex: Non-scientific polls. Frequently, one will see polls in the news that carry the disclaimer "this is not a scientific poll." Of course, one then has to wonder: why report the results in the first place? Why isn't the online poll scientific? Answer: the poll isn't scientific because it's based on people who logged onto the website, who are probably not a representative sample of all people who watch that news network, and almost certainly not of all Canadians.
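The Nashville example can be simulated directly. In this sketch the population proportions are invented (assume 30% of a million-person population are country fans, and that a non-random poll happens to reach only country fans):

```python
import random

rng = random.Random(42)

# Hypothetical population: 30% country fans, 70% not.
population = ["country"] * 300_000 + ["other"] * 700_000

# Random selection: every person has an equal chance of being chosen,
# so even a modest sample tends to mirror the whole population.
random_sample = rng.sample(population, k=1000)
random_share = random_sample.count("country") / 1000   # lands near the true 0.30

# Non-random selection: polling only one skewed corner of the population.
skewed_sample = population[:1000]                      # all country fans here
skewed_share = skewed_sample.count("country") / 1000   # 1.0, wildly misleading
```

A random sample of 1,000 gets within a few percentage points of the truth, while the non-random sample of the same size is off by 70 points, which is the whole argument for random selection over sheer sample size.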
Guiding principles for applying the scientific method to psychology
Evaluating measures
When evaluating the results from any dependent variable or measure we need to ask two crucial questions: is our measure reliable? Is it valid?
Reliability refers to consistency of measurement. Ex: A reliable questionnaire should yield similar scores over time. This type of reliability is called test-retest reliability. Interrater reliability is the extent to which different people who conduct an interview, or make behavioural observations, agree on the characteristics they're measuring. Validity, in contrast, is the extent to which a measure assesses what it claims to measure.
Reliability is necessary for validity, because we need to measure something consistently before we can measure it well. But reliability doesn't guarantee validity: although a test must be reliable to be valid, a reliable test can still be completely invalid.
Examples:
Test-retest reliability- Administer a personality questionnaire measuring extroversion to a large group of people today, and re-administer it in two months. If the measure is reasonably reliable, participants' extroversion scores should be similar at both times.
Interrater reliability- If two psychologists who interview patients on a psychiatric hospital unit disagree on most of the diagnoses (one diagnoses most patients as having schizophrenia and the other as having depression), then their interrater reliability will be low.
Validity- If we purchase a fancy iPhone and open the box to discover a wristwatch, we demand our money back: the product isn't what it claimed to be.
Reliability vs. validity- Imagine scientists developed a new measure of intelligence, the distance index-middle width intelligence test (DIMWIT), which subtracts the width of our index finger from that of our middle finger. It would be a highly reliable measure, because the widths of our fingers are unlikely to change much over time and are likely to be measured similarly by different raters. But it would be a completely invalid measure of intelligence, because finger width has nothing to do with intelligence.