Final exam Flashcards

1
Q

bivariate correlations (ch. 8)

A

An association that involves exactly two variables; also called a bivariate association. There are three types of associations: positive, negative, and zero. To investigate associations, researchers need to measure the first variable and the second variable in the same group of people.
e.g. Smoking is related to more happiness – association claim (correlation).

2
Q

describing associations with categorical data

A

For the association between marital satisfaction and online dating, the dating variable is categorical; its values fall in either one category or another.
A person meets their spouse either online or offline. The other variable in this association, marital satisfaction, is quantitative; 7 means more marital satisfaction than 6, 6 means more than 5, and so on.

3
Q

graphing associations with one variable as categorical

A

Figure 8.3 is a scatterplot of data in which one variable (marital satisfaction) is quantitative and the other variable (where a person met his or her spouse) is categorical. The correlation between these two variables is r = .06, which is a small correlation.

Figure 8.4: Bar Graph of Meeting Location and Marital Satisfaction
It’s much more common to plot the results of an association between one quantitative variable and one categorical variable in a bar graph (Figure 8.4). In a bar graph, each individual is not represented by a data point. Instead, it shows the group mean (average) for marital satisfaction for those who met their spouse online and the average for those who met their spouse offline. The online mean is slightly higher than the offline mean, corresponding to a weak association between the two variables.

4
Q

Cohen’s r guidelines for evaluating strength of associations

A

Psychologists sometimes use the terms weak, moderate, and strong to describe r values of .1, .3, and .5, respectively.
It’s better, however, to remember that while effect size indicates the strength of a relationship, judgments of importance also depend on the context. Even a tiny effect size can be important.
e.g. At the Olympic level, a tiny adjustment to an athlete’s form or performance might mean the difference between earning a medal and not reaching the podium at all.
The table shows Cohen’s guidelines for evaluating association strengths based on r.
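Cohen’s guidelines can be sketched as a small helper function (a minimal illustration, not from the textbook; the exact boundary handling, e.g. whether .1 itself counts as "small," is an assumption):

```python
# Sketch: labeling correlation strength with Cohen's conventional
# benchmarks (r ~ .1 small, .3 medium, .5 large). Cutoff handling
# at the boundaries is an illustrative assumption.

def cohen_label(r: float) -> str:
    """Label the absolute size of a correlation coefficient."""
    size = abs(r)
    if size >= 0.5:
        return "large"
    if size >= 0.3:
        return "medium"
    if size >= 0.1:
        return "small"
    return "negligible"

print(cohen_label(0.06))   # the marital-satisfaction example falls below "small"
print(cohen_label(-0.37))  # sign is ignored; only strength matters
```

Note that the sign of r is irrelevant to strength: r = -.37 and r = .37 are equally strong associations.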

5
Q

Analyzing Associations When One Variable Is Categorical by t-test

A

t test: a statistic to test the difference between two group averages.
Although it is possible to calculate an r value when at least one of your variables is categorical, it’s more common to use a t-test to determine if the group means are statistically different from one another.
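A hand-rolled sketch of the t statistic described above, using the standard pooled-variance formula for independent groups (the satisfaction ratings below are made up for illustration):

```python
import statistics

def t_statistic(group1, group2):
    """Independent-samples t: difference in group means over the
    pooled standard error."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    # statistics.variance computes the sample (n-1) variance
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = (pooled * (1 / n1 + 1 / n2)) ** 0.5
    return (m1 - m2) / se

online  = [7, 6, 7, 5, 6, 7]   # hypothetical satisfaction ratings
offline = [6, 5, 6, 5, 5, 6]
print(round(t_statistic(online, offline), 2))  # -> 2.08
```

The resulting t would then be compared against a critical value (or converted to a p value) to decide whether the group means are statistically different.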

6
Q

correlational studies support association NOT causal claims

A

When the method of the study involved measuring both variables, the study is correlational, and therefore it can support an association claim. Association claims can be graphed using scatterplots or bar graphs, and they can be analyzed using r or t tests. However, an association claim is not supported by a particular kind of statistic or graph; it is supported by a study design—correlational research—in which all the variables are measured.
e.g. The two variables are “being a parent or not” and “level of happiness.” Association claims can be supported by correlational studies. Why can we assume the study was correlational?
We can assume this is a correlational study because parenting and level of happiness are probably measured variables (it’s not realistically possible to manipulate them). In a correlational study, all variables are measured.

7
Q

interrogating association claims

A

Construct validity: How well was each variable measured?
Statistical validity: How well do the data support the conclusion?
Internal validity: Can we make a causal inference from association?
External validity: To whom can the association be generalized?
With an association claim, the two most important validities to interrogate are construct validity and statistical validity.
Although internal validity is relevant for causal claims, not association claims, you need to be able to explain why correlational studies do not establish internal validity.

8
Q

Construct validity of an association claim

A

Ask about the construct validity of each variable.
How well was each of the variables measured?
Does the measure have good reliability?
Is it measuring what it’s intended to measure?
What is the evidence for its face validity, for its concurrent validity, and for its discriminant and convergent validity?
For example: In the Mehl study, you would ask questions about the researchers’ operationalizations of deep talk and well-being.
Recall that deep talk in this study was observed via the EAR recordings and coded later by research assistants, while well-being was measured using a subjective well-being (SWB) self-report scale.

9
Q

Statistical validity of an association claim - effect size

A

Not all associations are equal; some are stronger than others. The term effect size describes the strength of a relationship (association) between two or more variables.
e.g. In Figure 8.6, both associations are positive but B has a stronger association (r is closer to 1) than A.
B depicts a stronger effect size.
We use Cohen’s guidelines for labeling effect size as small, medium, or large.

10
Q

Statistical validity of an association claim - predictions

A

Strong effect sizes enable more accurate predictions. The more strongly correlated two variables are, the more accurate our predictions can be. Both the scatterplots here depict positive correlations. Which scatterplot shows the stronger relationship? Part A does, which means that we can make more accurate predictions from the data in Part A.

In other words, we can more accurately predict an individual’s score on one variable when given the score on the other variable. Conversely, we make more prediction errors as associations become weaker as in Part B. Both positive and negative associations can allow us to predict one variable when given the other variable.

11
Q

Statistical validity of an association claim: statistical significance

A

Statistical significance refers to the conclusion researchers make about how probable it is that they would get a correlation of that size by chance, assuming there is no correlation in the real world.
e.g. It is notable that the 95% CI for the association between sitting and MTL thickness [–.64, –.07] does not include zero. The CI for meeting one’s spouse online [.05, .07] doesn’t include zero either. In both of these cases, we can infer that the true relationship is unlikely to be zero. When the 95% CI does not include zero, it is common to say that the association is statistically significant. The definition of a statistically significant correlation is one that is unlikely to have come from a population in which the association is zero.
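The decision rule here—significant when the 95% CI excludes zero—can be sketched in a few lines (illustrative only; the interval endpoints are taken from the examples above, written low-to-high):

```python
def significant_at_05(ci_low: float, ci_high: float) -> bool:
    """A 95% CI that excludes zero corresponds to p < .05 (two-tailed)."""
    return not (ci_low <= 0 <= ci_high)

print(significant_at_05(-0.64, -0.07))  # sitting & MTL thickness: True
print(significant_at_05(0.05, 0.07))    # meeting spouse online: True
print(significant_at_05(-0.10, 0.20))   # interval spans zero: False
```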

12
Q

Logic of statistical inference (Statistical validity of an association claim)

A
  • Researchers collect data from a sample and make inferences to the population. Typically the sample mirrors what is happening in the population, but this isn’t always the case.
  • If there’s an association between two variables in the population, then there is usually an association in the sample.
  • If there’s no association between two variables in the population of interest, then there’s probably no association in the sample.
  • But sometimes even if there isn’t an association in the population, simply by chance there may be an association in the sample.
13
Q

What does statistically significant result mean? (Statistical validity of an association claim)

A

A probability estimate (or p value) provides information about statistical significance by evaluating the probability that the association in the sample came from a population with an association of zero. If p is very small (less than 5%), then it’s very unlikely that the result came from a zero association. Thus, a finding of p < .05 is considered to be statistically significant.
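One concrete way to estimate such a p value is a permutation test: shuffle one variable to simulate a zero-association population, then count how often a correlation as large as the observed one turns up by chance. A minimal sketch with made-up data (not a method named in the text):

```python
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def permutation_p(x, y, trials=10_000, seed=0):
    """Estimate p: how often shuffled (zero-association) data yields
    an |r| at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(pearson_r(x, y))
    y = list(y)  # work on a copy; leave the caller's data intact
    hits = 0
    for _ in range(trials):
        rng.shuffle(y)
        if abs(pearson_r(x, y)) >= observed:
            hits += 1
    return hits / trials

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]   # strongly related: expect a tiny p
print(permutation_p(x, y) < 0.05)  # -> True
```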

14
Q

What does a statistically non-significant result mean? (Statistical validity of an association claim)

A

If p is relatively high (greater than .05), then the result is non-significant (not statistically significant). Therefore, we can’t rule out the possibility that the result came from a zero-association population.

15
Q

Outliers meaning? (Statistical validity of an association claim).

A

Outlier: an extreme score (or perhaps a few) that lies far away from the rest of the scores. The two scatterplots in Figure 8.10 are identical except for the outlier in the upper right-hand corner in the top scatterplot. The correlation coefficient for the top graph is r = .37 and for the bottom graph is r = .26. Outliers can cause problems for association claims because they may exert a large amount of influence. In bivariate correlations, outliers are most problematic when they involve extreme scores on both variables.

Outliers are most influential when the sample is small (see Figure 8.11). The two scatterplots are identical except for the outlier. In this smaller sample, removing the outlier changes the correlation from .49 to .15, a much larger shift than the change from .37 to .26 in the larger sample.
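A quick illustration (hypothetical numbers, not the Figure 8.10/8.11 data) of how a single point that is extreme on both variables can inflate r in a small sample:

```python
def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Five points with essentially no linear relationship...
x = [1, 2, 3, 4, 5]
y = [3, 1, 4, 2, 3]
print(round(pearson_r(x, y), 2))               # -> 0.14

# ...plus one point extreme on BOTH variables inflates r sharply.
print(round(pearson_r(x + [20], y + [20]), 2)) # -> 0.97
```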

16
Q

Restriction of range (Statistical validity of an association claim).

A

When a correlational study does not capture the full range of scores on one of the variables in an association, the correlation can appear smaller than it really is.
Figure 8.13: Restriction of Range Underestimates the True Correlation
SAT scores can range from 600 to 2,400, but College S only admits students who score 1,800 or higher (restriction of range in Figure 8.13A). Thus, the range is restricted to 1,800–2,400. We can see what the scatterplot would look like if the range were not restricted (see Figure 8.13B). In the top scatterplot, r = .33. In the bottom scatterplot, where the range is not restricted, the correlation is stronger (r = .57). What can researchers do about restriction of range? There are statistical techniques that allow correction for restriction of range, or researchers can recruit more participants to try to widen the range.
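A toy illustration of the same idea (hypothetical SAT/GPA pairs, not the Figure 8.13 data): computing r on the full range and again after cutting off everyone below 1,800:

```python
def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical (SAT, GPA) pairs spanning the full 600-2400 range.
students = [(600, 2.0), (900, 2.4), (1200, 2.3), (1500, 3.0),
            (2100, 3.4), (1800, 2.9), (2400, 3.3)]

full_r = pearson_r([s for s, _ in students], [g for _, g in students])

# College S only sees applicants scoring 1800 or higher.
admitted = [(s, g) for s, g in students if s >= 1800]
restricted_r = pearson_r([s for s, _ in admitted], [g for _, g in admitted])

print(round(full_r, 2), round(restricted_r, 2))  # restricted range shrinks r
```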

17
Q

Curvilinear association (Statistical validity of an association claim).

A

one in which the correlation coefficient is zero (or close to zero), and the relationship between two variables isn’t a straight line.
e.g. As people’s age increases, their use of the health care system decreases, but as they approach 60 years of age and beyond, health care use increases again. The correlation coefficient is r = .01, which doesn’t adequately capture the curvilinear nature of the relationship. However, the scatterplot can inform us about curvilinearity in cases where the correlation coefficient suggests that there is no correlation.
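A tiny demonstration of why r misses a U-shaped pattern (hypothetical age and health-care-use numbers):

```python
def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

ages   = [20, 35, 50, 65, 80]
visits = [8, 4, 2, 4, 8]   # high in youth, low in midlife, high again later

# r is essentially zero even though the variables are clearly related,
# because the straight-line summary misses the U shape.
print(round(abs(pearson_r(ages, visits)), 3))
```

This is exactly why inspecting the scatterplot matters: a near-zero r can hide a strong curvilinear relationship.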

18
Q

Internal validity of an association claim

A

Applying the three causal criteria:
1. Covariance of cause and effect. The results must show a correlation, or association, between the cause variable and the effect variable.
2. Temporal precedence. The method must ensure that the cause variable preceded the effect variable; it must come first in time.
3. Internal validity. There must be no plausible alternative explanations for the relationship between the two variables.
More on internal validity: When is the potential third variable a problem?

19
Q

External validity of an association claim?
- How important is it?

A

Does the association generalize to other people, places, and times? It is important to note that the size of the sample does not matter as much as the way the sample was selected from the population of interest.

20
Q

What’s moderator (moderating variables)?

A

When the relationship between two variables changes depending on the level of another variable, that other variable is called a moderator.

Example: Let’s consider a study on the correlation between professional sports game attendance and the success of the team. Using data gathered over many major league baseball seasons, Oishi and his team determined that in cities with high residential mobility, there is a positive correlation between success and attendance; that pattern shows that people are more likely to attend games there when the team is having a winning season. In cities with low residential mobility, there is no significant correlation between success and attendance; that pattern shows that Pittsburgh Pirates fans attend games regardless of whether the team is having a winning season. We say that the degree of residential mobility moderates the relationship between success and attendance.

21
Q

Simple experiments (ch. 10)

A

Let’s begin with two examples of experiments that supported valid causal claims.
Example 1: Taking Notes
Having selected five different TED talks on interesting topics, the researchers showed one of the lectures on a video screen. They told the students to take notes on the lecture using their assigned method. After the lecture, students spent 30 minutes doing another activity meant to distract them. Then they were tested on what they had learned from the TED talk.
The results Mueller and Oppenheimer obtained are shown in Figure 10.2. Students in both the laptop and the longhand groups scored about equally on the factual questions, but the longhand group scored higher on the conceptual questions.
Example 2: Eating Pasta
Some researchers at Cornell University conducted an experiment to see if serving bowl size has an effect on portion size. Participants were randomly assigned to either the “large bowl” or “medium bowl” condition.
Each participant’s plate was weighed before he or she ate the pasta and afterward to determine the amount of pasta consumed. The graph on the left shows that participants took more pasta from the large serving bowl than from the medium one and they consumed about 140 calories more.
The researchers concluded that the size of the serving bowl influenced how much pasta people served themselves and how much they ate.

22
Q

independent vs dependent variable?

A

Independent variable- Manipulated variable: the researcher assigns participants to a particular level of the variable; example: note-taking method (levels: laptop, longhand) was the IV in the academic achievement study.

Dependent variable- Measured variable (outcome): the researcher records what happens in terms of behavior or attitudes based on self-report, behavioral observations, or physiological measures; example: number of anagrams solved correctly.

23
Q

control variables?

A

any variable that an experimenter holds constant.
Besides the independent variable, researchers also control potential third variables in their studies by holding all other factors constant between the levels of the independent variable.
Example: in the pasta study, all participants ate the same kind of pasta from the same type and size of plate.
Control variables are therefore important for establishing internal validity.

24
Q

Experiments Establish Covariance

A
  • The results of the experiment by Leonard and her colleagues did show covariance between the causal variable (the independent variable: model’s behavior) and the outcome variable (the dependent variable: button presses). On average, babies who saw the “effort” model pressed the button 11 times more often than babies who saw the “no-effort” model. In this case, covariance is indicated by a difference in the group means: The number of button presses was different in the effort condition than in the no-effort condition. The notetaking study’s results also showed covariance, at least for conceptual questions: Longhand notetakers had higher scores than laptop notetakers.
  • Independent variables answer the question, “Compared to what?”
    For example, a few years ago, a psychology blogger (Coren) described a study he had conducted informally, concluding that dogs don’t like being hugged. However, this study did not have a comparison group (comparison condition): Coren collected photos of dogs being hugged, but no photos of dogs not being hugged. Therefore, we can’t know, based on this study, if signs of stress are actually higher in hugged dogs than in not-hugged dogs.
  • Covariance: it is also about the results.
    Manipulating the independent (causal) variable is necessary for establishing covariance, but the results matter, too. Suppose the baby researchers had found no difference in how babies behaved in the two conditions. In that case, the study would have found no covariance.
25
Q

control group vs treatment group vs placebo group

A

A control group is a level of an independent variable that is intended to represent “no treatment” or a neutral condition. When a study has a control group, the other levels of the independent variable are usually called the treatment group(s). For example, if an experiment is testing the effectiveness of a new medication, the researchers might assign some participants to take the medication (the treatment group) and other participants to take an inert sugar pill (the control group). When the control group is exposed to an inert treatment such as a sugar pill, it is called a placebo group (or a placebo control group).
- Not every experiment has or needs a control group, and often, a clear control group does not even exist. The Mueller and Oppenheimer notetaking study (2014) had two comparison groups—laptop and longhand—but neither was a control group, in the sense that neither of them clearly established a “no notetaking” condition.

26
Q

confounds-design confounds

A

when a second variable varies systematically along with the IV and provides an alternative explanation for the results.
e.g. Consider the study on note taking. If all of the students in the laptop group had to answer more difficult essay questions than those in the longhand group, that would be a design confound. We would not know whether the difference in conceptual performance was caused by the question difficulty or the notetaking method. If a design confound is present, it threatens internal validity and we can’t support a causal claim in that case.
Correction: control variables

27
Q

systematic vs unsystematic variability (design confounds) (ask)

A

Systematic variability- It is important to note that internal validity is only threatened if there is systematic variability with the IV. Example: If the participants in the large bowl group had higher-quality pasta than those in the medium bowl group, then that’s systematic variability and a design confound.
Unsystematic variability is random or haphazard and affects both groups. Unsystematic variability is not the same as a confound. Some babies like music more than others, some babies can sit still longer than others, and some babies just had a nap while others are tired. But individual differences don’t become a confound unless one type of baby ends up in one group systematically more than another group.

28
Q

Selection effects

A

In an experiment, when the kinds of participants in one level of the independent variable are systematically different from those in the other, selection effects can result. They can also happen when the experimenters let participants choose (select) which group they want to be in. A selection effect may result if the experimenters assign one type of person (e.g., all the women, or all who sign up early in the semester) to one condition, and another type of person (e.g., all the men, or all those who wait until later in the semester) to another condition.
- Independent-groups designs only.

29
Q

random assignment?

A

Random assignment is a way of assigning participants to levels of the IV such that each participant has an equal chance of being in each group.
There should be no systematic difference between groups with random assignment.
- Avoiding selection effects with random assignment. Assigning participants at random to different levels of the independent variable—by flipping a coin, rolling a die, or using a random number generator—controls for all sorts of potential selection effects. Of course, random assignment may not always create numbers that are perfectly even. The 20 exceptionally focused babies may be distributed as 8 and 12 rather than exactly 10 and 10.
- It creates a situation in which the experimental groups will become virtually equal, on average, BEFORE the independent variable is applied.
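Random assignment can be sketched as shuffling the participant list and dealing it into conditions (illustrative code; the condition names are placeholders borrowed from the baby study):

```python
import random

def random_assign(participants, groups=("effort", "no-effort"), seed=42):
    """Shuffle, then deal participants into groups like cards: each
    person has an equal chance of landing in each condition."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(pool):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

assigned = random_assign(range(20))
print({g: len(members) for g, members in assigned.items()})  # 10 and 10
```

Round-robin dealing keeps the group sizes equal by construction; assignment by independent coin flips could instead produce uneven groups, and either way, particular participant types (e.g., the exceptionally focused babies) may still split 8 and 12 rather than 10 and 10.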

30
Q

matched groups?

A

Random assignment doesn’t always work well if your sample size is small, as groups may be imbalanced.
Some researchers prefer to use matched groups with small samples and they may wish to be absolutely sure the experimental groups are as equal as possible before they administer the independent variable.
Matching involves matching groups on some variable (e.g., IQ).
Researchers randomly assign the three participants with the highest IQs to the three groups, then assign the next three highest at random, and so on, continuing until the participants with the lowest IQs are assigned.
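The matching procedure can be sketched like this (hypothetical IQ scores; list indices stand in for participants):

```python
import random

def matched_assign(iq_scores, n_groups=3, seed=1):
    """Sort participants by IQ, then randomly deal each consecutive
    block of n_groups (a 'matched set') across the groups."""
    rng = random.Random(seed)
    ranked = sorted(range(len(iq_scores)),
                    key=lambda i: iq_scores[i], reverse=True)
    groups = [[] for _ in range(n_groups)]
    for start in range(0, len(ranked), n_groups):
        block = ranked[start:start + n_groups]
        rng.shuffle(block)  # random assignment *within* the matched set
        for group, participant in zip(groups, block):
            group.append(participant)
    return groups

iqs = [130, 128, 127, 115, 114, 112, 101, 100, 99]
for g in matched_assign(iqs):
    print(sorted(iqs[i] for i in g))  # each group: one high, one mid, one low
```

Because every group receives one member of each matched set, the groups end up nearly equal on IQ before the IV is applied.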

31
Q

practice effect vs carryover effect (types of order effects)

A
  • One type of order effect is a practice effect, which occurs when participants either get better at a task from practice or get worse at a task due to fatigue (called a fatigue effect).
  • Another type of order effect is a carryover effect. This occurs when there is contamination carrying over from one condition to the next.
    Example: You drink caffeinated coffee and then take a test. Then you drink decaf coffee and take a test. However, the caffeinated coffee is still having an effect on you during the second test.
32
Q

order effect?

A
  • Participants’ later responses are systematically affected by their earlier ones (fatigue, practice, or contrast effects).
  • Within-groups designs only
    Within-groups designs have the potential for a particular threat to internal validity: Sometimes, being exposed to one condition first changes how participants react to the later condition. Such responses are called order effects, and they happen when exposure to one level of the independent variable influences responses to the next level.
  • An order effect in a within-groups design is a confound, meaning that behavior at later levels of the independent variable might be caused not by the experimental manipulation but rather by the sequence in which the conditions were experienced.
33
Q

between-subjects design (independent groups design)?

A

different groups of participants placed into different levels of the independent variable;
- example: each participant was randomly assigned to either the large or medium serving bowl condition. In the notetaking study, some participants took notes on laptops and others took notes in longhand.
- Two types: posttest-only and pretest/posttest.

34
Q

posttest-only design

A

type of independent-groups experiment in which participants are randomly assigned to independent variable groups and are tested on the dependent variable just once.
Posttest-only designs satisfy all three criteria for causation (ask)
- e.g. The notetaking study is an example of a posttest-only design, with two independent variable levels. Participants were randomly assigned to a laptop condition or a longhand condition, and they were tested only once on the video they watched.

35
Q

pretest/posttest design

A

participants are randomly assigned to at least two different groups and are tested on the key dependent variable twice—once before and once after exposure to the independent variable.
Example: A study on the effects of mindfulness training. In this study, 48 students were randomly assigned to participate in either a 2-week mindfulness class or a 2-week nutrition class.
- One week before starting their respective classes, all students completed a verbal-reasoning section of a Graduate Record Examinations (GRE) test. One week after their classes ended, all students completed another verbal-reasoning GRE test of the same difficulty.
- While the nutrition group did not improve significantly from pretest to posttest, the mindfulness group scored significantly higher at posttest than at pretest.

36
Q

within-groups designs

A

each participant is presented with all levels of the IV;
- example: if you conducted a notetaking study and each participant engaged in both longhand and laptop notetaking.

37
Q

repeated measures design

A

a type of within-groups design in which participants are measured on the DV more than once (after exposure to each level of the IV).
Ex: Investigate whether a shared experience would be intensified even when people do not interact with the other person. In this study, the independent variable had two levels: sharing and not sharing an experience. Participants experienced both levels, making it a within-groups design. The dependent variable was participants’ rating of the chocolate. It was a repeated-measures design because each participant rated the chocolate twice (i.e., repeatedly).
- Each participant was joined by a female confederate. The two sat side-by-side, facing forward, and never spoke to each other.
- The participant was told that the two chocolates were different, but in fact they were exactly the same.
After tasting each chocolate, participants rated how much they liked it. The results showed that people liked the chocolate more when the confederate was also tasting it.

38
Q

concurrent measures design

A

participants are exposed to all levels of the IV at roughly the same time, and a single preference is the DV.
- e.g. Each infant viewed photos of men and women at the same time (both levels of the IV), and an experimenter recorded which face they looked at the longest. This study found that babies show a preference for looking at female faces, unless their primary caretaker is male.

39
Q

advantages of within group design

A
  1. Participants in your groups are equivalent because they are the same participants and serve as their own controls.
    e.g. Some people really like dark chocolate, and others do not. But in a repeated-measures design, people bring their same level of affection for chocolate to both conditions, so their individual liking for the chocolate stays the same. The only difference between the two conditions will be attributable to the independent variable (whether people were sharing the experience with the confederate or not).
    - The idea of “treating each participant as his or her own control” also means matched-groups designs can be treated as within-groups designs.
  2. These designs give researchers more power to notice differences between conditions. Statistically speaking, when extraneous differences (unsystematic variability) in personality, food preferences, gender, ability, and so on are held constant across all conditions, researchers can estimate the effect of the independent variable manipulation more precisely. (ask)
  3. Within-groups designs require fewer participants than other designs.
    e.g. Suppose a team of researchers is running a study with two conditions. If they want 50 participants in each condition, they will need a total of 100 people for an independent-groups design. But only 50 participants for the within-groups designs.
40
Q

disadvantages of within group design?

A
  1. Potential for order effects, which can threaten internal validity.
    — Solution: counterbalancing
  2. Might not be practical or possible
    Ex: Learning to ride a bike
  3. Experiencing all levels of the independent variable (IV) changes the way participants act (demand characteristics).
    E.g. Imagine a study that asks people to rate the attractiveness of two photographed people—one Black and one White. Participants in such a study might think, “I know I’m participating in a study at the moment; seeing both a White and a Black person makes me wonder whether it has something to do with prejudice.”
    Demand characteristics (aka experimental demand) occur when participants pick up on cues that lead them to guess the experiment’s hypothesis.
41
Q

What’s power? (ask)

A

the ability of a study to show a statistically significant result when an IV truly has an effect on a DV.
- A within-groups design, a strong manipulation, a larger number of participants, and less situation noise are all things that can improve the precision of our estimates. Of these, the easiest way to increase precision and power is to add more participants.

42
Q

Counterbalancing?

A

In a repeated-measures experiment, presenting the levels of the independent variable to participants in different sequences (orders) to control for order effects.
How do the experimenters decide which participants receive the first order of presentation and which ones receive the second? Through random assignment, of course! They might recruit, say, 50 participants to a study and randomly assign 25 of them to receive the order A then B, and the other 25 to receive B then A.

43
Q

Types of counterbalancing?

A

Two types of counterbalancing :
- Full counterbalancing occurs when all possible condition orders are presented; example: with two conditions, there are two orders; with three conditions, there are six orders. Full counterbalancing is practical when a within-groups experiment has only two or three levels of an independent variable.
- Partial counterbalancing occurs when only some of the possible condition orders are used. For example, a study with four conditions has 24 possible orders (4! = 24); a researcher could present a randomized order for each participant (by computer) or use a Latin square (in which each condition appears in each position at least once).
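Both kinds of counterbalancing can be sketched with the standard library (condition labels A–D are placeholders):

```python
import itertools

conditions = ["A", "B", "C", "D"]

# Full counterbalancing: every possible order (4! = 24 sequences).
full = list(itertools.permutations(conditions))
print(len(full))  # -> 24

# Partial counterbalancing via a simple cyclic Latin square:
# each condition appears in each serial position exactly once.
latin = [conditions[i:] + conditions[:i] for i in range(len(conditions))]
for row in latin:
    print(row)
```

The cyclic construction is just one easy Latin square; researchers often use randomized or balanced Latin squares instead.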

44
Q

Threats to internal validity and ways to correct for each threat

A
  • Potential threat 1
    Design confounds: Another variable accidentally varies systematically along with the IV.
    Correction: control variables.
  • Potential threat 2
    Selection effects: Systematically different types of participants are in the two groups.
    Independent-groups designs only.
    Correction: random assignment or matching.
  • Potential threat 3
    Order effects: Participants’ later responses are systematically affected by their earlier ones (fatigue, practice, or contrast effects).
    Within-groups designs only.
    Correction: counterbalancing.
45
Q

Interrogating Causal Claims with the Four Validities

A
  • Construct validity: How well were the variables measured and manipulated?
  • External validity: To whom or what can the causal claim generalize?
  • Statistical validity: How well do the data support the causal claim?
  • Internal validity: Are there alternative explanations for the outcome?
46
Q

Construct validity of causal claim

A
  1. Dependent variables: How well were they measured?
    DV: To interrogate construct validity in the notetaking study, you would start by asking how well the researchers measured their dependent variables: factual knowledge and conceptual knowledge.
    Mueller and Oppenheimer (2014) provided examples of the factual and conceptual questions they used, so you could examine them and evaluate whether they actually constitute good measures of factual learning (e.g., “What is the purpose of adding calcium propionate to bread?”) and conceptual learning (e.g., “If a person’s epiglottis was not working properly, what would be likely to happen?”).
  2. Independent variables: How well were they manipulated?
    IV: To interrogate the construct validity of the independent variables, you would ask how well the researchers manipulated (or operationalized) them. In the Mueller and Oppenheimer study, this was straightforward: People were given either a pen or a laptop. This operationalization clearly manipulated the intended independent variable.
47
Q

External validity of causal claim

A
  • Generalizing to other people?
    Remember that when you interrogate external validity, you ask about random sampling: randomly gathering a sample from a population. (In contrast, when you interrogate internal validity, you ask about random assignment: randomly assigning each participant in a sample to one experimental group or another.)
    Generalizing to other situations?
    For example, the notetaking study used five videotaped TED talk lectures. In their published article, Mueller and Oppenheimer (2014) reported two additional experiments, each of which used new video lectures. All three experiments found the same pattern, so you can infer that the effect of laptop notetaking does generalize to other TED talks. However, you can’t be sure from this study whether laptop notetaking would generalize to a live lecture class. To decide whether an experiment’s results can generalize to other situations, we need to conduct more research.
48
Q

What if external validity is poor? (ask)

A

Remember that in an experiment, the validity that is emphasized most is internal validity (i.e., experimental control). In order to achieve experimental control, researchers sometimes conduct their studies in artificial laboratory environments that may not represent the real world. Many experiments sacrifice real-world representativeness in exchange for internal validity.
Like most experimenters, Mueller and Oppenheimer first took care of testing their theory and teasing out the causal variable from potential confounds. In addition, running the experiment on a relatively homogeneous sample (such as college students) meant that unsystematic variability was less likely to obscure the effect of the independent variable. Replicating the study with several samples in a variety of contexts is a step saved for later.

49
Q

Statistical validity of causal claim

A
  • Is the difference statistically significant?
    How large is the effect? Effect size indicates the strength of covariance: typically, the larger the effect size, the stronger the causal effect. In Mueller and Oppenheimer’s studies, the original units for the dependent variable were the number of points people scored correctly. Participants were tested on both factual and conceptual questions, but we’ll focus on the conceptual questions here. People in the longhand condition earned an average of 4.29 points on the conceptual questions, compared with 3.77 in the laptop condition, so the effect size in original units is 4.29 − 3.77 = 0.52 points.
50
Q

What’s standardized effect size? Statistical validity of causal claim.

A
  • In experiments, researchers use an indicator of standardized effect size called d. This standardized effect size takes into account both the difference between means and the spread of scores within each group (the standard deviation). When d is large, the independent variable caused a large change in the dependent variable relative to how spread out the scores are. When d is small, the scores of participants in the two experimental groups overlap more. The effect size for the difference in conceptual test performance between the longhand and laptop groups was d = 0.38, meaning the longhand group scored 0.38 of a standard deviation higher than the laptop group. Psychologists often consider a d of 0.2 small, 0.5 moderate, and 0.8 large.
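A minimal sketch of how d is computed (difference between means divided by the pooled standard deviation). The means below come from the study; the SDs and sample sizes are hypothetical, chosen only so the result matches the reported d ≈ 0.38, since the actual SDs are not given here.

```python
from math import sqrt

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference: (m1 - m2) divided by
    the pooled standard deviation of the two groups."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                     / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Means from the study; SDs (1.37) and ns (30) are hypothetical
d = cohens_d(4.29, 3.77, 1.37, 1.37, 30, 30)
print(round(d, 2))  # 0.38
```

Dividing by the pooled SD is what makes d "standardized": it expresses the group difference in standard-deviation units, so effects measured on different scales can be compared.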
51
Q

Internal validity of causal claim

A

When you are interrogating causal claims, keep in mind that internal validity is often the priority. Experimenters isolate and manipulate a key causal variable, while controlling for all other possible variables, precisely so they can achieve internal validity. If the internal validity of an experiment is sound, you know that a causal claim is almost certainly appropriate. But if there is some confound, a causal claim would be inappropriate.
These fundamental internal validity questions are worth asking of any experiment:
1. Did the experimental design ensure that there were no design confounds, or did some other variable accidentally covary along with the intended independent variable? (Mueller and Oppenheimer made sure people in both groups saw the same video lectures, were in the same room, and so on.)
2. If the experimenters used an independent-groups design, did they control for selection effects by using random assignment or matching? (Random assignment controlled for selection effects in the notetaking study.)
3. If the experimenters used a within-groups design, did they control for order effects by counterbalancing? (Counterbalancing is not relevant to Mueller and Oppenheimer’s design because it was an independent-groups design.)

52
Q

internal vs external locus of control (Hock readings)

A

When people interpret the consequences of their behavior as controlled by luck, fate, or powerful others, they believe in an external locus of control (locus meaning location).
When people interpret their own choices and personality as responsible for their behavioral consequences, they believe in an internal locus of control.

53
Q

Implication of findings from Rotter’s study? (ask)

A

Can Rotter use his findings to predict how people will act in the future based on their performance?
Yes:
Significant correlations between I-E scores (I= Internal, E = External) and people’s behavior in many diverse situations, such as:
- gambling: externals would tend to engage in more unusual shifts in betting, called the “gambler’s fallacy” (such as betting more on a number that has not come up for a while on the basis that it is “due,” when the true odds of it occurring are unchanged).
- persuasion: The internals were found to be significantly more successful than externals in altering the attitudes of others. Conversely, other studies demonstrated that internals were more resistant to manipulation of their attitudes by others.
- smoking: An internal locus of control appeared to relate to self-control as well. Two studies discussed by Rotter found that (a) smokers tended to be significantly more external than nonsmokers and (b) individuals who were able to quit smoking were more internally oriented, even though internals and externals alike believed the health warning was true.
- achievement motivation: Each of the achievement-oriented factors was more likely to be found in those students who demonstrated an internal locus of control.
- conformity: Participants were allowed to bet (with money provided by the experimenters) on the correctness of their judgments. Those found to be internals conformed significantly less to the majority opinion and bet more money on themselves when making contrary judgments than did the externals.

54
Q

Wolpe- main points about how phobia is developed

A

The fundamental idea of behavioral therapy is that you have learned an ineffective behavior (the phobia), and now you must unlearn it. Wolpe did this by pairing the feared stimulus with a response that is incompatible with fear, such as deep relaxation.
This incompatibility of two responses is called reciprocal inhibition: when two responses inhibit each other, only one may exist at a given moment.

55
Q

Findings from Wolpe

A
  • Wolpe believed that the reason you have a phobia is that you learned it sometime in your life through the process of classical conditioning, by which some object became associated in your brain with intense fear.
  • Treatment for anxiety disorders (especially phobias) in children and adults continues to reflect Wolpe’s original idea.
  • Wolpe’s findings have been replicated.
56
Q

Wolpe relaxation training vs anxiety hierarchy

A

Relaxation training:
- The process involves tensing and relaxing various groups of muscles (such as the arms, hands, face, back, stomach, legs, etc.) throughout the body until a deep state of relaxation is achieved.
- The goal is for the person to be able to put themselves into a deep state of relaxation.
Construction of anxiety hierarchy:
- The list would begin with a situation that is only slightly uncomfortable and proceed through increasingly frightening scenes until reaching the most anxiety-producing event you can imagine.

57
Q

Wolpe - systematic desensitization (ask)

A

Systematic desensitization means decreasing your level of anxiety or fear gently and gradually.