QUIZZES after Midterm 2 Flashcards

1
Q

Q1. A researcher is conducting post-hoc tests for a one-way between-groups ANOVA. They have three different conditions with six people in each condition. Which of the following differences between means would be considered statistically different at alpha = .05?

4.17

3.69

2.85

both a) and b)

A

d) both a) and b)

We will need to find our critical q value for this study in Table B-5.

The columns use the number of treatments or conditions (in this case, 3). Don't use conditions - 1 here; the table asks for the number of treatments.

The rows use the within-groups degrees of freedom; here we take n - 1 for each group and add them up: 5 + 5 + 5 = 15.

Alpha of .05 is the non-bolded row.

Here we see the critical value is 3.67.

Now we need to make our comparisons. Both 4.17 AND 3.69 are more extreme than 3.67. 2.85 is not. Consequently, the answer is both a) and b).
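
As a hedged illustration of this lookup-and-compare step, here is a small Python sketch. It assumes SciPy 1.7 or later (which provides scipy.stats.studentized_range) and treats the three numbers as q values, just as the explanation above does.

```python
# Sketch only: reproduce the Table B-5 lookup in software and compare the
# observed q values against it. Assumes SciPy >= 1.7.
from scipy.stats import studentized_range

k = 3           # number of treatments (the table's columns)
df_within = 15  # 5 + 5 + 5
alpha = 0.05

q_crit = studentized_range.ppf(1 - alpha, k, df_within)  # about 3.67

for q_obs in (4.17, 3.69, 2.85):
    verdict = "significant" if q_obs > q_crit else "not significant"
    print(q_obs, verdict)
```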

LO9: Complete Tukey HSD post-hoc tests for a one-way between-groups ANOVA, and recognize how Bonferroni post-hoc tests are completed

2
Q

Q2. A researcher is interested in how different instructions affect reading comprehension. She asks college students to read a text on physics and randomly assigns them to one of three groups: (1) finish the reading as quickly as possible, (2) read at a normal pace, or (3) read at a normal pace, but pause after each paragraph for reflection. Within each group, students differ in how much they remember from the text. These differences reflect:

a) the within-group variance

b) the between-group variance

c) individual differences in reading ability

d) individual differences in following instructions

A

A) the within-group variance

The within-group variance reflects differences in performance within each group, due to factors such as reading speed, reading ability, attention, and comprehension skills. It is a measure of how much 'random error' there is simply because people differ and fluctuate in performance from day to day.

LO3: Identify and describe what the F statistic measures and how the F distributions are related to the t distributions

3
Q

Q3. You would like to explore the effect of temperature on memory. You will have three different conditions: moderate temperature (20 degrees Celsius), low temperature (10 degrees Celsius) and high temperature (30 degrees Celsius). You are particularly concerned that individual differences in memory and temperature performance will affect the results. What statistical test should you use to test your hypothesis?

a) within-groups ANOVA

b) paired-samples t test

c) independent-samples t test

d) between-groups ANOVA

A

A) within-groups ANOVA

In this case, since you are worried about participant differences, and there are more than 2 conditions, you should do a within-groups ANOVA. (Note that you would have to counterbalance the conditions across participants.)
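
For context, a one-way within-groups (repeated-measures) ANOVA could be run roughly like the sketch below. This assumes pandas and statsmodels are available; the column names ('subject', 'temperature', 'recall') and the scores are invented purely for illustration.

```python
# Hedged sketch of a repeated-measures ANOVA: every subject appears in every
# temperature condition, so subject-level differences can be separated out.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "subject":     [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "temperature": ["low", "moderate", "high"] * 3,
    "recall":      [7, 9, 6, 5, 8, 4, 6, 9, 5],
})

print(AnovaRM(data, depvar="recall", subject="subject",
              within=["temperature"]).fit())
```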

LO1: Recognize situations in which a one-way between-groups ANOVA and one-way within-groups ANOVA is used to analyze data

LO10: Recognize how a one-way within-groups ANOVA allows us to remove error variability from our data

4
Q

Q4.

a) 0.59

b) 0.69

c) 0.41

d) 1.46

A

see below

5
Q

Q5. Consider this fictional scenario: A researcher uses data collected from the Canadian census to determine whether IQ score is affected by level of education (measured as high school, college/university, and post-graduate degree). They find the following data:

high school: M = 97 (s = 3)

college/university: M = 101 (s = 2)

post-graduate degree: M = 106 (s = 20)

Which of the following assumptions is not met?

A) The underlying population distribution is normal

B) Homoscedasticity (homogeneity of variances)

C) None of the assumptions is met

D) The participants are randomly selected from the population

A

B) HOMOSCEDASTICITY

  • The Canadian census uses a random sample of Canadians, so assumption 1 is met.
  • IQ is normally distributed in the population, so assumption 2 is met.
  • The problem here is the variability in each group. The post-graduate degree group has a much, much higher standard deviation. Consequently, the variances are very different (and so homoscedasticity is not met).

Homoscedasticity means that the variances of the dependent variable (IQ scores in this case) are approximately equal across all levels of the independent variable (level of education - high school, college/university, and post-graduate degree) – and they aren’t.
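
In practice, homoscedasticity is often checked with Levene's test. The sketch below assumes SciPy and uses made-up IQ scores whose spread mimics the standard deviations in the question.

```python
# Sketch only: a small p-value from Levene's test suggests the groups'
# variances are not equal (homoscedasticity violated).
from scipy.stats import levene

high_school   = [95, 97, 99, 96, 98]       # tight, like s = 3
college       = [100, 101, 102, 99, 103]   # tight, like s = 2
post_graduate = [80, 130, 95, 120, 105]    # very spread out, like s = 20

stat, p = levene(high_school, college, post_graduate)
print(f"Levene W = {stat:.2f}, p = {p:.3f}")
```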

LO5: Accurately use the language of ANOVAs and identify the three assumptions of the ANOVA

6
Q

Q6. A researcher reports the following statistics in their paper:

F(2, 15) = 4.32, p < .05, ω² = 0.36

If this finding is in error, the researcher has made what type of error?

A) Type II error

B) sampling bias

C) statistical error

D) Type I error

A

D) TYPE I ERROR

In the given scenario, the p-value (p < .05) indicates that the result is statistically significant at the 0.05 level.

The effect size ω² = 0.36 represents a relatively large effect, suggesting a substantial difference between groups.

Type I error (false positive): this occurs when the researcher incorrectly rejects a true null hypothesis. In other words, the researcher concludes that there is a significant effect (i.e., rejects the null hypothesis, which states there is no difference) when, in reality, there is no effect in the population.

LO7: Conduct all 6 steps of a one-way between-groups ANOVA and interpret the results.

7
Q

Q7.

A

ANSWER

8
Q

Q8. A researcher conducts three statistical tests using an alpha rate of 0.05. By completing multiple tests without any correction, he is:

A) HARKing

B) increasing his F ratio

C) increasing his Type I error rate

D) increasing his Type II error rate

A

C) increasing his Type I error rate

Conducting multiple tests means you are more likely to find something just by chance (a Type I error). Remember, we accept that 1 out of every 20 findings will be a Type I error. Consequently, the more tests we run, the more likely we are to hit that 1 in 20. That's why we need to make statistical corrections when we have more than two groups.

This is known as the problem of multiple comparisons or multiple testing. As more tests are performed, the likelihood of obtaining a significant result purely by chance (Type I error) increases. To control the overall Type I error rate, researchers can use various correction methods to adjust the significance threshold for each individual test, thus maintaining the desired level of alpha across all the tests. Failure to correct for multiple tests can lead to an inflated Type I error rate and an increased risk of false positives.
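
A quick arithmetic check of this "more tests, more chances" logic, using the three uncorrected tests at alpha = .05 from the question:

```python
alpha = 0.05
n_tests = 3

# Probability of at least one Type I error across three independent tests
familywise = 1 - (1 - alpha) ** n_tests
print(familywise)        # about 0.14, nearly triple the nominal .05

# One common correction: Bonferroni divides alpha by the number of tests
print(alpha / n_tests)   # about 0.0167 per test
```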

LO2: Recognize and explain why we cannot perform multiple t tests when comparing three or more groups

9
Q

Q9. A researcher conducts a study on WEIRD participants. She makes a note in her paper stating, “I expect my results to generalize to any sample of undergraduate students in Canada.” This is known as a(n) _______________ statement.

A) sampling methodology

B) constraint on generality

C) population of interest

D) WEIRD bias

A

B) CONSTRAINT ON GENERALITY

The researcher is acknowledging that the sample used in the study consists of WEIRD participants (Western, Educated, Industrialized, Rich, and Democratic), which is not representative of the broader population. By making this note, the researcher is recognizing that the findings may not apply to individuals outside the WEIRD population and that the generalizability of the results is limited to Canadian undergraduate students.

A constraint on generality (COG) statement is used to help researchers explain who they believe their research can be generalized to. This helps keep us from over-generalizing our results.

LO11: Recognize critiques of ‘WEIRD’ samples in psychology and actions researchers are taking to improve how our research is generalized

10
Q

Q10. A researcher is conducting a one-way between-groups ANOVA. He has four experimental conditions with 18 participants in each of the conditions. What is the critical value for alpha = .01?

A) 2.74

B) 2.75

C) 3.62

D) 4.10

A

D) 4.10

To find our critical value, we go to Table B-3. We have:

df_between = 3

df_within = 17 + 17 + 17 + 17 = 68

We look at the table with 3 for our columns and round down to 65 for the row. With alpha of .01, we find the value of 4.10 (the bolded one).
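
If you check the same value in software rather than the table (a sketch assuming SciPy), the exact denominator df of 68 can be used instead of rounding down to 65, giving a value just under the table's 4.10:

```python
from scipy.stats import f

alpha = 0.01
df_between, df_within = 3, 68
print(f.ppf(1 - alpha, df_between, df_within))  # roughly 4.09
```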

LO4: Use the F table to determine a critical value

LO7: Conduct all 6 steps of a one-way between-groups ANOVA and interpret the results

11
Q

QUIZ:

Q1. If a person had two independent variables with three levels each, and different participants were in each group, what type of ANOVA would they need to conduct?

a) One-Way Within-Groups ANOVA

b) Two-Way Within-Groups ANOVA

c) Two-Way Between-Groups ANOVA

d) One-Way Between-Groups ANOVA

A

C) 2-WAY Between-Groups ANOVA

  • Since there are two independent variables, we call it a two-way ANOVA; since different participants are in each group, it is between-groups (see the sketch below).
  • The "Between-Groups" aspect refers to the fact that participants are assigned to separate groups based on the levels of the independent variables. The "Two-Way" aspect indicates that two independent variables are being considered simultaneously.
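
A hedged sketch of such a design, assuming pandas and statsmodels; the column names ('factor_a', 'factor_b', 'score') and scores are invented, and each of the 18 rows is a different participant.

```python
# Sketch only: 3 x 3 between-groups design, two participants per cell.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "factor_a": ["a1"] * 6 + ["a2"] * 6 + ["a3"] * 6,
    "factor_b": ["b1", "b1", "b2", "b2", "b3", "b3"] * 3,
    "score":    [3, 4, 5, 6, 4, 5,
                 6, 7, 8, 9, 7, 8,
                 5, 6, 9, 10, 8, 9],
})

model = smf.ols("score ~ C(factor_a) * C(factor_b)", data=data).fit()
print(anova_lm(model, typ=2))   # main effects of A and B, plus A x B interaction
```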

See also The Language and Assumptions of ANOVA in Chapter 12

12
Q

Q2. To find the effect size for an ANOVA, we calculate:

a) Cohen’s d

b) R2

c) the p value

d) Tukey's HSD

A

B) R²

  • R² is a common measure of effect size for ANOVA.
  • R² represents the proportion of variance in the dependent variable that is accounted for by the independent variable(s) in the ANOVA model. It measures the strength of the relationship between the independent variables and the dependent variable.
  • R² ranges from 0 to 1, with higher values indicating a larger effect size and a stronger relationship between the independent variables and the dependent variable.

Why not a), c), or d)? Cohen's d is a measure of effect size used when comparing means between two groups, not in ANOVA. The p value represents the statistical significance of the results, indicating the probability of obtaining the observed effect or a more extreme one under the null hypothesis. Tukey's HSD (honestly significant difference) is a post-hoc test used to determine which specific group means differ significantly after conducting an ANOVA.
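
As a small numeric sketch of the idea (invented data, assuming NumPy): for a one-way ANOVA, R² is the between-groups sum of squares divided by the total sum of squares.

```python
import numpy as np

groups = [np.array([3.0, 4.0, 5.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([4.0, 5.0, 6.0])]

grand_mean = np.mean(np.concatenate(groups))
ss_total   = sum(((g - grand_mean) ** 2).sum() for g in groups)
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

print(ss_between / ss_total)   # R^2 (eta squared): 0.70 for these made-up data
```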

See also Testing for the One-Way Between-Groups ANOVA in Chapter 12.

13
Q

Q3. It is advantageous to use a within-groups ANOVA compared to a between-groups ANOVA because a within-groups ANOVA:

a) has greater variance of scores

b) is more likely to include more research participants in the study

c) is more likely to discover a research result that has cause-effect implications

d) reduces error since the same participants contribute to each condition

A

d) reduces error since the same participants contribute to each condition

By using the same participants in each group, we can reduce some of the error variability in our study (that based on our participants’ natural tendencies).

See also Chapter 13: The Benefits of Within-Groups ANOVA

14
Q

Q4. In a one-way between-group analysis of variance, how does the magnitude of the mean differences from one condition to another contribute to the F-ratio?

a) The mean differences contribute to the denominator of the F-ratio

b) The mean differences contribute to both the numerator and denominator of the F-ratio

c) The mean differences contribute to the numerator of the F-ratio

d) The sample mean differences do not influence the F-ratio

A

c) The mean differences contribute to the numerator of the F-ratio

  • The F ratio examines whether the differences between groups are larger than the differences within the groups.
  • If the ratio is 1, the differences between groups are no larger than those within.
  • However, if it is larger than 1, the differences between groups are larger than those within. Thus, the differences between the groups go in the numerator (see the sketch below).
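
A minimal numeric sketch of that logic (assuming NumPy and SciPy, with invented scores): the group-mean differences drive MS_between in the numerator, and the spread inside each group drives MS_within in the denominator.

```python
import numpy as np
from scipy.stats import f_oneway

g1, g2, g3 = np.array([3, 4, 5]), np.array([6, 7, 8]), np.array([4, 5, 6])

grand = np.mean(np.concatenate([g1, g2, g3]))
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in (g1, g2, g3))
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in (g1, g2, g3))

ms_between = ss_between / 2   # df_between = 3 groups - 1
ms_within  = ss_within / 6    # df_within  = 3 groups x (3 - 1)
print(ms_between / ms_within)           # F by hand: 7.0

print(f_oneway(g1, g2, g3).statistic)   # same F from SciPy: 7.0
```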
15
Q

Q6. When comparing three or more groups, we use ANOVA because conducting multiple t tests would result in an increased likelihood of a:

a) Type III error

b) All of these are correct.

c) Type I error

d) Type II error

A

C) TYPE 1 ERROR

  • If we do multiple tests, we increase the chance of finding a significant difference by random chance. Thus, we increase the probability of a Type I error.
  • See also Type I Errors When Making Three or More Comparisons in Chapter 12.
16
Q

Q7. One of the assumptions for ANOVA is that the variances between groups are equal; we refer to this as:

a) standard error of the mean

b) pooled variance

c) homoscedasticity

d) heteroscedasticity

A

C) HOMOSCEDASTICITY

17
Q

Q8. The graphs below provide three different distributions. Based on a visual inspection of these distributions, which would give you the best outcome for an ANOVA?

A

C)

  • In this case, c) is the most likely to show a significant ANOVA. Each group's distribution is narrow, showing little within-group variability, while the differences between groups are large, showing a lot of between-group variability. This should give us a large F statistic.
  • See also the section on The Logic and Calculations of the F Statistic in Chapter 12.
18
Q

Q9. A _____ is a statistical procedure frequently carried out after we reject the null hypothesis in an analysis of variance. It allows us to make multiple comparisons among several means.

a) nonparametric test

b) parametric test

c) post hoc test

d) planned comparison.

A

C) POST HOC TEST

  • A post hoc test is an 'after-the-fact' test used to look at differences between specific pairs of groups.
  • Two common post hoc tests are Tukey's HSD and the Bonferroni test.
19
Q

Q10. One difference between the one-way between-groups ANOVA and the one-way within-groups ANOVA is that with the one-way within-groups ANOVA:

a) you are likely to have more participants in your study

b) you have to calculate the sum of squares

c) you have to be concerned about order effects

d) you have to randomly assign participants to groups

A

C) be concerned about order effects

  • Since each of our participants is in each condition of our study, we again need to worry about order effects.
  • Order effects were discussed in Chapter 10.
  • In a one-way within-groups ANOVA, also known as a repeated measures ANOVA, participants are exposed to all levels or conditions of the independent variable. This means that each participant experiences all conditions, typically in a different order. The order in which the conditions are presented can introduce order effects, such as learning, fatigue, or carryover effects, which may impact participants’ responses.
20
Q

WEEK 12 QUIZ:

Q1. What type of graph is particularly useful for displaying a correlation?

histogram

scatterplot

bar graph

pie chart

A

scatterplot

  • Scatterplots are excellent for visually depicting the relationship between two variables and identifying the nature of the correlation (positive, negative, or no correlation). If the points on the scatterplot form a recognizable pattern or trend, it indicates the strength and direction of the correlation between the two variables. For example, a positive correlation shows an upward trend, while a negative correlation shows a downward trend.
21
Q

Q2. You are given a test of your processing speed on three different days. On day 1, you score 89. On day 2, you score 31. On day 3, you score 56. This suggests that the test is:

A) neither reliable nor valid

B) both reliable and valid

C) reliable

D) valid

A

A) neither reliable nor valid

  • Reliability: The large fluctuations in scores between the days suggest that the test might not be reliable. If the test were highly reliable, we would expect the scores to be more consistent across the three measurement occasions.
  • Validity: a test cannot be valid if it is not reliable; without consistent scores, we cannot claim the test measures what it intends to measure.
22
Q

Q3. Dr. McMann conducts a research study and finds that, on average, people with more education earn more money throughout their lifetimes than those with less education. What should Dr. McMann conclude about the two variables—education level and lifetime income?

A) They are negatively correlated

B) There is a significant difference between the two variables

C) They are positively correlated

D) There is no correlation between the variables

A

C) They are positively correlated

23
Q

Q4. A correlation coefficient is a statistic that:

A) quantifies the number of independent variables in an experiment

B) tells us whether there is a significant difference between two variables

C) quantifies the relation between variables

D) tells us how much variance there is in a distribution

A

C) quantifies the relation between the variables.

  • It measures the strength and direction of the linear association between TWO SCALE variables.
  • It takes values between -1 and +1.
  • A positive correlation = a positive linear relationship.
  • A negative correlation = a negative linear relationship.
  • 0 means no linear relationship between the variables. Not that there is NO relationship, just that there is no LINEAR relationship (see the sketch below).
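
A tiny sketch of computing r, assuming SciPy and using made-up scores for two scale variables:

```python
from scipy.stats import pearsonr

hours_studied = [1, 2, 3, 4, 5, 6]
exam_score    = [55, 60, 62, 70, 75, 80]

r, p = pearsonr(hours_studied, exam_score)
print(f"r = {r:.2f}, p = {p:.3f}")   # r near +1: a strong positive linear relationship
```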
24
Q

Q5. A psychologist is interested in whether working memory is influenced by sleep loss. The psychologist administers a measure of working memory to a group of subjects at 8 A.M. on Day One of the study and then again at 8 A.M. on Day Two of the study, after keeping the subjects awake the entire night. Does sleep loss affect working memory? What is the statistical analysis we would perform to answer this question?

A) z test

B) paired-sample t test

C) single-sample t-test

D) independent-sample t test

A

B) PAIRED-SAMPLES T TEST

  • The paired-samples t test, also known as a dependent-samples t test, is used to compare the means of two related groups, where each participant is measured twice under different conditions (see the sketch below).
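
A hedged sketch of that analysis, assuming SciPy and invented working-memory scores for the same six participants on both days:

```python
from scipy.stats import ttest_rel

day1 = [8, 7, 9, 6, 8, 7]   # well rested
day2 = [6, 6, 7, 5, 7, 5]   # same participants after sleep loss

t, p = ttest_rel(day1, day2)
print(f"t = {t:.2f}, p = {p:.3f}")
```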
25
Q

Q6.

A) SMALL
B) LARGE
C) IMPOSSIBLE TO DETERMINE
D) MEDIUM

A

LARGE

  • The closer the values are to a straight line, the closer they are to a perfect correlation. These dots all cluster close to a line drawn down the middle, indicating that this is a large negative correlation.
26
Q

Q7. When a researcher obtains a significant correlation between two variables in a study that she has conducted, it is appropriate to draw all of the following types of conclusions EXCEPT:

A) how strong the relationship is between the two variables

B) what the direction of the relationship is between the two variables

C) whether there is a cause-effect relationship between the two variables

D) that there is a relationship between the two variables

A

C) whether there is a cause-effect relationship between the two variables

27
Q

Q8. Which of the following correlations reflects the strongest relationship between variables?

A) 0.49

B) 1.30

C) 0.03

D) -0.64

A

D) -0.64

  • A correlation of 1.30 is impossible; correlation coefficients cannot exceed 1.0 or fall below -1.0. Of the remaining options, -0.64 has the largest absolute value, and strength depends on magnitude, not sign.
28
Q

Q9. When a Pearson r correlation coefficient has a negative value (e.g., –0.92), it means that:

A) you obtained negative results about your hypothesis

B) you have a confound in your study

C) there is no relationship between the variables

D) as the value of one variable increases, the other variable tends to decrease

A

D)

  • When a Pearson correlation coefficient (r) has a negative value, such as –0.92, it indicates a negative correlation between the two variables being studied. A negative correlation means that as the value of one variable increases, the value of the other variable tends to decrease, and vice versa.
29
Q

Q10. When conducting hypothesis testing for the Pearson correlation coefficient, r, we calculate degrees of freedom by subtracting 2 from the sample size. In Pearson correlation, the sample size is:

A) the number of scores

B) the number of participants

C) the number of variables

D) all of the above

A

B) THE NUMBER OF PARTICIPANTS

  • In hypothesis testing for the Pearson correlation coefficient (r), the sample size refers to the number of participants or pairs of scores. Each participant contributes a pair of scores for the two variables being correlated. For example, if you have data from 30 individuals and you are calculating the correlation between their scores on two variables, the sample size would be 30 (see the sketch below).
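
A small sketch of the test itself (assuming NumPy and SciPy; r = 0.45 and N = 30 are invented numbers): the correlation is converted to a t statistic on N - 2 degrees of freedom.

```python
import numpy as np
from scipy.stats import t as t_dist

r, N = 0.45, 30
df = N - 2
t_stat = r * np.sqrt(df / (1 - r ** 2))
p_value = 2 * t_dist.sf(abs(t_stat), df)   # two-tailed p
print(df, round(t_stat, 2), round(p_value, 3))
```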
30
Q

QUIZ 13:

Q1. A simple regression allows us to use _____ independent variable(s), while a multiple regression analysis allows us to use _____ independent variables.

A) one; two or more

B) two or more; three or more

C) one; up to three

D) one or two; three or more

A

A) one; two or more
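
A hedged sketch of the distinction, assuming pandas and statsmodels with invented columns: the simple regression uses one predictor, the multiple regression uses two.

```python
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "exam":  [60, 65, 70, 72, 80, 85],
    "hours": [2, 3, 4, 5, 6, 7],
    "sleep": [6, 7, 6, 8, 7, 8],
})

simple   = smf.ols("exam ~ hours", data=data).fit()          # one IV
multiple = smf.ols("exam ~ hours + sleep", data=data).fit()  # two or more IVs
print(simple.params)
print(multiple.params)
```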

31
Q

Q2: _____ refers to the tendency of scores that are particularly high or low to drift toward the mean over time.

A) Standard error of the mean

B) Standard error of estimate

C) Regression to the mean

D) Restriction of range

A

C) Regression to the Mean

32
Q

Q3: The regression line is also known as the

A) correlation line

B) intercept line

C) line

D) line of best fit

A

D) LINE OF BEST FIT

33
Q

Q4: How is a correlation different from a regression analysis?

A) A regression enables us to make predictions, while a correlation describes relationships.

B) A regression analysis uses continuous variables, while a correlation analysis uses categorical variables.

C) A correlation describes relationships, while a regression analysis tests for causation.

D) A correlation describes only one type of relationship, while a regression describes multiple relationships.

A

A)

  • Both analyses use two (or more) scale variables. However, regression looks at prediction whereas correlation is simply relation. See also Chapter 16 Prediction versus Relation.
34
Q

Q6. In the simple linear regression formula, the _____ is the predicted value for Y when X is equal to 0, the point at which the line crosses the y axis.

A) intercept

B) residual

C) standard error

D) slope

A

A) INTERCEPT

35
Q

Q7. Dr. Marshall is conducting a correlational study with 32 participants. What is the degrees of freedom for his study?

A) 31

B) 30

C) 34

D) 28

A

B) 30 (df = N - 2 = 32 - 2 = 30)

36
Q

Q8. In an experimental study on the impact of exposure to criticism on self-esteem, exposure to criticism would be the _______ variable.

A) independent

B) confounding

C) replicated

D) dependent

A

A) INDEPENDENT

37
Q

Q9. An independent variable that makes a separate and distinct contribution in the prediction of a dependent variable, as compared with another variable, is called:

A) an orthogonal variable

B) a correlated variable

C) a confounding variable

D) a discrete variable

A

A) AN ORTHOGONAL VARIABLE

  • The concept of orthogonal variables is important in multiple regression: we don't want to use non-orthogonal variables as predictors. For example, including both a Beck Depression Inventory score and a Major Depression Inventory score is no good, because these are not orthogonal variables; they overlap.
38
Q

Q10. Sara's score on the test is the same as the mean for her class. The test scores (variable X) are also correlated with reading speed (variable Y). What would Sara's predicted z score for reading speed be?

A) 1

B) 0

C) -1

D) We do not have enough information to answer this question

A

B) 0

  • Don’t forget, if a score falls on the mean, then the z score is zero. Thus, her z score for variable X is zero. This also means her predicted Y score will be at the mean (which also has a z score of zero).
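
A tiny numeric check of that logic (r = 0.6 below is an invented value; the point is that it doesn't matter):

```python
r = 0.6
z_x = 0.0                # Sara scored exactly at the class mean
z_y_predicted = r * z_x  # standardized regression: predicted z_Y = r * z_X
print(z_y_predicted)     # 0.0
```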

Tricky, right? See also Chapter 16: Determining the Regression Equation