Summa Week 9 Flashcards

(128 cards)

1
Q

When we want to explore whether the effects of different treatments influence the dependent measure, we can use tests of

A

t-test - compares two means; one independent variable (predictor)
ANOVA - an extension of the t-test
- compares several means
- can handle several IVs

2
Q

If we want to compare several means why don’t we compare pairs of means with t-tests?

A

t-tests can’t handle several independent variables at once, and running many of them inflates the Type I error rate

3
Q

What is PC?

A

error rate per comparison

4
Q

PC is the prob of making a ______ error on a ____ comparison, assuming the null hypothesis is ____

A

Type I
single
true

5
Q

If alpha = 0.05, there is a 5% chance that you are rejecting the null hypothesis _______

A

incorrectly

6
Q

If we ran a bunch of t-tests at a = .05, then the per comparison error rate would be

A

.05

7
Q

FW?

A

the familywise error rate

8
Q

FW is the prob of _____ rejecting at least one null hypothesis in a set (or family) of c comparisons, assuming that each of the c null hypotheses is ____

A

incorrectly

true

9
Q

Familywise alpha is

A

FW = 1 - (1 - a’)^c

10
Q

FW where a’ is the

A

per comparison error rate

11
Q

FW where c is the

A

number of comparisons

12
Q

When we have k = 6 (k is the number of experimental conditions), we will have c =

A

c = k(k-1)/2 = 6*(6-1)/2 = 15 comparisons

13
Q

If we have a error rate per comparison of a’ = .05, then familywise alpha is

A

FW = 1 - (1-.05)^15 = .537
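As a quick check of the two cards above, here is a minimal Python sketch (variable names are just for illustration) that reproduces c = 15 and FW = .537:

```
# Familywise error rate for all pairwise comparisons among k groups.
k = 6                          # number of experimental conditions
alpha_pc = 0.05                # per comparison error rate (a')

c = k * (k - 1) // 2           # number of pairwise comparisons
fw = 1 - (1 - alpha_pc) ** c   # familywise error rate

print(c)                       # 15
print(round(fw, 3))            # 0.537
```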

14
Q

The aim of ANOVA is to determine if treatment effect is present by comparing ______ and _______ _____

A

errors

treatment effect

15
Q

What is error also known as?

A

random variance

16
Q

What are random errors?

A

individual differences

17
Q

What are measurement errors?

A

problems of accurately collecting data

18
Q

What is systematic variance?

A

treatment effect, the action of the IV

19
Q

When the population means are equal, the differences among the group means will reflect the operation of _________ _____ alone (no ______ _______)

A

experimental error

treatment effect

20
Q

What is the theory of ANOVA?

A

SS total = SS treatment + SS error

21
Q

SS total =

A

total sum of squares = variability between scores

22
Q

SS treatment =

A

model sum of squares - variability due to the experimental manipulation

23
Q

SSerror =

A

residual sum of squares - variability due to individual differences in performance

24
Q

What is the F-ratio?

A

MStreatment/MSerror

25
If the model explains a lot more variability than it can't explain, then the experimental manipulation has had a _______ _______ on the outcome (DV)
significant effect on the outcome
26
What is within-groups variability?
within-groups variability is intragroup - the variability of scores within each group (error/individual differences)
27
what is between-groups variability?
between-groups variability is intergroup - the variability between the group means (treatment effect plus error)
28
How many degrees of freedom impact the shape of the F distribution?
2
29
Is ANOVA two-tailed?
No, only positive
30
Like a t-test, ANOVA tests the null hypothesis that the ______ are the same
means
31
What is the experimental hypothesis?
means differ
32
What type of test is ANOVA?
an Omnibus test
33
Omnibus tests are described as
tests for an overall difference between groups; tells us that the group means differ; does not tell us where the significant difference lies
34
Assumptions for 1-way ANOVA: random sampling
each sample is a random sample from its population
35
Assumptions for 1-way ANOVA: random sampling robustness?
it is considered inappropriate to conduct ANOVA if this is violated, though some argue ANOVA is robust to violations
36
Assumptions for 1-way ANOVA: independence of cases
each case is NOT influenced by other cases; ANOVA is NOT robust to violations of this assumption
37
Assumptions for 1-way ANOVA: normality
the DV is normally distributed in each population (robust to violations provided the sample size (N) is large and the group ns are EQUAL)
38
What is assumed for 1-way ANOVA to be robust?
sample size is large, and group sizes are equal
39
Assumptions for 1-way ANOVA: HoV
the degree of variability (variance) in each population is equivalent
40
Assumptions for 1-way ANOVA: HoV robustness
robust to violations if the sample size is large and groups are about equal
41
What are the 2 types of means of interest in ANOVA?
marginal means (overall mean of each group) and the Grand Mean, or M..
42
What is the sum of squares for SStotal
the differences between the individual scores and the marginal means (within / SSerror) plus the differences between the marginal means and the grand mean (between / SStreatment)
43
What is df for sstreatment?
df = k-1
44
What is df for error?
df = N - k
45
What is df for sstotal?
df = N -1
46
What is the formula for ANOVA?
F(dfbetween, dfwithin) = MStreatment/MSerror
47
What is MStreatment?
SStreatment / dftreatment = MStreatment
48
What is MSerror?
SSerror/dferror = MSerror
49
What is SS total?
SSbetween (sum of squares of the model) + SSwithin (sum of squares of individual differences)
50
If F value is _____ than F Critical value, we can reject our ___ hypothesis and conclude that not all group/sample ________ are equal
greater; null; means
51
What is a condition of ANOVA?
the calculations don't tell us exactly where the differences lie
52
How do we determine where differences in means occur?
using post-hoc tests
53
What is on ZW's favourite slide?
SStotal (total variance in the data) = SStreatment (variance due to the treatment) + SSerror (errors in model)
54
what is the calculation for SStotal?
sum of (each individual score - grand mean)^2, or... sgrand^2 x (N-1). E.g. grand mean M.. = 3.467, marginal means are 2.20, 3.20, and 5.00, sgrand^2 = 3.124, n = 5, k = 3, therefore N = n x k = 15, so SStotal = 3.124(15-1) = 43.74
55
What is the calculation for SS total?
sgrand^2 x (dftotal)
56
What is the calculation for SStreatment?
n1 x (marginal mean group 1 - grand mean)^2 + n2 x (marginal mean group 2 - grand mean)^2 + n3 x (marginal mean group 3 - grand mean)^2 = 5(2.20 - 3.467)^2 + 5(-0.267)^2 + 5(1.533)^2 = 8.025 + 0.355 + 11.755 = 20.135
57
What is the calculation for SSerror?
sum of (each score - its group's marginal mean)^2 = sgroup1^2 x (n1 - 1) + sgroup2^2 x (n2 - 1) + sgroup3^2 x (n3 - 1). E.g. with group variances 1.70, 1.70 and 2.50 and n = 5 per group: 1.70(5-1) + 1.70(5-1) + 2.50(5-1) = 23.60
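The three cards above can be reproduced end to end. Below is a minimal Python sketch, assuming the same summary statistics as the worked example (group means 2.20, 3.20, 5.00; group variances 1.70, 1.70, 2.50; n = 5 per group); it recomputes SStreatment, SSerror, SStotal, the mean squares, and the resulting F ratio:

```
# One-way ANOVA from summary statistics (values taken from the worked example above).
group_means = [2.20, 3.20, 5.00]   # marginal means
group_vars  = [1.70, 1.70, 2.50]   # sample variances within each group
n = 5                              # participants per group
k = len(group_means)               # number of groups
N = n * k                          # total sample size

grand_mean = sum(group_means) / k  # equal n, so the grand mean is the mean of the means

ss_treatment = sum(n * (m - grand_mean) ** 2 for m in group_means)   # between groups
ss_error     = sum(v * (n - 1) for v in group_vars)                  # within groups
ss_total     = ss_treatment + ss_error

ms_treatment = ss_treatment / (k - 1)    # df treatment = k - 1 = 2
ms_error     = ss_error / (N - k)        # df error = N - k = 12
F = ms_treatment / ms_error

print(round(ss_treatment, 2), round(ss_error, 2), round(ss_total, 2))  # 20.13 23.6 43.73
print(round(F, 2))                                                     # about 5.12 with df (2, 12)
```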
58
1-way ANOVA dftotal =
N - 1
59
dftreatment 1-way anova is
k - 1
60
dferror 1-way anova is
N-k
61
SStotal =
sstreatment + ss error
62
dftotal =
dftreatment + dferror
63
MStreatment =
SStreatment/dftreatment
64
MSerror =
SSerror/dferror
65
F(dfbetween,dfwithin) =
MStreatment/MSerror
66
If F value is ______ than F critical value, we can _____ our null hypothesis and conclude that...
greater; reject; not all group/sample means are equal
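A minimal sketch of this decision rule in Python, using scipy's F distribution and the degrees of freedom carried over from the earlier worked example (df = 2 and 12, an assumption for illustration):

```
from scipy import stats

df_between, df_within = 2, 12      # from the earlier worked example
F_observed = 5.12                  # F ratio from the earlier worked example
alpha = 0.05

F_critical = stats.f.ppf(1 - alpha, df_between, df_within)  # upper-tail critical value

if F_observed > F_critical:
    print("Reject H0: not all group means are equal")
else:
    print("Fail to reject H0")
```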
67
What is the downside to 1-way anova?
we don't know where the differences lie, therefore we use post-hoc tests
68
Significant t and F ratios show that there is ____ ____ of the treatment, a real _____ between the groups that _____ be explained by chance
a real effect; difference; cannot
69
effect size measures how
big the effect of the treatment is
70
a significant effect depends on
the size of the mean differences, the size of the error variance, and the degrees of freedom
71
How to determine raw effect size (looking)
just looking at the raw difference between groups
72
how to determine raw effect size (depending)
it can be reported as either the largest or the smallest group difference
73
how to determine raw effect size (comparisons)
CANNOT be compared across samples or experiments
74
Standardized effect size ...
expresses raw mean differences in standard deviation units
75
Another name for standardized effect size is
Cohen's d
76
What is a small, medium and large effect for Cohen's d, respectively?
.2, .5, .8
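A minimal sketch of a standardized effect size (Cohen's d) for two groups; the raw scores below are hypothetical, purely for illustration:

```
import statistics

# Hypothetical scores for two groups (illustration only).
group_a = [2, 3, 1, 4, 2]
group_b = [5, 6, 4, 5, 5]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled standard deviation, then the raw mean difference in SD units.
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
d = (mean_b - mean_a) / pooled_sd
print(round(d, 2))   # interpret against the .2 / .5 / .8 benchmarks above
```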
77
What is eta-squared?
an OVERESTIMATION of the degree of overlap in the population. η² = (SStotal - SSerror)/SStotal, or equivalently SStreatment/SStotal
78
What is omega-squared?
a better estimate of the percent of overlap in the population than eta squared, it corrects for the size of error and the number of groups
79
What is the formula for omega squared?
ω² = (SStreatment - (k - 1)MSerror) / (SStotal + MSerror)
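A minimal Python sketch that plugs the SS and MS values from the earlier worked example into the two formulas above (those values are carried over as an assumption):

```
# Values carried over from the earlier worked example.
ss_treatment, ss_error = 20.135, 23.60
ss_total = ss_treatment + ss_error
k, N = 3, 15
ms_error = ss_error / (N - k)

eta_sq = ss_treatment / ss_total                                        # overestimates overlap
omega_sq = (ss_treatment - (k - 1) * ms_error) / (ss_total + ms_error)  # corrected estimate

print(round(eta_sq, 2), round(omega_sq, 2))   # about 0.46 and 0.35
```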
80
η² is a...
sample estimate of the proportion of the variance in the DV that is accounted for by the IV
81
What do you use for POPULATION estimates of effect size?
ω², or omega squared
82
What is partial eta-squared?
the proportion of the total variability attributable to a given factor
83
partial η² formula
SStreatment / (SStreatment + SSerror)
84
partial omega squared?
partial ω² = (SStreatment - (k - 1)MSerror) / (SStreatment + (N - (k - 1))MSerror)
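Continuing the same sketch with the worked-example values (an assumption carried over from above): with only one factor, as in this one-way example, the partial forms give the same values as η² and ω²; they only diverge once other factors contribute to SStotal.

```
# Same worked-example values as before (one-way design, single factor).
ss_treatment, ss_error = 20.135, 23.60
k, N = 3, 15
ms_error = ss_error / (N - k)

partial_eta_sq = ss_treatment / (ss_treatment + ss_error)
partial_omega_sq = (ss_treatment - (k - 1) * ms_error) / (ss_treatment + (N - (k - 1)) * ms_error)

# Matches eta- and omega-squared here (about 0.46 and 0.35).
print(round(partial_eta_sq, 2), round(partial_omega_sq, 2))
```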
85
What is effect size for correlation?
r, or Pearson's correlation
86
What is the small, medium and large effect size for correlation?
r, which is .1, .3 and .5
87
What is effect size for ANOVA (first of 2)?
eta squared
88
What is the small, medium, and large effect size for ANOVA (first of 2)?
η²: 0.01, 0.06 and 0.14
89
What is effect size for ANOVA (second of 2)?
omega squared
90
What is the small, medium and large effect size for ANOVA (second of 2)?
omega squared, 0.01, 0.06, and 0.14
91
What is effect size for t-tests?
Cohen's d
92
What is the small, medium, and large effect sizes for t-tests?
Cohen's d 0.2, 0.5, 0.8
93
What is effect size for 2 x 2 tables?
odds ratios
94
What is the small, medium, and large effect sizes for 2 x 2 tables?
odds ratios, 1.5, 3.5, 9.0
95
What are Welch statistics?
when Levene's F test reveals that the HoV assumption is NOT met (i.e. p <= .05), Welch's F test should be used instead
96
How do you get Welch's F in SPSS?
Analyze - Compare Means - One-Way ANOVA - Options; use the F reported under "statistic a", along with the adjusted df and significance value. ALWAYS ROUND THE DFs TO WHOLE NUMBERS
97
There are many effect size measures that indicate the amount of total variance that is accounted for by the effect. What does no relationship look like?
the DV and A circles are completely disjoint (no overlap)
98
There are many effect size measures that indicate the amount of total variance that is accounted for by the effect. What does a small relationship look like?
the DV and A circles are just barely overlapping
99
There are many effect size measures that indicate the amount of total variance that is accounted for by the effect. What does a moderate relationship look like?
the DV and A circles overlap by about 1/4 of their surface area each
100
There are many effect size measures that indicate the amount of total variance that is accounted for by the effect. What does a strong relationship look like?
the DV and A circles overlap by more than 1/2 of their surface areas
101
What is the proportion of variance accounted for by the regression model?
R^2
102
Multiple R^2 is equal to
eta-squared
103
Adjusted R^2 is equal to
omega-squared
104
What is the formula for R^2?
SStreatment/SStotal (i.e. eta-squared)
105
Why would you select levels of the IV that are very different?
to increase the effect size and make the study more powerful
106
What can be made more liberal to create a more powerful study?
alpha level
107
What can you reduce for designing a powerful study?
error variability
108
What would you compute for the necessary amount for adequate power when designing powerful studies?
the sample size
109
How do you access effect size for 1-way ANOVA studies?
Analyze - Univariate; define the DV and the IV(s), check whether the factors are fixed or random, then click Options
110
What do you need to ensure before proceeding with the F value data?
that Levene's test of equality of error variances is non-significant, meaning that the variances across groups are homogeneous and we can assume the error variance of the DV is equal across groups
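The same check can be illustrated outside SPSS. A minimal Python sketch using scipy; the three score lists are hypothetical, for illustration only:

```
from scipy import stats

# Hypothetical scores for three groups (illustration only).
group1 = [2, 3, 1, 2, 3]
group2 = [3, 4, 2, 4, 3]
group3 = [5, 6, 4, 5, 5]

# Levene's test of equality of error variances: a non-significant p means
# the homogeneity-of-variance assumption is tenable.
lev_stat, lev_p = stats.levene(group1, group2, group3)

if lev_p > 0.05:
    # Safe to interpret the ordinary one-way ANOVA F.
    F, p = stats.f_oneway(group1, group2, group3)
    print(F, p)
else:
    print("HoV violated; use Welch's F instead")
```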
111
Why is ANOVA an omnibus test?
it tests for an overall difference between groups and tells us that the group means differ, yet sadly does not say where exactly the significant difference lies
112
What's the deal with post-hoc tests?
they are conducted after the ANOVA, making pairwise comparisons while controlling FW
113
When are post-hoc tests appropriate?
only when you are doing exploratory research (a.k.a. fishing for significance)
114
How many post-hoc tests are there for our interest?
5
115
What is the Bonferroni method?
a type of post-hoc test that controls the familywise alpha by dividing it across the comparisons (a' = a/c)
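A minimal sketch of the idea, assuming the 15 pairwise comparisons from the earlier k = 6 example:

```
# Bonferroni: divide the desired familywise alpha across the comparisons.
alpha_fw = 0.05
c = 15                                # e.g. all pairwise comparisons among k = 6 groups

alpha_per_comparison = alpha_fw / c   # each test is judged at this stricter level
fw_actual = 1 - (1 - alpha_per_comparison) ** c
print(round(alpha_per_comparison, 4), round(fw_actual, 3))   # 0.0033 and about 0.049
```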
116
What are the 5 posthoc tests to discern where the difference in the means lie in an ANOVA?
1) Bonferroni 2) Tukey's HSD 3) Dunnett's C 4) Scheffe's test 5) Fisher's LSD procedure
117
What are a priori comparisons?
planned comparisons before the data are collected usually with an idea of what to expect
118
What do planned comparisons almost never involve?
very many of the possible comparisons. It is a really bad idea to do pairwise t-tests among all pairs of means
119
If the comparisons are planned, then you test them without
any correction. Each F-test for the comparison is treated like any other F-test. You look up the F critical value in a table with dfcomparison and dferror
120
How do you do a priori comparisons with correction?
a t-test by using MSerror and tcritical at dferror
121
When do you use Bonferroni t-test (Dunn's test)?
for a priori comparisons with correction; it is a t-test using MSerror and tcritical at dferror
122
Which a priori comparisons require equal sample sizes?
Bonferroni and Dunn's tests
123
What is trend analysis?
if you have ordered groups, you often will want to know whether there is a consistent trend across the ordered group (e.g., linear trend)
124
How do you tell whether there is a consistent trend across the ordered groups?
e.g. by testing for a linear trend
125
When does trend analysis come in handy?
when you have ordered groups - orthogonal weights are already worked out depending on the number of groups
126
What does trend analysis depend on for the number of groups?
orthogonal trend weights, which are already worked out for each number of groups
127
When is trend analysis best done?
as a PLANNED comparison, although it can be done post-hoc
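A minimal sketch of a planned linear-trend contrast for three ordered groups. It reuses the earlier worked-example means and MSerror purely for illustration (treating those three groups as ordered is an assumption); the weights -1, 0, 1 are the standard orthogonal linear weights for k = 3:

```
# Linear trend contrast across three ordered group means.
group_means = [2.20, 3.20, 5.00]   # from the earlier worked example
weights = [-1, 0, 1]               # orthogonal linear-trend weights for k = 3
n = 5                              # participants per group
ms_error, df_error = 1.967, 12     # from the earlier worked example

L = sum(w * m for w, m in zip(weights, group_means))      # contrast value
ss_contrast = n * L ** 2 / sum(w ** 2 for w in weights)   # contrast has 1 df
F_trend = ss_contrast / ms_error                          # compare to F critical at (1, df_error)
print(round(F_trend, 2))
```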
128
Reporting ANOVA in APA format: In this study, a random sample of states from each of the census regions was taken and the average salary of clinical, counseling, and school psychologists was recorded. The mean salary in the Northeast region was $77,730 (SD = $1,030), $63,550 (SD = $930) in the Midwest, $61,370 (SD = $1,039) in the South and $68,830 (SD = $870) in the West. The analysis of variance showed a statistically significant overall effect of region, F(3, 120) = 3.52, p = .049, ω² = 0.32.
However, post-hoc testing showed that the only statistically significant difference was between the Northeast and the South -- the salaries for clinical, counseling, and school psychologists are statistically higher in the Northeast than in the South (Tukey's HSD, p < .05). The results of this study suggest that it might be worthwhile for a psychologist living in the South to move to the North, but there would be no advantage to moving to the West or Midwest. If this study is replicated, it would be advisable to take into account the cost of living, as higher salaries in a region may be mitigated by a higher cost of living.