Midterms Flashcards

(20 cards)

1
Q

What are the three approaches to ANOVA?

A
  1. Regression
  2. Comparison of ANOVA models
  3. Partitioning sum of squares
2
Q

What is a one-way between-subjects design?

A

Groups are independent from one another and vary on a single factor

3
Q

What are the assumptions of a one-way between-subjects ANOVA?

A
  1. Homogeneity of variance: robust as long as the largest group variance is no more than 4-5 times the smallest group variance and sample sizes are equal and not too small; if variances are too different, adjust the F cutoff score so that it is more stringent
  2. Normality: robust unless groups are heavily skewed in opposite directions, or skewed with small or unequal sample sizes
  3. Independent values between and within groups
  • When the assumptions hold, ANOVA is the most powerful omnibus test
4
Q

Cookbook for hypothesis testing

A
  1. Establish the null and alternative hypotheses
  2. Collect data
  3. Choose test statistic F with known distribution:
    * F = between-group variance / within-group variance
    (if the null is true the ratio should be close to 1)
    * F is always positive and its distribution is positively skewed
    * F doesn't indicate which means differ
    * The distribution varies with dfbetween and dfwithin
  4. Calculate the observed test statistic fo from the data
    * But first you must check normality, homogeneity of variance, and independence
    * When calculating F: (SSbetween/dfbetween) / (SSwithin/dfwithin) = MSbetween/MSwithin ~ F(dfbetween, dfwithin)
    * SStotal = SSbetween + SSwithin
    * SStotal = sum of (xi - grand mean)^2
    * SSbetween = sum of Nj * (mean of group j - grand mean)^2
    * SSwithin = sum of (xi - mean of group j)^2
    * dfbetween = # of groups - 1
    * dfwithin = total N - # of groups
  5. Calculate the probability p = P(F >= fo)
    * critical score = qf(0.95, dfbetween, dfwithin)
    * p-value = 1 - pf(fo, dfbetween, dfwithin)
  6. If p < alpha, reject the null
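The sums-of-squares steps above can be sketched in a few lines of code. This is a minimal pure-Python illustration with made-up data; the p-value step (R's qf/pf) is omitted because it needs an F-distribution routine.

```python
# Sketch of step 4 of the cookbook: partition the sums of squares for a
# one-way between-subjects ANOVA and form the observed F statistic.
# The scores below are made up for illustration.

def oneway_anova_f(groups):
    """Return (SSbetween, SSwithin, dfbetween, dfwithin, F) for a list of groups."""
    all_x = [x for g in groups for x in g]
    grand_mean = sum(all_x) / len(all_x)
    ss_total = sum((x - grand_mean) ** 2 for x in all_x)            # SStotal
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)                                # SSbetween
    ss_within = ss_total - ss_between        # SStotal = SSbetween + SSwithin
    df_between = len(groups) - 1             # # of groups - 1
    df_within = len(all_x) - len(groups)     # total N - # of groups
    f = (ss_between / df_between) / (ss_within / df_within)  # MSbetween / MSwithin
    return ss_between, ss_within, df_between, df_within, f

groups = [[4, 5, 6], [7, 8, 9], [1, 2, 3]]
ssb, ssw, dfb, dfw, f = oneway_anova_f(groups)
```

In R the remaining step would be `1 - pf(f, dfb, dfw)` for the p-value.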
5
Q

What are the advantages and disadvantages of partitioning of sums / regression / ANOVA model comparisons?

A

Partitioning of sums:
* Advantages: computational simplicity
* Disadvantages: not easily transferable to advanced models; need to come up with new formulas all the time, so poor conceptual integration; problematic for repeated-measures/unequal-n designs

Regression:
* Advantages: can use continuous and categorical variables as predictors
* Disadvantages: difficulty in dummy coding in factorial designs; hard to create contrasts; inflexible choice of error terms; does not generalize easily to repeated-measures designs

Model comparison:
* Advantages: can be generalized to advanced models; well suited for repeated-measures designs; same basic formula for all designs
* Disadvantages: not as familiar

6
Q

What are the components of GLM

A

observed value = baseline + sum of effects of allowed-for factors + sum of effects of all other factors (error)
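The GLM decomposition can be checked numerically. This sketch uses made-up scores and treats group membership as the only allowed-for factor, so everything left over is error.

```python
# Each observed score equals a baseline (grand mean), plus the effect of
# the allowed-for factor (its group mean minus the grand mean), plus
# leftover error. The data are made up for illustration.
groups = {"a": [4, 5, 6], "b": [7, 8, 9]}
all_x = [x for g in groups.values() for x in g]
baseline = sum(all_x) / len(all_x)           # grand mean
for name, g in groups.items():
    effect = sum(g) / len(g) - baseline      # allowed-for factor effect
    for x in g:
        error = x - baseline - effect        # what the model leaves unexplained
        # the decomposition reconstructs every observed value exactly
        assert abs(x - (baseline + effect + error)) < 1e-12
```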

7
Q

What’s the df for model comparisons?

A

# of independent observations - # of independent parameters

8
Q

How do you calculate F in model comparisons?

A

F = ((Er - Ef) / (dfR - dfF)) / (Ef / dfF)

cutoff = qf(0.95, dfR - dfF, dfF)

p-value = 1 - pf(F, dfR - dfF, dfF)
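A quick numeric sketch of the model-comparison F, with made-up Er/Ef values chosen to match a small one-way design so the result agrees with the classic ANOVA F.

```python
# Model-comparison F: how much the restricted (null) model's error Er
# exceeds the full model's error Ef, per df gained, relative to the full
# model's error per df. Numbers below are made up for illustration.
def model_comparison_f(er, ef, df_r, df_f):
    return ((er - ef) / (df_r - df_f)) / (ef / df_f)

# E.g. Er = SStotal = 60 with dfR = 8 (grand-mean-only model),
# Ef = SSwithin = 6 with dfF = 6 (group-means model).
f = model_comparison_f(er=60, ef=6, df_r=8, df_f=6)
```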

9
Q

In model comparison F what does the denominator represent?

A

weighted average of the within-group variances

10
Q

what determines the adequacy of a model?

A

How much error must increase before we consider the restricted model (null) significantly worse than the full model: F = ((Er - Ef) / (dfR - dfF)) / (Ef / dfF)

11
Q

What is Er - Ef equal to?

A

SSbetween
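This identity can be verified numerically. In the sketch below (made-up data), Er is the error of the restricted model (deviations from the grand mean, i.e. SStotal) and Ef the error of the full model (deviations from group means, i.e. SSwithin).

```python
# Numeric check that Er - Ef equals SSbetween, using made-up scores.
groups = [[4, 5, 6], [7, 8, 9]]
all_x = [x for g in groups for x in g]
grand = sum(all_x) / len(all_x)
er = sum((x - grand) ** 2 for x in all_x)                       # SStotal
ef = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g) # SSwithin
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
```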

12
Q

Why is it that you can get significant results with very small effect size if you have a large n?

A

A large n leads to very small standard error estimates (denominator) even if the difference between the means (numerator) is small

13
Q

what are ways to measure effect size

A

mean differences; estimated effect parameters; standardized difference between means; correlational measures

14
Q

what are the standardized differences between means

A

Cohen's d, Cohen's f

Both can be reported with confidence intervals

15
Q

Why is omega squared smaller than R^2?

A

R^2 tends to overestimate the strength of the effect, so omega squared corrects for it by making the numerator smaller and the denominator bigger
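A small numeric sketch of the correction, using one common formula for omega squared and the sums of squares from a made-up one-way design:

```python
# R^2 = SSbetween / SStotal, while one common omega-squared formula
# shrinks the numerator by df_between * MSwithin and grows the
# denominator by MSwithin, so omega^2 < R^2. Values are made up.
ss_between, ss_within, df_between, df_within = 54.0, 6.0, 2, 6
ss_total = ss_between + ss_within
ms_within = ss_within / df_within
r2 = ss_between / ss_total
omega2 = (ss_between - df_between * ms_within) / (ss_total + ms_within)
```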

16
Q

When testing for homogeneity of variance and the group with small variance has large n, what happens to F value and error rate?

A

The F value becomes inflated: the pooled estimate of the population variance is dominated by the large-n, small-variance group, so the denominator of the F formula becomes very small, which increases your chance of a Type I error.

17
Q

Ways to check normality and homogeneity of variance

A

Normality: Shapiro-Wilk test, Kolmogorov-Smirnov (K-S) test, skewness, kurtosis

Homogeneity of variance: Levene's test, O'Brien's test

18
Q

Alternatives to consider if ANOVA assumptions are violated

A

Transform the data, choose another analytical method that is more robust against assumption violations, use the median instead of the mean, remove outliers

19
Q

What are the four building blocks to an experiment

A

UTOS

units, treatments, observations/measures, settings

20
Q

List the four types of validity

A

Statistical conclusion validity (validity of inferences about correlations/covariation), construct validity (validity of inferences about higher-order constructs), internal validity (causality between A and B), external validity (validity of inferences about whether the cause-effect relationship holds over variations in contexts)