exam 3 Flashcards

1
Q

two-way ANOVA

A

we have two factors, we are interested in the effects of both factors on the same dependent variable, can have ANOVA with 3 or 4 factors

2
Q

conditions matrix

A

if we have 2 factors and each factor has 2 levels, we have 4 possible combinations of conditions; we call this a 2×2 factorial design
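A minimal Python sketch of this idea: crossing two 2-level factors yields the four cells of the conditions matrix (the factor names and levels are invented for illustration).

```python
# Build the conditions matrix for a 2x2 factorial design by crossing
# the levels of two hypothetical factors.
from itertools import product

factor_a = ["drug", "placebo"]       # levels of factor A (invented)
factor_b = ["morning", "evening"]    # levels of factor B (invented)

conditions = list(product(factor_a, factor_b))
print(len(conditions), conditions)   # 4 cells in the 2x2 design
```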

3
Q

three mean comparison

A

in a 2-factor design, we can test 3 hypotheses all at once: 1) does IV1 have an effect on the DV? 2) does IV2 have an effect on the DV? 3) do IV1 and IV2 interact to affect the DV?

4
Q

main effects

A

first F tests the main effect of factor A as if factor B wasn’t there, second F tests the main effect of factor B as if factor A wasn’t there

5
Q

interaction

A

the third F tests for an interaction, based on whether there is any more variance between groups that we haven’t already accounted for

6
Q

writing about interactions: what to report in two-way ANOVA

A

the same things you report in a one-way ANOVA: means, F statistic, df, p-value for the overall F, effect size (eta squared)

7
Q

two-way tables of means

A

report the means for each cell (each combination of the two factors) and the row/column means

  • put levels of factor A in rows and levels of factor B in columns
  • use an extra column on the left and an extra row on top for the factor names
  • put the SD for each condition in parentheses after the mean
8
Q

two-way ANOVA table

A

expand the ANOVA table to include SS, df, MS, F, and p for each of the three tests

  • divide into "between" and "within" first, then break down the between-groups variance by variable
  • add "*" to help the reader quickly see which p-values are significant
9
Q

Writing about two-way ANOVA table

A
  • report the results of each F test separately, but be clear about which effect you’re talking about
  • when you have a significant interaction, you need to explain the form of the interaction
  • then refer reader to tables and figures for further details
10
Q

connecting your hypotheses: two-way ANOVA table

A

as you report your results, link them to hypotheses so reader can keep track of whether you were right or not

11
Q

conventions for plotting interactions

A
  • an interaction involves at least 3 variables
  • DV/outcome always goes on vertical axis
  • IV/predictor goes on horizontal axis
  • in ANOVA, the IV/predictor is categorical
  • other IV/predictor variable is represented by different lines
12
Q

how to interpret interaction plots

A
  • do the lines cross? if yes, it's a disordinal interaction; if no, an ordinal interaction
  • are the lines separated? if yes, that's consistent with a main effect of IV2
  • do the lines slope? if the lines slope up or down, that's consistent with a main effect of IV1
13
Q

correlational research

A
  • comes in many forms (direct observation, surveys, analysis of existing data)
  • involves more stats than a simple correlation
14
Q

correlation vs t-test and ANOVA

A

t-tests and ANOVA are used to compare the means of different groups; correlation instead measures the relationship between continuous variables

15
Q

when should you use correlation?

A

correlation is useful:

  • any time the predictor variable is continuous
  • any time you have at least 1 continuous variable and no strong expectations about which one predicts the other
  • correlations are symmetrical; it doesn't matter which variable comes first
16
Q

when to use one-sample t-test?

A

when we know or can assume the pop mean but we don’t know the pop SD

17
Q

independent samples t-test

A

comparing two samples with different people in each sample

-also called a between participants design

18
Q

basic conceptual formula for any t-test

A

t= difference between means / estimated SE

19
Q

standard error

A

error in one sample: SM1 = √(S1² / n1)

20
Q

pooled standard error

A

if the two samples are not the same size, we need to account for that difference

  • pooled variance: Sp² = (SS1 + SS2) / (df1 + df2)
  • we use the pooled variance instead of the sample variances to calculate the SE
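Putting the t-test formula, standard error, and pooled variance together, a standard-library-only sketch (the two samples are made-up numbers):

```python
# Sketch of an independent-samples t with pooled variance.
import math

sample1 = [4.0, 5.0, 6.0, 7.0]   # invented scores for group 1
sample2 = [2.0, 3.0, 4.0]        # invented scores for group 2

def pooled_t(x, y):
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    ss1 = sum((v - m1) ** 2 for v in x)          # SS for sample 1
    ss2 = sum((v - m2) ** 2 for v in y)          # SS for sample 2
    sp2 = (ss1 + ss2) / ((n1 - 1) + (n2 - 1))    # pooled variance
    se = math.sqrt(sp2 / n1 + sp2 / n2)          # estimated standard error
    return (m1 - m2) / se                        # t = mean difference / SE

print(round(pooled_t(sample1, sample2), 3))
```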
21
Q

the pearson correlation

A
  • indicated by lower-case r for a sample (ρ, rho, for a population)
  • r = degree to which X and Y vary together / degree to which X and Y vary separately
  • more shared variability = stronger relationship = higher correlation
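The "vary together over vary separately" idea can be sketched directly from deviation scores (standard library only; the x and y values are invented):

```python
# Pearson's r built from deviations: covariation / separate variation.
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]

mx = sum(x) / len(x)
my = sum(y) / len(y)
dx = [v - mx for v in x]                       # deviations for X
dy = [v - my for v in y]                       # deviations for Y

together = sum(a * b for a, b in zip(dx, dy))  # how X and Y vary together
separate = math.sqrt(sum(a * a for a in dx) * sum(b * b for b in dy))

r = together / separate
print(round(r, 3))
```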
22
Q

calculating correlation

A

-we are interested in whether the deviations for one variable are similar to the deviations for the other

23
Q

simple linear regression

A

-one predictor= one outcome

24
Q

things we need to accomplish in a regression analysis

A
  • find a straight line that best fits the data (Y = bX + a): what are the best values for b and a?
  • evaluate how well that line fits the data
25
Q

interpreting coefficients (shape) in simple linear regression

A

b is our regression coefficient; the slope of our regression line

  • tells us how strong the relationship is between our predictor and our outcome
  • large/small b: a change in the predictor corresponds to a large/small change in the outcome
  • positive/negative b: an increase in the predictor corresponds to an increase/decrease in the outcome
  • a, the intercept, tells us what value we’d expect for the outcome if the value of the predictor was zero
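Finding the best values of b and a by least squares can be sketched with built-ins only (the data points are invented and fall exactly on y = 2x + 1, so the fit recovers that line):

```python
# Least-squares slope (b) and intercept (a) for Y = bX + a.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# b = sum of cross-products of deviations / sum of squared X deviations
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx        # the fitted line always passes through (mean X, mean Y)
print(b, a)  # -> 2.0 1.0
```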
26
Q

effect sizes for regression

A
  • we can calculate a standardized regression coefficient by transforming all of the raw scores to z-scores before we begin the analysis
  • written as β (beta), the "fancy b"
  • we also can find r^2 for our regression model
27
Q

multiple regression

A

-a regression analysis involving more than one predictor variable

28
Q

overall effect size for multiple regression

A
  • R^2 = the proportion of variance in the outcome explained by the set of predictors

- so a larger R^2 means we’ve done a better job explaining/predicting the outcome variable

29
Q

standard error of estimate

A
  • average error of prediction

- tells us how accurate our predictions are when we predict based on all the variables in the model

30
Q

writing about regression; what to report

A
  • include results of significance test: F stat, df (regression, residual)
  • coefficients in the regression equation
31
Q

ANCOVA

A
  • combines regression and ANOVA
  • test the effect of grouping variables after accounting for continuous variables
  • logic: "if I already suspect that Y is probably related to C, does X add anything to my understanding of Y beyond what I get from C?"
32
Q

ANCOVA process

A
  • start with a regression model predicting the DV from the covariates

- use ANOVA to understand residual variance

33
Q

interpreting ANCOVA

A

in ANCOVA, we get F statistics for:
-each main effect
-each covariate
-the interaction (if we have two factors)
we compare them to the appropriate critical values and/or compare the exact p-value for each to our α level

34
Q

writing about ANCOVA: what to report

A

when writing about ANCOVA, include results of all F tests

  • F stat
  • df (regression, residual OR between, within)
  • p-value for overall F
35
Q

effect size and shape: ANCOVA

A

to interpret shape, look at similar stats
-for your factors: means
-for your covariate: correlation and regression with the DV
for effect size, you can find partial eta squared for each effect

36
Q

discussion section

A
  • what you found
  • what you think it means
  • why you think it matters
  • what readers should keep in mind when interpreting your results
  • how your findings might be of practical value
  • what next steps are for future research
37
Q

contributions

A

before you dissect what could have been better about your study, talk about why it has value as is: why was this study worth doing, what does the field gain from this study?

38
Q

strengths

A

reasons reader should have confidence in your study

-internal and external validity

39
Q

limitations

A

-boundary conditions
-possible limitations to internal validity: being unable to control important individual differences, realizing after the fact that you had confounding variables, having too little statistical power or too much noise in the data
-possible limitations to external validity: having an unrepresentative sample, having measures too far removed from the real world, having manipulations that are too weak or don't work

40
Q

future directions

A

help reader see what comes next, what is next logical step in addressing your research question?

41
Q

post hoc analyses

A
  • extra analysis to rule out alternative explanations
  • NOT part of hypotheses
  • NOT same as post hoc that follows ANOVA
42
Q

why do post hoc analyses?

A

-helps you address “what if’s”

43
Q

risk of post hoc analyses

A
  • may not have exactly the data you want

- risk of capitalizing on chance

44
Q

moderating variables

A
  • moderation occurs when one variable changes the effect of another variable on an outcome
  • same thing as an interaction!
45
Q

testing moderation

A
  • look for interactions
  • remember there are three variables in an interaction or moderating relationship
  • moderator and the IV/predictor do NOT influence one another
46
Q

challenges in findings moderators

A
  • moderating effects are too small
  • choosing which variable is the focal variable vs the moderator is sometimes messy
  • interactions can be difficult to interpret
47
Q

mediating variables

A
  • if moderation is about WHEN one variable affects another, mediation is about HOW
  • "if variable M is causally situated between X and Y and accounts for their association (at least in part), we can say that M mediates the relationship"
  • think "media" = middle
48
Q

how mediation works

A

a mediating variable explains how we get from one variable to another
-“because…”
-“due to…”
-“by…”
like links in a chain, if we break the link from X to M, then Y doesn’t happen

49
Q

testing mediation

A

use path models or structural equations models

-this allows us to test the relationships among all of our variables at the same time

50
Q

path models and structural equations models

A
  • both are ways to visualize and test a set of hypotheses among multiple variables at the same time
  • solve a set of simultaneous regression equations
51
Q

path models

A
  • deal with observed variables- the data just as we collected it
  • we represent the observed variables with boxes
  • represent relationship with arrows (one-headed arrow= variable on the left causes or influences the variable on the right, two-headed= variables are correlated, but we don’t know direction)
52
Q

Structural equations model (SEM)

A
  • just like path models, but with latent variables rather than observed variables (observed= exactly what we saw/counted, latent= something we infer from the data)
  • we use latent variables when we aggregate items together to form an overall score
53
Q

SEM diagrams

A

-we use ovals to represent latent variables (author used a multi-item measure)

54
Q

evaluating a path or structural model

A
  • does the overall model fit the data well?
  • interpret the path coefficients (just like regression coefficients, they describe the strength of each path (arrow) in the model)
55
Q

meta-analysis

A

-you systematically review existing literature and then quantitatively combine the results

56
Q

advantages of meta-analysis

A
  • they can settle controversy
  • more likely to incorporate broader range of studies
  • allow us to get a more accurate effect
  • can also test for moderators
57
Q

concerns about meta-analysis

A
  • still not perfectly objective

- for some questions, we don't get neat, consistent relationships across studies

58
Q

reading a meta-analysis

A
  • what effect size are they using? (Cohen's d, r^2, and eta squared are all popular)
  • pay attention to CIs as well as the effect size estimate
  • the number of studies (k) is more important than the number of participants (n)
59
Q

eta squared

A

% of variance in the DV that is explained by one or more IVs in the context of ANOVA

60
Q

eta squared

A

tells us what proportion of the variance in our outcome is explained, or predicted, by our grouping variable

61
Q

r^2

A

proportion of variance in one continuous variable that is explained or predicted by ONE other continuous variable

62
Q

R^2

A

proportion of variance in one continuous variable that is explained or predicted by a SET of continuous variables

63
Q

sampling error

A

the natural discrepancy, or amount of error, between a sample statistic and its corresponding population parameter

64
Q

central limit theorem

A

for any population with mean μ and standard deviation σ, the distribution of sample means for samples of size n will have a mean of μ and a standard deviation of σ/√n, and will approach a normal distribution as n approaches infinity
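A small simulation makes the σ/√n part concrete (the population parameters and sample size are arbitrary choices for illustration):

```python
# Simulation sketch of the central limit theorem: the SD of many sample
# means should come out close to sigma / sqrt(n).
import random
import statistics

random.seed(0)
mu, sigma, n = 100.0, 15.0, 25   # invented population and sample size

sample_means = [
    statistics.mean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(2000)
]

expected_se = sigma / n ** 0.5              # sigma / sqrt(n) = 3.0
observed_se = statistics.stdev(sample_means)
print(round(expected_se, 2), round(observed_se, 2))
```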

65
Q

law of large numbers

A

the larger the sample size (n), the more probable it is that the sample mean will be close to the population mean

66
Q

standard error

A

average sampling error

67
Q

sampling error

A

the difference between one sample mean and the population mean

68
Q

interpreting z-scores

A

pay attention to

  • sign (positive = above the mean, zero = at the mean, negative = below the mean)
  • value: is it close to or far from zero?
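A one-liner makes the sign/value reading concrete (the score, mean, and SD are invented numbers):

```python
# Computing and reading a z-score: sign says above/below the mean,
# magnitude says how far from it.
score, mean, sd = 130.0, 100.0, 15.0
z = (score - mean) / sd
print(z)  # -> 2.0 (positive sign, two SDs above the mean)
```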
69
Q

p-value

A

the exact probability of obtaining a statistic as extreme as, or more extreme than, the one we observed, assuming the null hypothesis is true

70
Q

type 1 error

A

we are concluding there is an effect when there really is no effect; reject null when null is really true

71
Q

type 2 error

A

we are concluding there is no effect when really there is an effect; accept null when null is really false

72
Q

avoiding type 2 error

A

start in the design stage

  • pilot test your manipulations and measures
  • think carefully about study administration and potential confounds
  • recruit adequate sample
73
Q

power

A

the probability that we will correctly reject the null when the null is really false; the likelihood that we find an effect that is really there

74
Q

traditional framework for inference tests

A
  • state hypothesis
  • choose α level
  • based on α, find critical region and critical value for test stat
  • compute test stat
  • compare test stat to critical value
75
Q

modern framework for inference testing

A

w/ modern software, we can easily find the exact probability that we would obtain our exact test stat if the null were true

  • state hypothesis
  • choose α level
  • compute test stat (find exact probability of the stat you obtained)
  • compare exact p-value to α level (if p < α, reject H0)
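The final comparison step is a single conditional (the p-value here is a made-up number, not the result of a real test):

```python
# Sketch of the modern decision rule: compare the exact p-value to alpha.
alpha = 0.05
p_value = 0.031   # invented p-value for illustration

decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(decision)  # -> reject H0
```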
76
Q

t distribution

A

-the set of t-scores for all possible samples of a particular size (n)

77
Q

three variations on t-tests

A

-mean of one sample vs. a known pop mean (one sample)
-mean of one sample vs. the mean of a different sample (independent samples)
-mean of one sample vs. the mean of a sample that is matched or connected to the first in some way (paired or dependent samples)

78
Q

t

A

difference between means/ estimated SE

79
Q

one tailed test

A
  • if we make a directional prediction, we’re only right if the effect goes in the direction we predicted
  • if we’re interested in only one end of the distribution
80
Q

two-tailed test

A

if we’re interested in values at either end of the distribution

81
Q

one-tailed test and critical region

A
  • critical region is all at the same end of the distribution
  • we will only reject H0 when our sample mean differs from the pop mean in the direction we predicted
  • has more power than two-tailed
82
Q

two-tailed test and critical region

A
  • critical region is divided between two ends
  • we can reject H0 when the sample mean differs from the pop mean in either direction
  • has less power than one-tailed
83
Q

when would you use a one sample t-test?

A

when we know or can assume the population mean but we don’t know the population SD

84
Q

dependent samples t-test

A

-we have to account for lack of independence

85
Q

designs with dependent samples

A
  • repeated measures (or within-participants)

- matched participants (or matched pairs)

86
Q

confidence interval

A

a range of values centered around the sample statistic; the width of the interval gives us a certain degree of confidence that the pop parameter is inside that interval

87
Q

writing about t-tests: what to include

A
  • alternative hypothesis in conceptual terms
  • what kind of t-test you’re using
  • alpha level you’re using and whether you conducted a one or two-tailed test
  • df
  • exact t stat you calculated
  • p value for t stat
  • reject or don’t reject H0
  • effect size (Cohen's d)
88
Q

ANOVA

A

takes same logic as t-tests and extends it so we can compare two or more means

89
Q

ANOVA; between-treatments variance

A

-differences between group means

90
Q

ANOVA; within-treatments variance

A

-normal variability, regardless of grouping variable

91
Q

F

A

F = (systematic group differences + random, unsystematic variability) / (random, unsystematic variability)

92
Q

one-way ANOVA

A

ANOVA with just one factor (grouping variable)

93
Q

eta squared

A

percentage of variance in the DV that is accounted for by the IV
eta squared = SSbetween / SStotal
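The SSbetween / SStotal computation can be sketched for a one-way design with three invented groups of scores:

```python
# Eta squared = SS_between / SS_total for a one-way layout.
groups = [
    [2.0, 3.0, 4.0],   # invented scores, group 1
    [4.0, 5.0, 6.0],   # group 2
    [6.0, 7.0, 8.0],   # group 3
]

scores = [v for g in groups for v in g]
grand_mean = sum(scores) / len(scores)

ss_total = sum((v - grand_mean) ** 2 for v in scores)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

eta_squared = ss_between / ss_total
print(round(eta_squared, 3))
```

With these numbers, 80% of the variance in the scores is accounted for by group membership.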

94
Q

multiple comparisons problem

A
  • when the overall F stat is significant, we know that at least 2 of the means are different
  • but we don’t know yet whether only the extremes are different or if there are also differences among the means in the middle
95
Q

omnibus test

A

tests the equality of several means all at the same time

96
Q

post hoc tests

A

when we test all possible pairs of means for differences

-differences we didn’t specifically predict

97
Q

Tukey’s honestly significant difference

A
  • HSD = q × √(MSwithin / n), where q comes from the studentized range table
  • only works for equal sample sizes
  • easier to calculate than Scheffé
98
Q

Scheffé test

A
  • calculates and separates MSbetween for every pair of groups
  • to get a significant F with Scheffé, the difference between 2 groups has to be large enough to get an F that exceeds our critical value for the whole study
  • more conservative than Tukey (sets a strict standard for type 1 error)
99
Q

writing about ANOVA: what to report

A
  • F statistic
  • DF (between and within)
  • P-value for overall F
  • effect size (eta squared)
100
Q

writing about post hoc tests

A
  • only report these if overall F is significant
  • be clear which post hoc test you used
  • having a clear, organized table of means allows your reader to make sense of this at a glance
101
Q

three F tests: two way ANOVA

A
  • one for main effect of factor A
  • one for main effect of factor B
  • one for interaction
102
Q

interactions

A
  • occurs when the effect of one IV/predictor variable on the DV changes depending on the level of another IV/predictor variable
  • “it depends” findings
103
Q

calculating for repeated measures

A

in a repeated-measures design, we substitute MSerror for MSwithin in the F statistic
F = between-groups variance / within-groups variance = MSbetween / MSerror

104
Q

effect size

A

a measure of effect size is intended to provide a measurement of the absolute magnitude of a treatment effect, independent of the size of the sample being used

105
Q

effect size

A

tells us how big the effect is

106
Q

statistical significance

A

tells us that we can be confident that the effect we see, whatever its size, is real
-when we reject the null, we describe our findings as statistically significant; this means it is unlikely we would have observed our result if the null hypothesis were true