Exam 2 Flashcards

(69 cards)

1
Q

why would you use t instead of z?

A

z is a theoretical distribution that assumes the population standard deviation is known; in practice you only have the sample estimate, so you use t

2
Q

df

A

n-1

3
Q

how do you add more uncertainty?

A

replace the funky o (σ, the population standard deviation) with an s (the sample standard deviation) in the SEM equation
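
As a rough sketch of that substitution (assuming Python with numpy; the sample values are made up), the only change is swapping the population sigma for the sample s inside SEM = sd / sqrt(n):

import numpy as np

data = np.array([4.2, 5.1, 3.8, 4.9, 5.3, 4.4])   # hypothetical sample
n = len(data)

sigma = 1.0                       # population SD, if it were actually known (z case)
s = data.std(ddof=1)              # sample SD, the "s" that replaces sigma (t case)

sem_known = sigma / np.sqrt(n)    # SEM used with z
sem_estimated = s / np.sqrt(n)    # SEM with s swapped in; pair it with t to absorb the extra uncertainty
print(sem_known, sem_estimated)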

4
Q

what’s the other name for a t-distribution?

A

Student’s t-distribution

5
Q

how are t and z-distributions different?

A

t has more area in the tails to accommodate more uncertainty

6
Q

how are t and z-distributions alike?

A

both have normal, bell-shaped curves

7
Q

as df gets bigger how does that affect a t distribution?

A

looks more like a z distribution
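
A quick illustration of that convergence (a sketch assuming Python with scipy, which the deck itself doesn't mention): the two-sided 95% critical value from t shrinks toward the z value of about 1.96 as df grows.

from scipy import stats

for df in (5, 30, 100, 1000):
    # 97.5th percentile of t = two-sided 95% critical value for this df
    print(df, round(stats.t.ppf(0.975, df), 3))

print("z:", round(stats.norm.ppf(0.975), 3))   # the value the t cutoffs approach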

8
Q

how is a t table organized?

A

rows are df

columns are probabilities

9
Q

single samples

A

no control group, one group of people, used to establish norms

10
Q

paired samples

A

one group of people, but each person gets two different treatments

11
Q

independent t-test

A

2 groups with different treatments; doesn’t assume equal variance

12
Q

point estimator

A

difference between sample means (x̄1 - x̄2)

13
Q

what are the 2 ways to calculate degrees of freedom?

A

Welch method and conservative method

14
Q

f-test or Levene’s test

A

variance test to see if two samples have similar variances
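
A minimal sketch of running that check (assuming Python with scipy; the two samples are invented):

from scipy import stats

group1 = [12.1, 11.8, 13.0, 12.4, 11.9]   # hypothetical data
group2 = [10.2, 14.5, 9.8, 15.1, 12.0]

stat, p = stats.levene(group1, group2)    # H0: the two variances are equal
print(stat, p)                            # small p suggests unequal (heteroscedastic) variances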

15
Q

what do you do if two samples are similar?

A

use an equation with a combined (pooled) variance estimate, which gives more degrees of freedom

16
Q

what do you do if two samples aren’t similar?

A

use fewer degrees of freedom (Welch or the conservative method)
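
One way to see the two choices side by side (a sketch assuming Python with scipy; the data are made up): equal_var=True pools the variances and gives more degrees of freedom, equal_var=False is the Welch version with fewer effective degrees of freedom.

from scipy import stats

a = [5.1, 4.8, 5.5, 5.0, 4.9]
b = [6.2, 6.8, 5.9, 6.5, 6.1]

# similar variances: combined (pooled) variance estimate, more df
print(stats.ttest_ind(a, b, equal_var=True))

# variances not assumed equal: Welch's t-test, fewer df
print(stats.ttest_ind(a, b, equal_var=False))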

17
Q

ANOVA

A

one-way analysis of variance

18
Q

ANOVA definition

A

test group means for a significant difference

19
Q

2 components of ANOVA

A

variance between groups and variance within groups

20
Q

MSB

A

mean square between

21
Q

MSB definition

A

quantifies the variance of the group means around the grand mean (variance between groups)

22
Q

MSW

A

mean square within

23
Q

MSW definition

A

quantifies the variability of data points in a group around its mean (estimate of the variance within groups)

24
Q

f-statistic

A

ratio of MSB to MSW (MSB/MSW)
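
A small numeric check (assuming Python with scipy; the three groups are invented), since the F returned by a one-way ANOVA is exactly this MSB/MSW ratio:

from scipy import stats

g1 = [4, 5, 6, 5]
g2 = [7, 8, 6, 7]
g3 = [10, 9, 11, 10]

f_stat, p = stats.f_oneway(g1, g2, g3)   # F = MSB / MSW
print(f_stat, p)                         # a large F means between-group variance dominates within-group variance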

25
post hoc hypothesis
formal tests used to delineate which group means differ (pairwise comparisons after ANOVA)
26
2 methods of post hoc hypothesis
least squares difference (LSD) method and bonferroni method
27
LSD method
only used after a significant ANOVA test and for planned comparisons
28
Bonferroni's method
ensures that the family-wise error rate is less than or equal to alpha across all possible pair-wise comparisons
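
A sketch of the arithmetic behind that guarantee (plain Python; the group labels are hypothetical): with k groups there are k(k-1)/2 pair-wise comparisons, and each one is tested at alpha divided by that count.

from itertools import combinations

alpha = 0.05
groups = ["A", "B", "C", "D"]             # hypothetical group labels
pairs = list(combinations(groups, 2))     # all possible pair-wise comparisons
m = len(pairs)                            # 4 groups -> 6 comparisons

print(m, alpha / m)                       # test each pair at alpha / m, about 0.0083 here
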
29
homoscedastic
equal in variance
30
heteroscedastic
unequal in variance
31
three methods of assessing group variances
graphical exploration, summary statistics, and hypothesis tests of variance
32
scedastic
variance of a random variable
33
nonparametric tests
encompass a broad array of statistical techniques that don't rely on assumptions about the distribution of the data (e.g., normality)
34
rank tests
class of nonparametric tests that make fewer assumptions about distributional shape
35
Kruskal-Wallis test
nonparametric analogue of one-way ANOVA
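
A minimal usage sketch (assuming Python with scipy; the groups are invented):

from scipy import stats

g1 = [1.2, 3.4, 2.2, 5.1]
g2 = [4.4, 6.1, 5.5, 7.2]
g3 = [8.0, 9.3, 7.7, 10.1]

h, p = stats.kruskal(g1, g2, g3)   # rank-based analogue of one-way ANOVA
print(h, p)
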
36
family-wise error rate
probability of at least one false rejection of null hypothesis
37
how can you increase the alpha error
multiple tests
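
A back-of-the-envelope look at that inflation (plain Python), using the usual 1 - (1 - alpha)^m approximation for m independent tests:

alpha = 0.05
for m in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** m   # chance of at least one false rejection across m tests
    print(m, round(fwer, 3))      # 0.05, 0.226, 0.401, 0.642
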
38
when do you reject the null hypothesis
p < 0.05, or the confidence interval doesn't include the null value
39
what test do you use if you don't know the direction of the alternative hypothesis?
two-tailed
40
confounded correlation
looks like correlation but there's a 3rd thing that causes the correlation
41
regression
how much x explains y
42
LINE
linearity, independent observations, normality, equal variance
43
what's the slope if there's no correlation?
0
44
what does correlation only apply to?
linear relationships
45
what do you split the Y value into?
residual and predicted
46
explanatory variable (x)
independent variable, factor, treatment, exposure
47
response variable (y)
dependent variable, outcome, response, disease
48
correlation coefficient
r
49
least squares regression line
y=a+bx
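
A short sketch of fitting that line (assuming Python with numpy; x and y are made up), including the predicted + residual split from the earlier card:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b, a = np.polyfit(x, y, 1)   # least-squares slope b and intercept a
y_hat = a + b * x            # predicted part of each Y
resid = y - y_hat            # residual part (observed minus predicted)
print(a, b, resid)
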
50
simple regression
single explanatory variable (X) and response variable (Y)
51
multiple regression
multiple explanatory variables (X1, X2 etc) in relation to a response variable (Y)
52
k
number of explanatory variables
53
standardized coefficients
predicted change in Y, in standard deviations, per one standard-deviation increase in X
54
residual
difference between observed response and response predicted by regression model
55
why do we use multiple regression models?
helps to "adjust out" the effects of lurking variables
56
what types of variables does an ANOVA use?
categorical explanatory variable, quantitative response variable
57
does correlation mean causation?
hell nah
58
coefficient of determination
r^2
59
CoD
proportion of the variance in Y that is explained by X
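
A quick numeric check (assuming Python with numpy; the data are invented) that squaring r gives that proportion:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.9])

r = np.corrcoef(x, y)[0, 1]   # correlation coefficient r
print(r, r ** 2)              # r^2 = coefficient of determination
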
60
distance of point to the line
residual error
61
slope
change in y per unit of x
62
when do you create dummy variables?
when there are 3+ levels
63
how many dummy variables should there be?
number of levels - 1
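
A sketch of the levels-minus-one rule (assuming Python with pandas; the "dose" column is hypothetical):

import pandas as pd

df = pd.DataFrame({"dose": ["low", "medium", "high", "low", "high"]})

# 3 levels -> 2 dummy variables; drop_first leaves out the reference level
dummies = pd.get_dummies(df, columns=["dose"], drop_first=True)
print(dummies)
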
64
SEM
standard error of the mean (the standard deviation of the x-bars)
65
does the 95% CI get smaller as n increases?
ya
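
A quick sketch of why (assuming Python with numpy and scipy; the SD is made up): the 95% margin of error, t* times s/sqrt(n), shrinks as n grows.

import numpy as np
from scipy import stats

s = 10.0                                      # hypothetical sample SD
for n in (10, 50, 200, 1000):
    sem = s / np.sqrt(n)                      # standard error of the mean
    margin = stats.t.ppf(0.975, n - 1) * sem  # half-width of the 95% CI
    print(n, round(margin, 2))                # gets smaller as n increases
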
66
when do you use a two-tailed test?
when you don't know the direction of the alternative
67
when is it easier to reject the null?
when the variances are equal
68
when can you use a t-test?
when the data are approximately normal or n is large
69
family-wise error rate
probability of making at least one type 1 error across a family of tests