Stats and research Flashcards

1
Q

idiographic

A

single subject

2
Q

nomothetic

A

group studies

3
Q

AB design problem

A

threat of history - hard to attribute change in the target behavior to the intervention rather than to outside events

4
Q

ABAB design problems

A

protects against the threat of history, but:
behavior may fail to return to baseline in the second A (withdrawal) phase
ethical problems with withdrawing an effective intervention

5
Q

analogue research vs. clinical trial

A

analogue research - more control but less generalizability; clinical trials - more generalizable but involve methodological compromises

6
Q

cross sectional design problem

A

cohort effect

7
Q

longitudinal problems

A

expensive; high attrition rate

8
Q

cross-sequential research

A

combines longitudinal and cross-sectional approaches

9
Q

proportional sampling

A

subgroups are randomly sampled in proportion to their representation in the population

10
Q

systematic sampling

A

every nth person is selected after a random start

11
Q

cluster sampling

A

naturally occurring groups (clusters) are randomly selected, and everyone within the selected clusters is included

12
Q

Threats to internal validity

A

anything other than IV causing change in DV (history, maturation, test practice, instrumentation)

13
Q

Solomon four-group design

A

Corrects for test practice (pretest sensitization)
pretest/posttest vs. posttest only
intervention vs. no intervention

14
Q

Instrumentation

A

threat to internal validity
a change in the observer or measurement equipment produces an apparent change in the DV; correct with a control group

15
Q

Statistical regression/ regression to the mean

A

extreme scores get less extreme
manage with control group

16
Q

selection bias

A

groups differ at the outset because assignment was not random; correct with random assignment

17
Q

attrition/experimental mortality

A

a problem if dropout rates (or the kinds of dropouts) differ between groups
compare dropouts with completers using t-tests

18
Q

Diffusion

A

no-treatment group gets some treatment indirectly

19
Q

Experimenter expectancy / Rosenthal effect

A

experimenter transmits cues about the expected results; correct by keeping the experimenter blind to condition

20
Q

Demand characteristics

A

features of procedures that suggest expectations to participants

21
Q

John Henry effect/compensatory rivalry

A

control group works harder to compete with the experimental group; keep the control and experimental groups unaware of each other

22
Q

Threats to external validity

A

sample characteristics, stimulus characteristics, contextual characteristics

23
Q

Sample characteristics

A

differences between the sample and the population limit generalizability

24
Q

Stimulus characteristics

A

features of study associated with intervention

25
Q

Contextual characteristics

A

conditions in which the intervention is embedded; includes reactivity (Hawthorne effect)

26
Q

Threats to statistical conclusion validity

A

low power, unreliability of measures, variability in procedures, subject heterogeneity

27
Q

positive skew

A

peak to the left, tail extends to the right (toward high scores); mean > median

28
Q

neg skew

A

peak to the right, tail extends to the left (toward low scores); mean < median

29
Q

standard error of mean

A

the standard deviation of the sampling distribution of the mean - how far the means of all possible random samples deviate from the population mean
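Worked formula (standard definition, not stated on the card): SE_mean = SD / √N; e.g., SD = 15 and N = 25 give SE_mean = 15 / 5 = 3.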

30
Q

Central limit theorem

A

the sampling distribution of the mean approaches a normal distribution as sample size increases, regardless of the shape of the population; its mean equals the population mean and its SD is the standard error of the mean

31
Q

Rejection region / region of unlikely value

A

the area of the sampling distribution corresponding to alpha; a test statistic falling in this region leads to rejection of H0

31
Q

Type 1 error

A

mistakenly rejecting a true H0
probability equals alpha

32
Q

Type 2 error

A

mistakenly retaining a false H0
probability (beta) is inversely related to alpha
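Related relationship (standard, added for context): power = 1 - beta; lowering alpha (e.g., from .05 to .01) reduces Type I risk but increases beta and lowers power, all else being equal.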

33
Q

Choosing a statistical test based on the DV

A

nominal/ordinal DV - non-parametric test
interval/ratio DV - parametric test (t-test, ANOVA)
2+ DVs - MANOVA

34
Q

independent vs correlated data

A

independent if participants are randomly assigned or grouped on existing categories (e.g., gender)
correlated if the same subjects provide repeated measures, subjects are matched across groups, or there is an inherent relationship between the people in each group

35
Q

parametric test assumptions

A

interval or ratio data
homoscedasticity - similar variance between groups
normally distributed data

36
Q

chi square test assumptions

A

independent observations - no repeated measures

37
Q

Degrees of freedom

A

possible variations in outcome
single-variable chi-square: # of groups - 1
multiple-variable chi-square: (rows - 1)(columns - 1)
single-sample t-test: N - 1
matched/correlated t-test: # of pairs - 1
t-test for independent groups: N - 2
one-way ANOVA: total = N - 1, between-groups = # of groups - 1, within-groups = N - # of groups
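Worked examples (standard formulas applied; numbers are illustrative): a 3 x 4 contingency chi-square has df = (3 - 1)(4 - 1) = 6; a one-way ANOVA with 3 groups and N = 30 has df_between = 2, df_within = 27, df_total = 29.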

38
Q

post-hoc tests

A

Scheffé (most conservative), then Tukey, give the best protection against Type I error
Fisher's LSD gives the best protection against Type II error
(protection against one error type comes at the expense of the other)

39
Q

two-way anova benefit

A

allows analysis of interaction effects
F ratios for each IV and each interaction
interaction effects must be examined first

40
Q

MANOVA vs multiple ANOVAs

A

reduces Type I error (experimentwise error rate) relative to running a separate ANOVA for each DV

41
Q

coefficient of determination

A

the correlation coefficient squared; gives the proportion of variability in Y accounted for by X
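Worked example (arithmetic only): r = .70 gives r² = .49, so X accounts for 49% of the variability in Y; the remaining 51% (1 - r², the coefficient of non-determination) is unexplained.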

42
Q

Simple regression line of best fit

A

uses least squares criterion
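Standard formulas (not stated on the card): the line Ŷ = a + bX minimizes Σ(Y - Ŷ)²; in simple regression b = r(SD_Y / SD_X) and a = M_Y - b·M_X.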

43
Q

assumptions for bivariate correlations

A

linear relationship between X and Y, homoscedasticity, unrestricted range for X and Y

44
Q

eta

A

use when X and Y relationship is curvilinear

45
Q

Multicollinearity

A

multiple predictors in multiple regression are highly correlated with each other/redundant

46
Q

stepwise regression

A

computer-driven selection of predictors
forward - add one predictor at a time until additional predictors produce no meaningful increase in R²
backward - remove one predictor at a time, weakest first, stopping when removal would meaningfully lower R²
goal: the fewest possible predictors
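A minimal sketch of forward stepwise selection (illustrative only; the function names and the R²-gain threshold are assumptions, and real analyses would rely on a statistics package and cross-validation):

    import numpy as np

    def r_squared(X, y):
        # R^2 from an ordinary least-squares fit with an intercept term
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid.var() / y.var()

    def forward_stepwise(X, y, min_gain=0.01):
        # add the predictor that most improves R^2 until the gain is negligible
        selected, remaining, best_r2 = [], list(range(X.shape[1])), 0.0
        while remaining:
            r2, j = max((r_squared(X[:, selected + [j]], y), j) for j in remaining)
            if r2 - best_r2 < min_gain:
                break
            selected.append(j)
            remaining.remove(j)
            best_r2 = r2
        return selected, best_r2

Backward selection works the same way in reverse: start with all predictors and drop the one whose removal costs the least R², stopping when any further removal would cost more than the threshold.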

47
Q

hierarchical regression

A

the researcher enters predictors by hand in an order specified by a theoretical model

48
Q

Canonical R

A

correlation between 2+ IVs and 2+ DVs (predictor set and criterion set)

49
Q

Discriminant function analysis

A

2+ IV, 1 nominal DV

50
Q

Loglinear/Logit analysis

A

2+ categorical IV, 1 categorical DV

51
Q

Path analysis

A

tests a hypothesized causal model
uses a series of multiple regressions to test the model

52
Q

structural equation modeling

A

inferences about causality
e.g., LISREL (linear structural relations)

53
Q

Factor analysis

A

reduce several variables into fewer factors
1st factor always strongest

54
Q

eigenvalue/characteristic root

A

indicates a factor's strength (variance accounted for)
eigenvalues greater than 1 are typically treated as significant (Kaiser criterion)
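Worked example (standard computation; loadings are hypothetical): an eigenvalue is the sum of the squared loadings of every variable on the factor; loadings of .80, .70, and .60 give .64 + .49 + .36 = 1.49, which exceeds 1 and would be retained.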

55
Q

correlation matrix

A

table of intercorrelations among tests/items

56
Q

orthogonal rotation

A

produces factors that are uncorrelated with each other; easy to interpret
communality - a variable's factor loadings squared and added together
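Worked example (loadings are hypothetical): a variable loading .60 on Factor 1 and .30 on Factor 2 has communality h² = .36 + .09 = .45, i.e., 45% of its variance is explained by the factors.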

57
Q

oblique rotations

A

produces factors that are correlated with each other, as constructs often are in the real world

58
Q

principal components analysis

A

used when there is no empirical/theoretical guide for communality values
produces uncorrelated factors called components
the 1st component accounts for the most variance

59
Q

principal factor analysis

A

communality values are estimated before the analysis (based on theory or prior data)

60
Q

Cluster analysis

A

looks for naturally occurring groups of DVs
No a priori hypothesis

61
Q

minimum acceptable reliability is

A

0.8

62
Q

Spearman-Brown prophesy formula

A

estimates how much more reliable a test would be if it were lengthened with comparable items
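Formula (standard, not stated on the card): r_new = (k · r_old) / (1 + (k - 1) · r_old), where k = new length / original length; e.g., doubling (k = 2) a test with r = .60 gives 1.20 / 1.60 = .75.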

63
Q

on timed tests, preferred reliability measure is

A

alternate (parallel) forms, then test-retest (split-half methods are inappropriate for speeded tests)

64
Q

power test

A

items of varying difficulty with a generous time limit (contrast with speed tests: easy items under time pressure)

65
Q

Kuder-Richardson

A

split-half reliability for dichotomous response formats
KR 20 when varying difficulty
KR 21 when equal difficulty
error due to content sampling, test heterogeneity
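Formula (standard KR-20, added for reference): KR-20 = [k / (k - 1)] · [1 - Σpq / SD²_total], where k = number of items, p = proportion passing each item, and q = 1 - p.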

66
Q

Cronbach's alpha (coefficient alpha)

A

index of internal consistency reliability for tests with non-dichotomous (e.g., Likert-type) items; a generalization of KR-20

67
Q

Interrater reliability

A

Pearson’s r, % agreement, Yule’s Y, kappa statistic

68
Q

standard error of measurement

A

the SD of the hypothetical distribution of scores a person would obtain if tested hundreds of times
an index of reliability; used to build confidence intervals around obtained scores
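Formula (standard, not stated on the card): SE_meas = SD_x · √(1 - r_xx); e.g., SD = 15 and r_xx = .89 give SE_meas ≈ 15 × .33 ≈ 5, so a 68% confidence interval is roughly the obtained score ± 5.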

69
Q

standard error of estimate

A

an index of validity
the SD of the hypothetical distribution of criterion scores a person would obtain if tested hundreds of times; reflects error in predicting the criterion from the predictor
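Formula (standard, not stated on the card): SE_est = SD_y · √(1 - r²_xy); e.g., SD_y = 10 and r_xy = .60 give SE_est = 10 × .80 = 8.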

70
Q

Item characteristic curve

A

graph relating performance on an item to the total test score (or, in IRT, to the level of the underlying trait)

71
Q

Item response theory

A

models the extent to which responses to a specific item reflect the underlying (latent) construct

72
Q

cross validation

A

administer the test (or apply the regression weights) to a new sample
the criterion-related validity coefficient typically shrinks (shrinkage)

73
Q

correction for attenuation

A

estimates how much higher the validity coefficient would be if the predictor and criterion were perfectly reliable
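Formula (standard, not stated on the card): r_corrected = r_xy / √(r_xx · r_yy); e.g., r_xy = .40 with r_xx = r_yy = .80 gives .40 / .80 = .50.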

74
Q

criterion contamination

A

occurs with subjectively rated criteria when the rater knows subjects' predictor scores before assigning criterion ratings

75
Q

ANCOVA

A

extraneous (covariate) variables are statistically partialled out of the DV
used to test for group differences, not to identify underlying factors

76
Q

Spearman's rho / Kendall's tau

A

corr between ordinal variables

77
Q

pearson r

A

interval/ratio

78
Q

point biserial

A

corr between true dichotomy and interval/ratio

79
Q

biserial

A

corr between artificial dichotomy and interval/ratio

80
Q

phi

A

corr between two true dichotomies

81
Q

tetrachoric

A

corr between two artificial dichotomies

82
Q

Taylor Russel tables

A

used to estimate a predictor's incremental validity; incremental validity is greatest when the base rate is moderate and the selection ratio is low

83
Q

slope bias

A

occurs when there is differential validity - i.e., when the validity coefficients for a predictor (e.g., a cognitive ability test) differ for different groups. Consequently, the predictor is more accurate for one group than for another.

84
Q

intercept bias

A

Intercept bias (or unfairness) occurs when the validity coefficients and criterion performance for different groups are the same, but their mean scores on the predictor differ. As a result, the predictor consistently over- or under-predicts performance on the criterion for members of one of the groups.
