Critiquing research findings Flashcards
descriptive statistics
describe/synthesize data about the sample and the study variables
- frequency distribution
- measures of central tendency (mode/mean/median)
- measures of dispersion (range/variance/SD)
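A minimal Python sketch of these descriptive measures, using the standard statistics module and made-up sample data:

```python
import statistics

# Hypothetical sample data (e.g., pain scores from 8 participants)
scores = [3, 5, 5, 6, 7, 7, 7, 9]

print(statistics.mode(scores))      # mode: most frequent value -> 7
print(statistics.median(scores))    # median: middle value -> 6.5
print(statistics.mean(scores))      # mean: arithmetic average -> 6.125
print(max(scores) - min(scores))    # range: highest minus lowest -> 6
print(statistics.variance(scores))  # sample variance
print(statistics.stdev(scores))     # sample standard deviation (SD)
```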
inferential statistics
make inferences about population based on sample data
test hypotheses
answer research questions
parametric statistics
a class of statistical tests that involve assumptions about the distribution of the variables and the estimation of a parameter
data are NORMALLY DISTRIBUTED
nonparametric statistics
a class of statistical tests that do NOT involve stringent assumptions about the distribution of critical variables
data are NOT NORMALLY DISTRIBUTED
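One common way to check the normality assumption before choosing between parametric and nonparametric tests is a Shapiro-Wilk test; a small sketch, assuming SciPy is available and using simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=10, size=40)  # hypothetical, roughly normal sample

stat, p = stats.shapiro(data)   # Shapiro-Wilk test of normality
# p > 0.05: no evidence against normality -> parametric tests are reasonable
# p < 0.05: data depart from normality -> consider a nonparametric alternative
print(f"W = {stat:.3f}, p = {p:.3f}")
```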
null hypothesis
rejected if the relationship is statistically significant (p < 0.05)
not rejected (researchers "fail to reject" rather than accept it) if the relationship is NOT statistically significant (p = 0.05 or greater)
p-value
probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true
typically, p < 0.05 is treated as statistically significant
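A sketch of how a p-value is obtained in practice, here from an independent-samples t-test with SciPy (the group scores are hypothetical):

```python
from scipy import stats

# Hypothetical outcome scores for two groups
treatment = [72, 75, 78, 80, 74, 77]
control   = [68, 70, 73, 69, 71, 72]

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")
# If p < 0.05, the null hypothesis of no group difference is rejected
```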
confidence interval (CI)
range of values within which a population parameter is estimated to lie, at a specified probability (eg 95% CI)
confidence limit
upper/lower boundary of a CI
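A minimal sketch of a 95% CI for a mean, assuming SciPy and made-up data; the two values returned are the lower and upper confidence limits:

```python
import numpy as np
from scipy import stats

data = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.5, 13.2, 12.0])
mean = data.mean()
sem = stats.sem(data)   # standard error of the mean

# 95% CI for the mean, based on the t distribution
lower, upper = stats.t.interval(0.95, df=len(data) - 1, loc=mean, scale=sem)
print(f"95% CI: ({lower:.2f}, {upper:.2f})")   # lower/upper confidence limits
```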
correlation statistics
- indicate direction and magnitude of relationship between 2 variables
- used with ordinal/interval/ratio measures
- can be shown graphically (scatter plot)
- correlation coefficient can be computed
- with multiple variables, a correlation matrix can be displayed
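A small sketch of a correlation matrix for several variables, using NumPy and hypothetical interval-level measures:

```python
import numpy as np

# Hypothetical interval-level measures for 5 subjects
age    = [25, 32, 40, 51, 60]
bp     = [118, 121, 130, 138, 145]
weight = [60, 72, 80, 77, 85]

# Each row/column of the matrix corresponds to age, bp, weight (Pearson correlations)
matrix = np.corrcoef([age, bp, weight])
print(matrix.round(2))
```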
bivariate correlation
2 variables
- Pearson’s r, a parametric test (lowercase “r” indicates a correlation b/w 2 variables)
- tests that the relationship b/w 2 variables is not zero
- used when measures are on an interval/ratio scale (continuous level data)
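A sketch of Pearson's r with SciPy, using made-up continuous data:

```python
from scipy import stats

# Hypothetical interval-level measures
hours_exercise = [1, 2, 3, 4, 5, 6]
resting_hr     = [78, 74, 72, 70, 66, 64]

r, p = stats.pearsonr(hours_exercise, resting_hr)
print(f"r = {r:.2f}, p = {p:.4f}")
# r is negative here: more exercise is associated with a lower resting heart rate
```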
strengths of relationships
weak: 0.00–0.30 (+ or -)
moderate: 0.30–0.50 (+ or -)
strong: >0.50 (+ or -)
nonparametric alternatives to bivariate correlation analysis
- Spearman’s rank-order correlation coefficient: measures association b/w ordinal-level variables
- Kendall’s tau: measures association b/w ordinal-level variables
- Cramer’s V: measures association b/w nominal-level variables
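A sketch of these nonparametric alternatives, assuming SciPy; the ranks and the contingency table are hypothetical, and Cramer's V is computed by hand from the chi-square statistic:

```python
import numpy as np
from scipy import stats

# Ordinal-level ranks (hypothetical)
pain_rank    = [1, 2, 3, 4, 5, 6]
anxiety_rank = [2, 1, 4, 3, 6, 5]

rho, p_rho = stats.spearmanr(pain_rank, anxiety_rank)   # Spearman's rank-order correlation
tau, p_tau = stats.kendalltau(pain_rank, anxiety_rank)  # Kendall's tau

# Cramer's V for two nominal variables, from a 2x3 contingency table
table = np.array([[10, 20, 15],
                  [25, 12, 18]])
chi2 = stats.chi2_contingency(table)[0]
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(rho, tau, v)
```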
factor analysis
- examines interrelationships among large #s of variables to reduce them to a smaller set of variables
- IDs clusters of variables that are most closely linked together
- typically used to assist with validity of a new measurement method or scale
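A rough sketch of the idea using scikit-learn's FactorAnalysis (an assumption that this library is acceptable here); the item responses are randomly generated, so the loadings are illustrative only:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
# Hypothetical responses: 100 subjects x 6 scale items
items = rng.normal(size=(100, 6))

fa = FactorAnalysis(n_components=2)   # look for 2 underlying factors
fa.fit(items)
print(fa.components_.round(2))        # loadings: how strongly each item links to each factor
```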
simple linear regression
- provides a means to estimate the value of a dependent (outcome) variable based on the value of an independent variable (predictor)
- outcome variable is continuous (interval/ratio-level data)
- predictor variables are continuous or dichotomous (dummy variables)
- change in Y given a one unit change in X
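A sketch of simple linear regression with SciPy's linregress, using hypothetical predictor and outcome values:

```python
from scipy import stats

# Hypothetical predictor (X) and continuous outcome (Y)
study_hours = [1, 2, 3, 4, 5, 6, 7, 8]
exam_score  = [52, 55, 61, 64, 68, 71, 75, 80]

result = stats.linregress(study_hours, exam_score)
print(f"slope = {result.slope:.2f}")        # change in Y per one-unit change in X
print(f"intercept = {result.intercept:.2f}")
print(f"r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")
```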
multiple linear regression
- predicts a dependent variable based on 2+ independent variables (predictor)
- dependent variable is continuous (interval/ratio-level data)
- predictor variables are continuous or dichotomous (dummy variables)
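A sketch of multiple linear regression with statsmodels, assuming a continuous predictor and a dichotomous (dummy) predictor; all values are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: blood pressure predicted from age (continuous) and smoker (dummy 0/1)
age    = np.array([25, 32, 40, 51, 60, 45, 38, 55])
smoker = np.array([0, 1, 0, 1, 1, 0, 0, 1])
bp     = np.array([118, 128, 125, 142, 150, 130, 124, 146])

X = sm.add_constant(np.column_stack([age, smoker]))  # add intercept term
model = sm.OLS(bp, X).fit()
print(model.params)       # intercept and coefficients for age, smoker
print(model.rsquared)     # proportion of variance in bp explained (R^2)
```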
simultaneous multiple regression
enters all predictor variables into regression equation at the same time
stepwise multiple regression
enters predictors in a series of empirically determined steps, in the order that produces the greatest increment to R^2
hierarchical multiple regression
enters predictors into equation in a series of steps, controlled by researcher
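A sketch of the hierarchical approach, assuming statsmodels and simulated data: control variables are entered in step 1, the variable of interest in step 2, and the increment in R^2 is inspected:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 50
age, income = rng.normal(size=n), rng.normal(size=n)          # block 1 (control variables)
intervention = rng.integers(0, 2, size=n)                     # block 2 (variable of interest)
outcome = 2 * intervention + 0.5 * age + rng.normal(size=n)   # hypothetical outcome

step1 = sm.OLS(outcome, sm.add_constant(np.column_stack([age, income]))).fit()
step2 = sm.OLS(outcome, sm.add_constant(np.column_stack([age, income, intervention]))).fit()

# The increment in R^2 from step 1 to step 2 is the variance uniquely
# explained by the block the researcher entered last
print(step1.rsquared, step2.rsquared, step2.rsquared - step1.rsquared)
```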
multiple correlation coefficient (R)
- is the correlation index for a dependent variable and 2+ independent variables
- does not have negative values: shows strength of relationship, but not direction
- can be squared (R^2) to estimate proportion of variability in dependent variable accounted for by independent variables
- can't be less than the highest bivariate correlation b/w the dependent variable and any single independent variable
power analysis
- method of reducing risk of Type II errors and estimating their occurrence
- if power = .80, risk of Type II error is 20%
- estimates how large a sample is needed to reliably test hypotheses
4 components:
- significance criterion (α)
- sample size (N)
- population effect size (γ): magnitude of relationship b/w research variables
- power: probability of obtaining a significant result (1 - β)
- generally need 20-30 subjects per independent variable
- <10 subjects per independent variable leads to serious error
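A sketch of a power analysis using statsmodels' power module (an assumption that this tool is acceptable here), solving for the sample size needed per group in an independent-samples t-test with a hypothetical medium effect size:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# alpha = significance criterion, power = 1 - beta, effect_size = hypothesized effect
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))   # roughly 64 per group for these inputs
```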
odds
based on probabilities
probability of occurrence / probability of nonoccurrence
odds ratio (OR)
ratio of odds for the treated vs. untreated group, with the odds reflecting the proportion of people with the adverse outcome relative to those without it
dichotomous outcome variable
mean of the dependent (outcome) variable will lie between 0 and 1
- OR = 1: probability of the event is the same for both groups
- OR > 1: probability of the event is higher among exposed subjects
- OR < 1: probability of the event is lower among exposed subjects
(slides have more interpretations)
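A minimal sketch of odds and an odds ratio computed from a hypothetical 2x2 table (exposure by adverse outcome):

```python
# Hypothetical 2x2 table: rows = exposed/unexposed, columns = adverse outcome yes/no
exposed_yes, exposed_no = 20, 80
unexposed_yes, unexposed_no = 10, 90

odds_exposed = exposed_yes / exposed_no        # probability of occurrence / nonoccurrence
odds_unexposed = unexposed_yes / unexposed_no

odds_ratio = odds_exposed / odds_unexposed
print(odds_ratio)   # 2.25 here: odds of the outcome are higher in the exposed group (OR > 1)
```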
commonly used parametric stats
Student's t-test
analysis of variance (ANOVA)
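A sketch of both tests with SciPy, using made-up group scores: the t-test compares two group means, and one-way ANOVA compares three or more:

```python
from scipy import stats

group_a = [5.1, 4.8, 5.6, 5.0, 4.9]
group_b = [5.9, 6.1, 5.7, 6.3, 6.0]
group_c = [6.8, 7.0, 6.5, 7.2, 6.9]

t, p_t = stats.ttest_ind(group_a, group_b)          # Student's t-test: 2 group means
f, p_f = stats.f_oneway(group_a, group_b, group_c)  # one-way ANOVA: 3+ group means
print(p_t, p_f)
```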