Statistics: Inferential Stats Concepts and Terms Flashcards
Inferential Statistics: overview
Descriptive stats = summarize data
Inferential Stats = make inferences about a population based on a sample drawn from it
Central Limit Theorem
The sampling distribution of the mean approaches a normal curve as sample size increases
The mean of the sampling distribution = pop mean
SD of the sampling distribution = standard error of the mean (SEM = σ/√n)
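A quick way to see this in action: a minimal simulation sketch, assuming NumPy (the skewed exponential population and sample sizes are illustrative choices, not from the cards):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # strongly skewed population

for n in (5, 30, 200):
    # Draw 10,000 samples of size n and record each sample's mean
    sample_means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    sem_theory = population.std() / np.sqrt(n)  # SEM = sigma / sqrt(n)
    print(f"n={n:>3}  SD of sample means={sample_means.std():.3f}  "
          f"theoretical SEM={sem_theory:.3f}")
```

As n grows, the histogram of sample_means looks increasingly normal even though the population is skewed, and its SD tracks σ/√n.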
Type I Error (α)
Rejection of a true null hypothesis
Research erroneously shows significant effects
Type II Error (β)
Retention of a false null hypothesis
Research misses real effects
Power (1-β)
Probability of rejecting a false null hypothesis
Parametric v Nonparametric Tests:
Measurement Scales
Parametric Tests: Interval or Ratio Scales
Non-Parametric Tests: Nominal or Ordinal Scales
Parametric v Nonparametric Tests:
Commonalities and Differences
Both assume random selection and independent observations
Parametric tests (e.g., t-test, ANOVA) evaluate hypotheses about population means, variances, or other parameters
Nonparametric tests (e.g., chi-square, Mann-Whitney U) evaluate hypotheses about the shape of a distribution (see the sketch below)
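To make the contrast concrete, a minimal SciPy sketch (made-up scores) running a parametric t-test next to its nonparametric counterpart, the Mann-Whitney U test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(50, 10, size=30)  # interval-scale scores, group A
group_b = rng.normal(55, 10, size=30)  # interval-scale scores, group B

# Parametric: t-test evaluates a hypothesis about population means
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Nonparametric counterpart: Mann-Whitney U works on ranks (ordinal information)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test: t={t_stat:.2f}, p={t_p:.4f}")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={u_p:.4f}")
```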
Parametric Tests:
Assumptions
Normal Distribution
Homoscedasticity
Homoscedasticity
Assumption that the variances of the populations the groups represent are approximately equal
[For studies with more than one group]
One-way ANOVA vs Factorial ANOVA vs MANOVA
One-way ANOVA: ONE IV, ONE DV
Factorial ANOVA: two-way = 2 IVs, three-way = 3 IVs
MANOVA: used whenever there is more than one DV
(MULTIvariate analysis)
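A minimal one-way ANOVA sketch with SciPy (toy scores; one IV with three levels, one DV). Factorial ANOVA and MANOVA go beyond scipy.stats and would typically use a package such as statsmodels:

```python
from scipy import stats

# One IV (treatment, three levels), one DV (score)
g1 = [4, 5, 6, 5, 4]
g2 = [6, 7, 8, 7, 6]
g3 = [8, 9, 10, 9, 8]

f_stat, p_val = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```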
Effect Size:
What is it?
Name two types
Measure of the practical or clinical significance of statistically significant results
Cohen’s d
Eta squared (η²)
Cohen’s d
Effect size in terms of SD (d = 1.0 = a 1 SD difference)
d = (M₁ − M₂) / pooled SD
Small effect size = 0.2
Medium effect size = 0.5
Large effect size = 0.8
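A minimal sketch of the pooled-SD formula (NumPy; the two groups are invented scores):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent groups, using the pooled SD."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

treatment = [12, 14, 15, 13, 16, 14]
control = [10, 11, 12, 10, 13, 11]
print(f"d = {cohens_d(treatment, control):.2f}")  # well past 0.8: a large effect
```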
Eta squared (η²)
Effect size in terms of the proportion of variance accounted for by the treatment
η² = SS_between / SS_total
*Variance = σ², so think: squared Greek letter = variance
Bivariate correlation assumptions
Linearity
Unrestricted range of scores on both variables
Homoscedasticity
Bivariate correlation “language” (X, Y)
X = predictor variable
Y = criterion variable
Simple Regression Analysis
Allows predictions to be made with one predictor (X) and one criterion (Y)
Prediction equation: Ŷ = a + bX
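A minimal sketch with scipy.stats.linregress (the hours/score data are invented):

```python
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6]     # X, predictor
exam_score = [55, 60, 64, 70, 73, 80]  # Y, criterion

result = stats.linregress(hours_studied, exam_score)
print(f"Y-hat = {result.intercept:.1f} + {result.slope:.1f}X  (r = {result.rvalue:.2f})")

# Use the fitted line to predict Y for a new X
x_new = 4.5
print(f"predicted score at X={x_new}: {result.intercept + result.slope * x_new:.1f}")
```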
F ratio calculation
MSB/MSW
Mean square between divided by mean square within
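The same toy scores as in the one-way ANOVA sketch above, worked by hand (NumPy only) so the MSB/MSW structure is visible; it also prints η² from the effect-size card:

```python
import numpy as np

groups = [np.array([4, 5, 6, 5, 4], float),
          np.array([6, 7, 8, 7, 6], float),
          np.array([8, 9, 10, 9, 8], float)]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Between-groups and within-groups sums of squares
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

msb = ss_between / (len(groups) - 1)               # mean square between
msw = ss_within / (len(all_scores) - len(groups))  # mean square within

print(f"F = MSB/MSW = {msb / msw:.2f}")
print(f"eta squared = SSB/SST = {ss_between / (ss_between + ss_within):.2f}")
```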
F ratio range
F is always zero or positive (it is a ratio of two variances); under a true null hypothesis F is expected to be near 1
Larger F ratio = increased likelihood of statistical significance
Statistical Power definition
Probability that a statistical test will reject a false null hypothesis (1 − β)
Rejecting a false null = showing statistical significance
Ways to Increase Statistical Power
Increase alpha from .01 to .05
Increase sample size
Increase the effects of the IV
Minimize error
Use one-tailed test when appropriate
Use parametric test
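A minimal simulation sketch (NumPy + SciPy; effect size, alpha, and sample sizes are invented) that estimates power directly as the fraction of simulated experiments with a real effect that reach significance, showing two of the levers above:

```python
import numpy as np
from scipy import stats

def simulated_power(n, effect_size=0.5, alpha=0.05, trials=3000):
    """Fraction of simulated two-group experiments that reject a false null."""
    rng = np.random.default_rng(0)
    rejections = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)          # control group
        b = rng.normal(effect_size, 1.0, n)  # treatment group: a real effect exists
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / trials

for n in (20, 50, 100):
    print(f"n={n}: power = {simulated_power(n):.2f}")  # power rises with n
print(f"n=50, alpha=.01: power = {simulated_power(50, alpha=0.01):.2f}")  # lower alpha, lower power
```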
Effects of increasing alpha from .01 to .05
Greater likelihood of rejecting the null hypothesis
*Greater likelihood of a Type I error
Effects of decreasing alpha from .05 to .01
Decreased statistical power
However, increased confidence that statistically significant results are correct
Nonparametric tests and data distribution
Nonparametric tests evaluate hypotheses only about the shape of a distribution
NOT its mean, variance, or other parameters
Two factors that determine the critical value for statistical significance
alpha (e.g. .05)
degrees of freedom
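Both factors feed directly into a critical-value lookup; a minimal SciPy sketch (the alpha and df values are illustrative):

```python
from scipy import stats

alpha, df = 0.05, 20

# Two-tailed t critical value: alpha is split between both tails
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(f"t critical (two-tailed, alpha={alpha}, df={df}): +/-{t_crit:.3f}")

# F critical value for df_between=2, df_within=12 (matches the ANOVA sketch above)
f_crit = stats.f.ppf(1 - alpha, dfn=2, dfd=12)
print(f"F critical (alpha={alpha}): {f_crit:.3f}")
```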