15 - Hypothesis Testing, Significance Testing, Confidence Intervals and Power Flashcards
(22 cards)
What is statistical inference?
Using a sample to draw conclusions about a population.
Define a confidence interval.
A range of values, calculated from a sample (typically centred on the sample mean), within which the population mean is expected to lie with a stated level of confidence.
What is the formula for a confidence interval for the population mean?
𝑥̅ ± (𝑧 or 𝑡 critical value) × (standard error)
What is the standard error formula when using the Normal (z) distribution?
σ / √n
What is the standard error formula when using the student-t (t) distribution?
s / √n (where s is the unbiased estimator of the population standard deviation)
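A minimal Python sketch of the interval formula above, using scipy, under two assumptions: the sample values and the "known" σ are made-up illustrative numbers, and the z interval applies when σ is known while the t interval applies when σ is estimated by s.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of n = 10 measurements (made-up data for illustration).
x = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0, 5.4, 4.6])
n = len(x)
x_bar = x.mean()
conf = 0.95

# Case 1: population standard deviation sigma assumed known -> z interval.
sigma = 0.5                               # assumed known value (illustrative)
z = stats.norm.ppf(1 - (1 - conf) / 2)    # two-tailed critical z
se_z = sigma / np.sqrt(n)                 # standard error: sigma / sqrt(n)
ci_z = (x_bar - z * se_z, x_bar + z * se_z)

# Case 2: sigma unknown -> estimate it with s (ddof=1) and use the t distribution.
s = x.std(ddof=1)                         # unbiased estimator s
t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)   # n - 1 degrees of freedom
se_t = s / np.sqrt(n)                     # standard error: s / sqrt(n)
ci_t = (x_bar - t * se_t, x_bar + t * se_t)

print("z interval:", ci_z)
print("t interval:", ci_t)
```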
Define degrees of freedom for a student-t distribution.
The number of observations minus 1 (n-1).
What is a point estimate?
A single value that is the best estimate of a population parameter (e.g., the sample mean as an estimate of the population mean).
What is the formula for the unbiased estimator, s, of the population standard deviation?
s² = Σ(xᵢ − 𝑥̅)² / (n − 1); equivalently, s² = [n / (n − 1)] × (variance calculated with divisor n).
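A quick numerical check of the n / (n − 1) correction, using a made-up sample; in numpy the unbiased estimator corresponds to `ddof=1`.

```python
import numpy as np

# Made-up sample for illustration.
x = np.array([2.0, 3.5, 4.1, 3.8, 2.9, 3.3])
n = len(x)

var_n = x.var(ddof=0)   # variance with divisor n
s_sq = x.var(ddof=1)    # variance with divisor n - 1 (unbiased estimator s²)

# The correction factor relating the two: s² = [n / (n - 1)] * var_n
print(s_sq, var_n * n / (n - 1))   # both print the same value
```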
What are the factors that influence the width of a confidence interval?
Sample size, standard deviation, and confidence level.
How does increasing the sample size affect the width of a confidence interval?
It decreases the width.
How does increasing the confidence level (e.g., from 95% to 99%) affect the width of a confidence interval?
It increases the width.
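A short sketch illustrating the two cards above, assuming a z-based interval with an illustrative known σ: the width shrinks like 1/√n as the sample grows, and grows with the confidence level because the critical value increases.

```python
import numpy as np
from scipy import stats

sigma = 10.0   # assumed known population standard deviation (illustrative value)

def z_interval_width(n, conf):
    """Width of a z-based confidence interval: 2 * z * sigma / sqrt(n)."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return 2 * z * sigma / np.sqrt(n)

# Larger sample -> narrower interval.
for n in (25, 100, 400):
    print(n, round(z_interval_width(n, 0.95), 2))

# Higher confidence level -> wider interval.
for conf in (0.90, 0.95, 0.99):
    print(conf, round(z_interval_width(100, conf), 2))
```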
Define the acceptance region in hypothesis testing, in the context of confidence intervals.
The range of values corresponding to the confidence interval; if the sample mean falls within this region, we fail to reject the null hypothesis.
Define the critical region in hypothesis testing, in the context of confidence intervals.
The range of values outside the acceptance region; if the sample mean falls in this region, we reject the null hypothesis.
Define a Type I error.
Rejecting the null hypothesis when it is actually true.
Define a Type II error.
Failing to reject the null hypothesis when it is actually false.
Define the size of a hypothesis test.
P(Type I error) = the (actual) significance level of the test.
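A hedged Monte Carlo check of the card above, assuming a two-tailed z-test with normal data, known σ, and illustrative parameter values: when the null hypothesis is true, the proportion of rejections should be close to the significance level α.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, sigma, n, alpha = 50.0, 5.0, 30, 0.05   # illustrative values
z_crit = stats.norm.ppf(1 - alpha / 2)       # two-tailed critical value

rejections = 0
trials = 20_000
for _ in range(trials):
    x = rng.normal(mu0, sigma, n)            # data generated with H0 true
    z = (x.mean() - mu0) / (sigma / np.sqrt(n))
    rejections += abs(z) > z_crit

print(rejections / trials)   # close to alpha = 0.05
```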
Define the power of a hypothesis test.
1 − P(Type II error) = the probability of rejecting the null hypothesis when it is actually false.
How do you calculate the power of a hypothesis test?
Power = 1 − P(accept H0 | the true parameter value specified by the alternative hypothesis).
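A minimal sketch of that calculation for an assumed one-tailed z-test (H0: μ = 50 vs H1: μ > 50, σ known, illustrative values): find the boundary of the acceptance region under H0, then compute the probability of landing beyond it when the true mean is the alternative value.

```python
import numpy as np
from scipy import stats

# Illustrative one-tailed z-test: H0: mu = 50 vs H1: mu > 50, sigma known.
mu0, mu1, sigma, n, alpha = 50.0, 52.0, 5.0, 25, 0.05

# Boundary of the acceptance region under H0: reject when x_bar exceeds this cut-off.
cutoff = mu0 + stats.norm.ppf(1 - alpha) * sigma / np.sqrt(n)

# Power = 1 - P(accept H0 | mu = mu1) = P(x_bar > cutoff | mu = mu1).
power = 1 - stats.norm.cdf(cutoff, loc=mu1, scale=sigma / np.sqrt(n))
print(round(power, 3))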
What is the relationship between Type I and Type II errors?
They are inversely related; decreasing the probability of one often increases the probability of the other.
What is the effect of increasing sample size on the power of a test?
Increasing the sample size increases the power of the test.
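Reusing the same illustrative scenario as the power sketch above, a short loop shows the effect numerically: as n grows, the standard error shrinks and the power rises.

```python
import numpy as np
from scipy import stats

mu0, mu1, sigma, alpha = 50.0, 52.0, 5.0, 0.05   # same illustrative values as above

for n in (10, 25, 50, 100):
    cutoff = mu0 + stats.norm.ppf(1 - alpha) * sigma / np.sqrt(n)
    power = 1 - stats.norm.cdf(cutoff, loc=mu1, scale=sigma / np.sqrt(n))
    print(n, round(power, 3))   # power increases as n grows
```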
What is a p-value?
A probability, ranging from 0 to 1, of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true.
What is a critical value?
A preset threshold value of the test statistic that determines whether or not the null hypothesis is rejected; it marks the boundary of the critical region.
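A small sketch tying the last two cards together, assuming the same one-tailed z-test scenario with made-up numbers: the p-value approach and the critical-value approach always give the same decision, since p < α exactly when the test statistic exceeds the critical value.

```python
import numpy as np
from scipy import stats

# Illustrative one-tailed z-test: H0: mu = 50 vs H1: mu > 50, sigma known.
mu0, sigma, n, alpha = 50.0, 5.0, 25, 0.05
x_bar = 51.8                               # hypothetical observed sample mean

z = (x_bar - mu0) / (sigma / np.sqrt(n))   # observed test statistic
p_value = stats.norm.sf(z)                 # P(result at least this extreme | H0 true)
z_crit = stats.norm.ppf(1 - alpha)         # critical value at the 5% level

print(round(z, 3), round(p_value, 4), round(z_crit, 3))
print("reject H0" if p_value < alpha else "fail to reject H0")
```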