Lecture 2 Flashcards

Hypothesis testing and its implications

1
Q

What parameters define a normal distribution?
1. Standard deviation and z-test
2. Median and mean
3. Mean and standard deviation
4. Mean and z-test

A

Mean and standard deviation

2
Q

What’s the shape of a normal distribution?
1. Linear
2. Bell
3. Horizontal
4. Upside-down bell

A

Bell

3
Q

What does the z-score reflect?
1. The probability of an event occurring in a normal distribution.
2. The distance of a data point from the mean in standard deviation units.
3. The frequency of a particular value in a dataset.
4. The range of values in a sample.

A

The distance of a data point from the mean in standard deviation units.

In other words, the number of standard deviations a particular score lies above or below the mean.
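As a quick illustration with hypothetical numbers, the z-score is simply (score − mean) / standard deviation:

```python
def z_score(x, mean, sd):
    # Number of standard deviations that x lies above (+) or below (-) the mean.
    return (x - mean) / sd

# Hypothetical IQ-style scale: mean 100, standard deviation 15.
print(z_score(130, 100, 15))  # 2.0 -> two standard deviations above the mean
print(z_score(85, 100, 15))   # -1.0 -> one standard deviation below the mean
```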

4
Q

What does the standard error of the mean (SEM) represent?
1. The average value in the sample.
2. The variability of individual data points.
3. The precision of the sample mean estimate.
4. The range of values in the population.

A

The precision of the sample mean estimate.

5
Q

If the standard error of the mean (SEM) is large, what does it suggest about the sample?
1. The sample mean is likely accurate.
2. The sample size is small.
3. The sample is highly variable.
4. The standard deviation is small.

A

The sample is highly variable.

6
Q

How is the standard error of the mean (SEM) affected by an increase in sample size?

  1. Increases
  2. Decreases
  3. Remains constant
  4. Becomes negative
A

Decreases

7
Q

If two samples have the same standard deviation but different sample sizes, how does their standard error of the mean (SEM) compare?

  1. The one with the larger sample size has a smaller SEM.
  2. The one with the smaller sample size has a smaller SEM.
  3. Both have the same SEM.
  4. The SEM is unrelated to sample size.
A

The one with the larger sample size has a smaller SEM.
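This follows directly from the formula SEM = SD / √n. A minimal sketch with made-up values:

```python
import math

def sem(sd, n):
    # Standard error of the mean: sample SD divided by the square root of n.
    return sd / math.sqrt(n)

# Same standard deviation (10), different sample sizes (hypothetical).
print(sem(10, 25))   # 2.0
print(sem(10, 100))  # 1.0 -> the larger sample has the smaller SEM
```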

8
Q

When constructing a 99% confidence interval for the mean, how would the width of the interval change compared to a 95% confidence interval?

  1. The 99% interval will be wider.
  2. The 99% interval will be narrower.
  3. The widths will be the same.
  4. It depends on the sample size.
A

The 99% interval will be wider.

A higher confidence level requires a wider interval: to be more confident of capturing the true mean, the interval must cover a wider range of values.
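To see this numerically (hypothetical sample values; 1.96 and 2.576 are the standard normal critical values for 95% and 99% confidence):

```python
import math

def margin_of_error(sd, n, z_crit):
    # Half-width of a z-based confidence interval for the mean.
    return z_crit * sd / math.sqrt(n)

sd, n = 12.0, 36                       # hypothetical sample SD and size
m95 = margin_of_error(sd, n, 1.96)     # 95% interval
m99 = margin_of_error(sd, n, 2.576)    # 99% interval
print(m95, m99)  # the 99% interval's margin is wider
```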

9
Q

If the standard deviation of a sample increases, what happens to the width of the 95% confidence interval for the mean?

  1. The interval becomes narrower.
  2. The interval becomes wider.
  3. The interval remains unchanged.
  4. The width depends on the sample size.
A

The interval becomes wider.

A larger standard deviation increases the uncertainty, leading to a wider confidence interval.

10
Q

How does a smaller sample size affect the width of a confidence interval?

  1. The interval becomes wider.
  2. The interval becomes narrower.
  3. The width remains the same.
  4. It depends on the confidence level.
A

The interval becomes wider.

Smaller sample sizes lead to less precision, resulting in wider confidence intervals.

11
Q

You have two 95% confidence intervals, one for the mean of Sample A and one for the mean of Sample B. If the intervals do not overlap, what can you conclude?

  1. The means of Sample A and Sample B are significantly different.
  2. The sample sizes for A and B are different.
  3. Both samples come from the same population.
  4. The confidence level is too low.
A

The means of Sample A and Sample B are significantly different.

If the intervals do not overlap, it means that the range of values for one sample’s mean does not include the mean of the other sample, and vice versa.

This suggests that the means are likely to be significantly different, as the ranges of values do not overlap. In statistical terms, there is evidence to reject the null hypothesis that the population means are equal.

12
Q

A researcher is interested in estimating the average height of two different groups of plants, Group X and Group Y. After collecting the necessary data, the researcher constructs 95% confidence intervals for the average heights. The confidence interval for Group X is wider than the confidence interval for Group Y. What does this difference in width suggest about the precision of the height estimates?

  1. The height estimates for both groups are equally precise.
  2. The height estimate for Group X is more precise than for Group Y.
  3. The height estimate for Group Y is more precise than for Group X.
  4. The precision of height estimates cannot be determined without additional information.
A

The height estimate for Group Y is more precise than for Group X.

The width of a confidence interval is inversely related to precision. A narrower interval indicates greater precision, while a wider interval suggests lower precision. In this case, since the confidence interval for Group Y is narrower than that for Group X, it implies that the height estimate for Group Y is more precise than for Group X. This question tests the understanding that the width of the confidence interval reflects the degree of precision in the estimate.

13
Q

A researcher is investigating the average scores of two groups of students on a challenging exam. After collecting data, the researcher constructs 95% confidence intervals for both groups. If the researcher wants to increase the precision of the confidence intervals without changing the confidence level, what strategy could be employed?

  1. Increase the sample size for both groups.
  2. Choose a lower confidence level.
  3. Use a different formula for calculating the margin of error.
  4. Select a smaller critical value from the standard normal distribution.
A

Increase the sample size for both groups.

By increasing the sample size (n), the standard error in the formula for the margin of error decreases. As a result, the margin of error becomes smaller, leading to a more precise confidence interval. This strategy is commonly used to improve the precision of estimates without changing the confidence level.

14
Q

In Null Hypothesis Significance Testing, if the p-value is extremely small (close to 0), what conclusion can be drawn regarding the fit of the model to the data?

  1. The model does not fit the data well, and the null hypothesis is rejected.
  2. The model fits the data well, and the null hypothesis is accepted.
  3. The model fits the data well, and the alternative hypothesis is rejected.
  4. The model does not fit the data well, and the alternative hypothesis is rejected.
A

The model does not fit the data well, and the null hypothesis is rejected.

A very small p-value suggests that the observed data is unlikely under the assumption that the null hypothesis is true, leading to the rejection of the null hypothesis.

15
Q

If a researcher selects a significance level of 0.01, what does this mean in the context of hypothesis testing?

  1. There is a 1% chance of Type I error.
  2. There is a 1% chance of Type II error.
  3. The probability of obtaining a significant result is 0.01.
  4. The null hypothesis will be accepted 99% of the time.
A

There is a 1% chance of Type I error.

The significance level (0.01) represents the probability of making a Type I error, rejecting the null hypothesis when it is true.

16
Q

What does it mean if the probability value (p-value) is greater than the chosen significance level?

  1. The null hypothesis is rejected.
  2. The null hypothesis is not rejected.
  3. The alternative hypothesis is accepted.
  4. The alternative hypothesis is rejected.
A

The null hypothesis is not rejected.

If the p-value is greater than the significance level, there is not enough evidence to reject the null hypothesis.

17
Q

What mistake are we making if we believe there is a statistically significant effect, but in reality, there isn’t?

  1. Type I error
  2. Type II error
  3. Fisher’s error
  4. Variance illusion error
A

Type I error

Type I error occurs when we mistakenly believe there is a significant effect (reject the null hypothesis) when, in reality, there isn’t.

18
Q

According to Fisher’s criterion, what is the “acceptable” level of Type II error?

  1. p < 0.05
  2. p = 0.2
  3. α-level = 0.05
  4. β-level = 0.2
A

p = 0.2
OR
β-level = 0.2

Type II error occurs when we conclude there is no effect in the population when, in reality, there is. The conventionally acceptable level of Type II error is a β-level of 0.2 (equivalently, 80% power).

19
Q

What distinguishes Bayesian estimation from frequentist statistics?
1. Bayesian estimation provides point estimates, while frequentist statistics offer probability distributions.
2. Bayesian estimation incorporates prior knowledge and beliefs, unlike frequentist statistics.
3. Bayesian estimation relies on p-values for hypothesis testing, whereas frequentist statistics use posterior probabilities.
4. Bayesian estimation exclusively deals with meta-analysis, while frequentist statistics focus on effect sizes.

A

Bayesian estimation incorporates prior knowledge and beliefs, unlike frequentist statistics.

Bayesian estimation involves updating prior beliefs with observed data, allowing researchers to incorporate existing knowledge into their analyses.

20
Q

One of the key benefits of incorporating meta-analysis, as highlighted by EMBeRS, is:

  1. Reducing the need for registration in research studies.
  2. Enhancing the reliability of overall effect size estimates.
  3. Focusing exclusively on p-values in hypothesis testing.
  4. Increasing the likelihood of Type I errors.
A

Enhancing the reliability of overall effect size estimates.

Meta-analysis combines results from multiple studies, providing a more robust and reliable estimate of the overall effect size.

21
Q

If the power of a statistical test is 0.90, what does this indicate?
1. There is a 90% chance of making a Type I error.
2. There is a 10% chance of making a Type II error.
3. The test has a 90% chance of detecting a true effect if it exists.
4. The p-value is 0.90.

A

The test has a 90% chance of detecting a true effect if it exists.

Power is the probability of correctly detecting a true effect.

22
Q

How does increasing the sample size affect the power of a statistical test?
1. Increases power.
2. Decreases power.
3. Has no effect on power.
4. Makes the test more sensitive to Type I errors.

A

Increases power.

A larger sample size generally increases the power of a statistical test.
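One way to see this is by simulation. The sketch below assumes a simple two-sample z-test with a known SD of 1 and a hypothetical true effect of 0.5 SD; it is an illustration, not a production power analysis:

```python
import math
import random

def simulated_power(n, effect=0.5, z_crit=1.96, sims=2000, seed=0):
    # Fraction of simulated experiments in which a true mean difference of
    # `effect` (in SD units) is detected by a two-sample z-test at alpha = .05.
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect, 1.0) for _ in range(n)]
        diff = sum(b) / n - sum(a) / n
        se = math.sqrt(2.0 / n)          # SE of the difference, known SD = 1
        if abs(diff / se) > z_crit:
            hits += 1
    return hits / sims

print(simulated_power(20))   # modest power with n = 20 per group
print(simulated_power(80))   # clearly higher power with n = 80 per group
```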

23
Q

In the context of hypothesis testing, what is the primary purpose of the alpha-level?

  1. To control for Type II errors.
  2. To determine effect size.
  3. To set the threshold for statistical significance.
  4. To calculate power.
A

To set the threshold for statistical significance.

The alpha-level determines when we consider an effect statistically significant.

24
Q

What role does effect size play in the power of a statistical test?

  1. Larger effect sizes decrease power.
  2. Smaller effect sizes increase power.
  3. Effect size has no impact on power.
  4. Larger effect sizes increase power.
A

Larger effect sizes increase power.

A larger effect size makes it easier to detect a true effect, thus increasing power.

25
Q

In the context of hypothesis testing, what does the power of a statistical test depend on the most?

  1. Sample size (N)
  2. Alpha-level (α)
  3. Beta-level (β)
  4. Type I error
A

Sample size (N)

Power is heavily influenced by the sample size. A larger sample size increases the chances of detecting a true effect.

26
Q

Why is effect size an important factor in determining the power of a statistical test?

  1. It directly affects the sample size
  2. It influences the likelihood of Type II errors
  3. It determines the level of statistical significance
  4. It is unrelated to statistical power
A

It influences the likelihood of Type II errors

A larger effect size increases power by making true effects more noticeable.

27
Q

If a study reports a narrow confidence interval for a treatment effect and the associated p-value is less than 0.05, what can be inferred about the power of the hypothesis test?

  1. Power is high.
  2. Power is low.
  3. Power and confidence intervals are unrelated.
  4. The study has a Type II error.
A

Power is high.

A narrow confidence interval indicates precise estimation, and a significant p-value shows the effect was detected; together these are hallmarks of a high-powered test.

28
Q

If two groups have confidence intervals for their means that do not overlap, what can be reasonably inferred about the power of the statistical test?

  1. The test has high power.
  2. The test has low power.
  3. Power cannot be determined from confidence intervals.
  4. The test has a wide margin of error.
A

The test has high power.

Non-overlapping confidence intervals suggest a significant difference, indicating high power.

29
Q

A researcher calculates a 95% confidence interval for the difference in means between two groups. If the confidence interval is very wide and includes zero, what does this suggest about the power of the statistical test?

  1. Power is high.
  2. Power is low.
  3. No relationship with power.
  4. Power cannot be determined from the information given.
A

Power is low.

A wide confidence interval, including zero, suggests lower precision in estimating the true difference, indicating lower power in the statistical test.

30
Q

In a hypothetical study with a small sample size, the researcher calculates a narrow 99% confidence interval for a population mean, but the statistical test fails to reject the null hypothesis. What complex interplay of factors might explain this seemingly paradoxical result?

  1. The study has high power, but the effect size is small.
  2. The study has low power, and the effect size is large.
  3. The sample size is too small to yield precise estimates.
  4. The alpha-level is set too leniently, compromising the test’s ability to detect true effects
A

The study has high power, but the effect size is small.

The narrow confidence interval indicates high precision, suggesting high power. However, the failure to reject the null hypothesis implies that the observed effect size, though precise, might be too small to reach statistical significance.

31
Q

What does a p-value less than 0.05 typically indicate in hypothesis testing?
1. The null hypothesis is definitely false.
2. The probability of replication in a future experiment.
3. The results are due to random chance.
4. The results are surprising if the null hypothesis is true.

A

The results are surprising if the null hypothesis is true.

A p-value less than 0.05 suggests that the observed results are surprising if the null hypothesis is true, leading to a rejection of the null hypothesis.
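This interpretation can be made concrete with a simulation, using a hypothetical coin example: how often would a fair coin produce a result at least as extreme as the one observed?

```python
import random

def simulated_p_value(observed_heads, flips=100, sims=5000, seed=1):
    # Two-sided p-value under the null 'the coin is fair': the proportion of
    # simulated fair-coin experiments at least as far from 50/50 as observed.
    rng = random.Random(seed)
    expected = flips / 2
    observed_dev = abs(observed_heads - expected)
    extreme = 0
    for _ in range(sims):
        heads = sum(rng.random() < 0.5 for _ in range(flips))
        if abs(heads - expected) >= observed_dev:
            extreme += 1
    return extreme / sims

print(simulated_p_value(65))  # small: 65/100 heads is surprising under a fair coin
print(simulated_p_value(52))  # large: 52/100 heads is unremarkable
```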

32
Q

What is the main purpose of a p-value in hypothesis testing?
1. To determine the likelihood of the null hypothesis being true.
2. To quantify the effect size in the population.
3. To assess the probability of obtaining observed results if the null hypothesis is true.
4. To confirm the alternative hypothesis.

A

To assess the probability of obtaining observed results if the null hypothesis is true.

33
Q

What does it mean when researchers say they “fail to reject” the null hypothesis?
1. The null hypothesis is proven true.
2. The null hypothesis is rejected.
3. The results are not surprising.
4. The alternative hypothesis is accepted.

A

The results are not surprising.

“Failing to reject” the null hypothesis means the observed results are not surprising or not statistically significant.

34
Q

If a study has a very small p-value, what conclusion can be drawn about the strength of the evidence against the null hypothesis?

  1. Strong evidence against the null hypothesis.
  2. Weak evidence against the null hypothesis.
  3. No evidence against the null hypothesis.
  4. The null hypothesis is proven true.
A

Strong evidence against the null hypothesis.

A small p-value suggests strong evidence against the null hypothesis, supporting the rejection of the null hypothesis.

35
Q

When researchers set the significance level (alpha) to 0.01 instead of 0.05, how does this affect the interpretation of p-values?

  1. P-values become less relevant.
  2. P-values become more conservative.
  3. P-values become more liberal.
  4. P-values are not affected by the significance level.
A

P-values become more conservative.

When the significance level (alpha) is set to a lower value (e.g., 0.01), it makes the criteria for declaring statistical significance more stringent or conservative.

36
Q

What is the purpose of a placebo group in a clinical trial when interpreting p-values?

  1. To ensure statistical significance.
  2. To control for participant characteristics.
  3. To demonstrate the power of the alternative hypothesis.
  4. To differentiate between the null and alternative hypotheses.
A

To control for participant characteristics.

A placebo group controls for expectation (placebo) effects and other confounding influences, so that any observed difference can be attributed to the treatment itself.

37
Q

In a hypothesis test, if the p-value is 0.08, and the researcher decides to reject the null hypothesis, what type of error might occur?

  1. Type I error.
  2. Type II error.
  3. No error; the decision is correct.
  4. Sampling error.
A

Type I error.

Rejecting the null hypothesis when it is actually true is a Type I error; with a p-value of 0.08 (above the conventional 0.05 threshold), choosing to reject anyway runs this risk.

38
Q

When conducting a hypothesis test, what does it mean if the p-value is exactly equal to the chosen significance level (alpha)?

  1. The null hypothesis is rejected.
  2. The results are highly significant.
  3. The observed results are at the border of statistical significance.
  4. The alternative hypothesis is proven true.
A

The observed results are at the border of statistical significance.

It suggests that the observed results are right on the border of what is considered statistically significant.

39
Q

In a research study, the p-value is 0.001. If the researcher decides not to reject the null hypothesis, what could be a plausible reason for this decision?

  1. Lack of statistical power.
  2. The study lacks external validity.
  3. The results are not surprising.
  4. The alpha level was set too low.
A

Lack of statistical power.

“statistical power” refers to the ability of a study to detect a true effect when it actually exists. If the study lacks statistical power, it may fail to detect significant effects even if they are present in the population.