Statistical Significance Flashcards

1
Q

What is Chance/Random Error?

A

Error in measurement caused by factors which vary from one measurement to another

2
Q

What are the Differences in Measurement Due to?

A

Random Error

3
Q

What are the 2 Types of Random Error?

A

Sampling Error and Measurement Error

4
Q

Types of Random Error: What is Sampling Error?

A

These are differences between the samples used in each study. Not everybody is the same: people may differ in how they respond as well as in observable characteristics (age, etc.)

5
Q

Types of Random Error: What is Measurement Error?

A

Measuring the same parameter multiple times will not give the same result every time (e.g. weight will be slightly different each day)

6
Q

What Factors Influence Random Error?

A

Random error is influenced by the degree of variability (heterogeneity) between individuals and by the sample size (the fewer people we look at, the less certain we are that our observations reflect the truth)
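A quick way to see both factors at work is to simulate repeated samples of different sizes and spreads and watch how much the sample means vary. This is a minimal illustrative sketch (not from the source; all numbers are invented):

```python
# Minimal sketch (invented numbers): how random error shrinks as sample size
# grows and as heterogeneity (standard deviation) falls.
import numpy as np

rng = np.random.default_rng(0)
true_mean = 70.0  # hypothetical "true" population value (e.g. weight in kg)

for sd in (5.0, 15.0):            # low vs high heterogeneity
    for n in (10, 100, 1000):     # small vs large samples
        # spread of sample means across 2000 repeated studies
        means = rng.normal(true_mean, sd, size=(2000, n)).mean(axis=1)
        print(f"sd={sd:>4}, n={n:>4}: spread of sample means (SD) = {means.std():.2f}")
```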

7
Q

What is Statistical Significance?

A

Measures how likely it is that any apparent differences in outcome between the treatment and control groups are real and not due to random error/chance

8
Q

What is a Confidence Interval?

A

A confidence interval is the range in which you are confident that the true value lies

9
Q

What Percentage CI do we use?

A

95% CIs are normally used, indicating we are 95% sure that the true value lies in the specified range. In other words, if we were to repeat the study 100 times, roughly 95 of the 100 intervals we calculated would contain the true value
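This repeated-study interpretation can be checked with a small simulation: compute a 95% CI in each of many simulated studies and count how often it contains the true value. A minimal sketch, assuming a normal-approximation CI for a mean (all values invented):

```python
# Sketch (assumption: normal-approximation 95% CI for a mean) showing that
# roughly 95% of intervals computed from repeated studies contain the true value.
import numpy as np

rng = np.random.default_rng(1)
true_mean, sd, n, n_studies = 70.0, 10.0, 50, 10_000

covered = 0
for _ in range(n_studies):
    sample = rng.normal(true_mean, sd, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)          # standard error of the mean
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += (lo <= true_mean <= hi)

print(f"Coverage: {covered / n_studies:.1%}")      # expect roughly 95%
```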

10
Q

What is better, Narrow or Wide CI?

A

Narrow CIs. They give a more precise estimate of the true effect

11
Q

How can CIs be narrowed?

A

Increasing the sample size

Decreasing heterogeneity, e.g. giving the drug only to men, or only to patients aged X-Y (both levers are illustrated in the sketch below)
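Both levers fall out of the usual half-width formula for a 95% CI around a mean, roughly 1.96 × SD / √n. An illustrative calculation with made-up numbers:

```python
# Illustrative only: 95% CI half-width ~ 1.96 * SD / sqrt(n) for a mean,
# so a larger n or lower heterogeneity (SD) narrows the interval.
import math

for sd in (15.0, 5.0):        # high vs low heterogeneity
    for n in (25, 100, 400):  # increasing sample size
        half_width = 1.96 * sd / math.sqrt(n)
        print(f"SD={sd:>4}, n={n:>3}: 95% CI half-width = ±{half_width:.2f}")
```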

12
Q

For a treatment to have a real difference (vs. its comparator), what would we like to be sure?

A

For a treatment to have a real difference (vs. its comparator), we would like to be sure (95% confident) that the CI for the effect estimate does not cross the ‘line of no effect’ (RR, OR or HR = 1; ARR or RRR = 0)
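As a hedged sketch of how this check might look in practice: compute the RR from a 2×2 table, build its 95% CI on the log scale, and test whether the interval crosses 1 (the counts below are invented):

```python
# Sketch with invented counts: RR and its 95% CI (log-scale normal approximation),
# then a check against the 'line of no effect' (RR = 1).
import math

a, b = 30, 170   # treatment group: events, non-events
c, d = 50, 150   # control group:   events, non-events

rr = (a / (a + b)) / (c / (c + d))
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
print("Crosses line of no effect (RR = 1)?", lo <= 1 <= hi)
```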

13
Q

What is the Null Hypothesis?

A

In hypothesis testing, we make a statement called the null hypothesis, which we are trying to reject. The null hypothesis is usually the statement that there is no difference between the new and current treatments; we want to reject it to show that our new drug works. The study should be designed to find evidence that ‘no difference’ is highly unlikely.

14
Q

What is P-Value?

A

The p-value is essentially a level of confidence. If the calculated p-value is less than 0.05 (5%), then we are 95% sure that the differences between treatments are real and not down to random error. There is still a 5% chance that the difference is due to chance alone
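For concreteness, one common way to obtain such a p-value is a chi-squared test on a 2×2 table of outcomes; a minimal sketch using scipy, with invented counts:

```python
# Sketch with invented counts: chi-squared test on a 2x2 table of outcomes,
# comparing the p-value against the conventional 0.05 threshold.
from scipy.stats import chi2_contingency

table = [[30, 170],   # treatment: events, non-events
         [50, 150]]   # control:   events, non-events

chi2, p, dof, expected = chi2_contingency(table)
print(f"p-value = {p:.4f}")
print("Statistically significant at 5%?", p < 0.05)
```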

15
Q

If the p-value is >0.05, what does this indicate?

A

Indicates the result is not statistically significant. We are not confident that the differences are real, meaning there is a higher chance that any differences are due to random error

16
Q

How can P Values be Reduced?

A

Highly efficacious treatments, large sample sizes or low heterogeneity will reduce p-values and enhance statistical significance

17
Q

Is it possible for a result to be statistically significant but not clinically significant?

A

It is possible for a result to be statistically significant and for the differences to be real, but not clinically significant, meaning these differences don’t really matter in the real world and don’t justify a change in clinical practice

18
Q

For a result to be clinically significant, what must it do?

A

For a result to be clinically significant, it must be greater than the minimal clinically important difference (MCID)

19
Q

When is a result clinically and statistically significant?

A

This occurs when the CI does not cross the line of no effect (e.g. RR = 1) AND the whole CI lies beyond the minimal clinically important difference (i.e. does not cross that line either)
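A minimal sketch of this two-part check, assuming a relative-risk scale where benefit means RR < 1 and the MCID is expressed as an RR threshold the whole CI must clear (the function and thresholds below are hypothetical):

```python
# Hypothetical check: statistically significant if the CI excludes 1;
# clinically significant if the whole CI also lies beyond the MCID.
def significance(ci_low: float, ci_high: float, mcid_rr: float = 0.90) -> str:
    statistically = not (ci_low <= 1 <= ci_high)   # CI does not cross RR = 1
    clinically = ci_high < mcid_rr                 # whole CI beyond the MCID (benefit = RR < 1)
    if statistically and clinically:
        return "statistically AND clinically significant"
    if statistically:
        return "statistically significant only"
    return "not statistically significant"

print(significance(0.40, 0.85))   # clearly beneficial
print(significance(0.92, 0.99))   # real difference, but probably too small to matter
```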

20
Q

Why is a result statistically significant but not clinically significant?

A

Usually due to large sample size

21
Q

Why is a result clinically significant but not statistically significant?

A

Usually due to small sample size

22
Q

What are Power Calculations?

A

Inform the sample size required for a trial

Powering studies appropriately (i.e. picking the right sample size) ensures the results can reach statistical significance without the trial being unnecessarily costly or unethical (see the sketch below)
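A hedged sketch of one standard power calculation, the normal-approximation sample size for comparing two proportions at 80% power and two-sided α = 0.05 (the assumed event rates are invented):

```python
# Sketch: per-group sample size for comparing two proportions using the
# standard normal-approximation formula (invented event rates).
from math import ceil
from scipy.stats import norm

p1, p2 = 0.25, 0.15        # assumed control and treatment event rates
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~0.84

n_per_group = ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2
print(f"~{ceil(n_per_group)} participants per group")
```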

23
Q

Why don’t we want a very small sample size?

A

Results may not be statistically significant

24
Q

Why don’t we want a very large sample size?

A

Would not be ethical and would be expensive

25
Q

What is a Superiority Trial?

A

Attempts to show that one treatment is better than another

26
Q

What is a Non-Inferiority Trial?

A

‘No worse than’: aims to show the new drug works at least as well as the existing treatment

27
Q

What is an Equivalence Trial?

A

Aims to show the new treatment is neither worse than nor better than the comparator, i.e. the two treatments are equivalent