Data Analysis and Presentation Flashcards
(16 cards)
Standard Deviation
- Estimates the variability of the population from which the sample was drawn.
- Shows how widely scattered the measurements are around the mean.
Standard Error:
- Indicates how close the sample mean is likely to be to the population mean. Often reported because it is smaller than the SD, so error bars on a graph look tidier.
- Not a good measure of variability because it is influenced by sample size.
How are standard error and sample size related?
As sample size increases, standard error decreases, since SE = SD / √n.
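The SD/SE relationship above can be sketched with Python's standard library; the sample values here are hypothetical, purely for illustration:

```python
import math
import statistics

# Hypothetical sample of resting heart rates (illustrative values only).
sample = [68, 72, 75, 70, 74, 69, 73, 71]

sd = statistics.stdev(sample)            # sample standard deviation
se = sd / math.sqrt(len(sample))         # standard error of the mean

# Quadrupling the sample size halves the SE, since SE = SD / sqrt(n).
se_4n = sd / math.sqrt(4 * len(sample))
```

Note that the SE is always smaller than the SD for n > 1, which is why SE error bars look tidier on a graph.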
Confidence Interval
- The certainty that a range (interval) of values contains the true population value, i.e. the value that would be obtained if the experiment were repeated on the same population.
- Represents the level of confidence a researcher may have that the true population value is contained within the interval.
What information does a CI give?
Magnitude
Direction of an effect
Range of uncertainty
Clinical value
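A 95% CI for a mean can be computed with the standard library alone; this sketch uses the normal approximation and hypothetical sample values (a t critical value would give a slightly wider interval at this small n):

```python
import math
import statistics

# Hypothetical sample (illustrative values only).
sample = [68, 72, 75, 70, 74, 69, 73, 71]

mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))

# 95% CI: mean plus/minus the critical z value times the SE.
z = statistics.NormalDist().inv_cdf(0.975)   # about 1.96
ci_low, ci_high = mean - z * se, mean + z * se
```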
Effect size:
- Is the magnitude or amount of change between groups
- Main finding of a quantitative study
- In the abstract and results section
- Not affected by the sample size (like the SD)
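One common effect-size measure, Cohen's d, divides the difference in group means by the pooled SD; the group scores below are hypothetical:

```python
import math
import statistics

# Hypothetical outcome scores for two groups (illustrative values only).
control   = [10, 12, 11, 13, 12]
treatment = [14, 15, 13, 16, 15]

# Cohen's d: difference in group means divided by the pooled SD.
n1, n2 = len(control), len(treatment)
s1, s2 = statistics.stdev(control), statistics.stdev(treatment)
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd
```

Because d is scaled by the SD rather than by n, collecting more subjects does not by itself inflate it, unlike a p-value.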
What is the difference between the effect size and the p-value?
- Effect size shows the amount or magnitude of a change; the p-value tells you whether the difference is statistically significant.
What is power?
The probability that your study will find a statistically significant difference between interventions when an actual difference does exist (a true positive, i.e. high sensitivity; missing a real difference is a false negative, a Type II error).
How can you increase power?
- Bigger sample size
- Use an intervention that has a bigger effect
- Use gold standard measurements
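The first two levers above can be illustrated with a rough power calculation; `power_two_sample` is a hypothetical helper using the normal approximation, not an exact t-test power formula:

```python
import math
import statistics

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison (normal approximation)."""
    nd = statistics.NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    noncentrality = d * math.sqrt(n_per_group / 2)
    return 1 - nd.cdf(z_crit - noncentrality)

# Bigger samples and bigger effects both raise power:
low_n  = power_two_sample(d=0.5, n_per_group=20)
high_n = power_two_sample(d=0.5, n_per_group=64)
big_d  = power_two_sample(d=0.8, n_per_group=20)
```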
When and why is an ANOVA used?
- Used when more than 2 groups are being compared
- Works with categories
- Expected to see variances in samples
- Is sensitive to outliers
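The idea behind a one-way ANOVA (comparing between-group variance to within-group variance) can be sketched from first principles; the three groups here are hypothetical:

```python
import statistics

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA, computed from first principles."""
    data = [x for g in groups for x in g]
    grand_mean = statistics.mean(data)
    k, n = len(groups), len(data)
    # Between-group variability: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group variability: spread of values around their own group mean.
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical groups; the third clearly differs from the first two.
f_stat = one_way_anova_f([1, 2, 3], [2, 3, 4], [8, 9, 10])
```

A large F means between-group differences dominate within-group noise; the p-value would then come from the F distribution with (k-1, n-k) degrees of freedom.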
What should be done after an ANOVA detects a change?
- Run a post hoc test, such as Tukey's multiple comparisons test, to find where the differences are.
What is a disadvantage of multiple tests?
- It increases the chance of a false-positive (spuriously significant) result.
- The fewer times you repeat the test, the better; wait a while before testing again to see the extent of any change.
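The inflation of false positives from repeated testing follows directly from the arithmetic; the values here assume m independent tests at alpha = 0.05:

```python
# With m independent tests each run at alpha = 0.05, the chance of at
# least one false positive is 1 - (1 - alpha)**m, not alpha itself.
alpha, m = 0.05, 10
fwer = 1 - (1 - alpha) ** m          # roughly 0.40 for 10 tests
bonferroni_alpha = alpha / m         # a common corrected per-test threshold
```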
Why should p-values be considered along with effect size, sample size, and study design?
Because p-values:
- do not provide clinical insight into important variables such as treatment effect size, magnitude of change, or direction of the outcome.
- are influenced by factors such as the number and variability of subjects, as well as the magnitude of the effect
Efficacy:
The benefit of an intervention compared to control or standard treatment under ideal conditions, including compliant subjects only.
Describe how power is related to sample size and effect size
Power analysis determines the number of subjects needed in a study to detect a statistically significant difference for a given effect size.
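This relationship can be sketched as a sample-size calculation; `n_per_group` is a hypothetical helper using the standard normal-approximation formula, which slightly understates the exact t-test answer:

```python
import math
import statistics

def n_per_group(d, power=0.80, alpha=0.05):
    """Approximate subjects per group for a two-sample comparison
    (normal approximation)."""
    nd = statistics.NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_beta = nd.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
```

For a medium effect (d = 0.5) at 80% power, this gives roughly 63 subjects per group; a larger effect needs fewer subjects.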
Why are CIs appropriate for reporting results of clinical trials?
Because they focus on confidence of an outcome occurring, rather than accepting or rejecting a hypothesis