Mid-Tri Flashcards
(30 cards)
List the steps of the scientific method
- theory
- hypothesis
- measurement
- statistic
- inference
Describe reliability and validity
Reliability = the consistency or repeatability of measures.
Validity = are we measuring what we are trying to measure?
List ways to test reliability
Inter-rater or inter-observer reliability
(Used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon)
Test-retest reliability
(Used to assess the consistency of a measure from one time to another)
Parallel-forms reliability
(Used to assess the consistency of the results of two tests constructed in the same way from the same content domain)
Internal Consistency Reliability
(Used to assess the consistency of results across items within a test)
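Internal consistency is usually quantified with Cronbach's alpha. A minimal sketch (the questionnaire scores below are made up for illustration):

```python
import numpy as np

def cronbachs_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-respondent, 3-item questionnaire
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
])
alpha = cronbachs_alpha(scores)  # close to 1 when items hang together
```

Items that measure the same construct produce consistent totals, so alpha approaches 1; unrelated items drag it toward 0.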
What does reliability mean for validity
Reliability puts a ceiling on validity (a score can only be as valid as it is reliable)
What is a construct
A construct refers to a behaviour or process that we are interested in studying
Examples of manipulations
Instructional (what you tell participants)
Environmental (stage an event)
Stooges (Use fake participants)
Convergent validity
Do scores on the measure correlate with scores on other similar measures related to the construct?
Divergent validity
Do scores on the measure have low correlations with scores on other different measures that are unrelated to the construct?
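Both kinds of validity evidence come down to correlations. A sketch with simulated data (the measures and the latent construct are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
construct = rng.normal(size=n)                         # latent construct
new_measure = construct + rng.normal(scale=0.5, size=n)
similar_measure = construct + rng.normal(scale=0.5, size=n)  # taps same construct
unrelated_measure = rng.normal(size=n)                       # taps nothing related

convergent_r = np.corrcoef(new_measure, similar_measure)[0, 1]   # should be high
divergent_r = np.corrcoef(new_measure, unrelated_measure)[0, 1]  # should be near 0
```

High convergent correlations and low divergent correlations together support construct validity.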
Face validity and content validity
At face value, does the measure seem to be a good translation of the construct?
Does the measure assess the entire range of characteristics that are representative of the construct it is intending to measure?
External vs internal validity
External validity is the extent to which the results can be generalised to other relevant populations, settings or times.
Internal validity is the ability to draw conclusions about causal relationships from the results of a study.
Threats to internal validity
- selection bias
- maturation
- statistical regression
- mortality/attrition
- history
- testing
- instrumentation
- effects of studying people
- demand effects
- placebo effects
- experimenter bias
What is the t-statistic
Comparing two means.
Between groups: when there are two experimental conditions and different participants were assigned to each condition.
Repeated measures: when there are two experimental conditions and the same participants took part in both.
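The two cases can be sketched with scipy (made-up data; group labels are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Between groups: different participants in each condition
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.normal(loc=7.0, scale=1.0, size=30)
t_ind, p_ind = stats.ttest_ind(group_a, group_b)  # independent-samples t-test

# Repeated measures: the same participants measured twice
before = rng.normal(loc=5.0, scale=1.0, size=30)
after = before + rng.normal(loc=0.5, scale=0.5, size=30)
t_rel, p_rel = stats.ttest_rel(before, after)     # paired-samples t-test
```

The paired test works on within-participant difference scores, which is why it is more sensitive when the same people appear in both conditions.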
For which t-test is homogeneity of variance assumed
Independent-samples t-test
What is a one-way ANOVA
Comparing three or more means
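A minimal sketch with three simulated groups (the condition names are hypothetical; assumes scipy is available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Three conditions, different participants in each
low = rng.normal(loc=4.0, scale=1.0, size=25)
medium = rng.normal(loc=5.0, scale=1.0, size=25)
high = rng.normal(loc=7.0, scale=1.0, size=25)

f_stat, p_value = stats.f_oneway(low, medium, high)  # one-way ANOVA
```

A significant result says at least one group mean differs, not which one; follow-up comparisons are needed for that.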
What is a type 1 error
When you reject the null but it is true (Seeing an effect when there isn’t one)
What is a type 2 error
Accepting the null when there is an effect (too blind to see the effect)
What is the F-statistic and what does it tell us
The F-statistic represents the ratio of the variance explained by the model to its error variance.
It tells us that there is a difference somewhere between the groups, but not where the difference lies.
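That ratio can be computed by hand and checked against scipy's one-way ANOVA (simulated groups; a sketch, not the only way to partition the sums of squares):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (3.0, 4.0, 6.0)]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

k = len(groups)              # number of groups
n_total = all_scores.size    # total participants

# Model (between-groups) sum of squares and mean square
ss_model = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_model = ss_model / (k - 1)

# Error (within-groups) sum of squares and mean square
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_error = ss_error / (n_total - k)

f_by_hand = ms_model / ms_error                 # F = model / error
f_scipy = stats.f_oneway(*groups).statistic     # should match
```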
What are the types of multiple comparisons
Planned comparisons (contrasts) which are done prior to the data collection and test specific hypotheses.
Post-hoc analyses, which compare all groups using stricter alpha values to reduce the type 1 error rate.
What is an orthogonal contrast and a non-orthogonal contrast
An orthogonal contrast compares unique "chunks" of variance.
Non-orthogonal contrasts overlap, using the same chunks of variance in multiple comparisons, which increases the type 1 error rate.
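With equal group sizes, two contrasts are orthogonal when the dot product of their weight vectors is zero. A sketch (the three groups and their labels are hypothetical):

```python
import numpy as np

# Contrast weights for three groups: control, drug A, drug B (hypothetical)
c1 = np.array([-2, 1, 1])   # control vs. the two drug groups combined
c2 = np.array([0, -1, 1])   # drug A vs. drug B

def is_orthogonal(a: np.ndarray, b: np.ndarray) -> bool:
    """Orthogonality check for contrast weights with equal group sizes."""
    return np.dot(a, b) == 0

# c3 re-uses the control-vs-drug variance already claimed by c1,
# so it is NOT orthogonal to c1
c3 = np.array([-1, 1, 0])   # control vs. drug A
```

Each weight vector sums to zero, and orthogonal sets carve the between-groups variance into non-overlapping chunks.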
When are polynomial contrasts used
When the IV is ordinal (its levels have a meaningful order) and we want to test for trends (linear, quadratic, etc.)
What is statistical power
Refers to the probability that we will find an effect if there is one to be found.
What are effect sizes
They are standardised measures of the magnitude of an experimental effect.
What is Alpha
Alpha is the probability that we will reject the null hypothesis when we shouldn’t (type 1 error)
What is power
The probability of correctly rejecting the null hypothesis when it is false.
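Alpha, effect size, and power fit together in one picture: fix alpha, assume a true effect size, and ask how often the test rejects the null. A simulation sketch (the chosen d and n are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
alpha = 0.05   # type 1 error rate we are willing to accept
d = 0.8        # assumed true effect size (Cohen's d, a "large" effect)
n = 30         # participants per group

# Estimate power by simulation: how often do we reject the null
# when the effect is really there?
n_sims = 2000
rejections = 0
for _ in range(n_sims):
    a = rng.normal(loc=0.0, scale=1.0, size=n)
    b = rng.normal(loc=d, scale=1.0, size=n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1
power = rejections / n_sims
```

Increasing n, increasing the true effect size, or loosening alpha each pushes the estimated power upward.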