Error & Control Flashcards
What are the 2 categories of measurement error?
- Random error: obscures the results
- Constant (systematic) error: biases the results, which is much worse
What are Extraneous Variables?
Undesirable variables that add error to our experiments (measurement of the DV)
How are Extraneous Variables controlled?
Random allocation/counterbalancing, which adds error variance.
This results in an even addition of error variance across levels of the IV.
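The random-allocation idea above can be sketched in a few lines of Python. Everything here (participant IDs, group sizes, the seed) is hypothetical and only illustrates how shuffling spreads individual differences evenly, as random error, across the two IV levels.

```python
import random

# Hypothetical participant IDs (illustrative only).
participants = list(range(1, 21))

# Random allocation: shuffle, then split into the two IV levels, so that
# extraneous variables (e.g. individual differences) are distributed
# evenly across conditions as random error rather than systematic error.
random.seed(42)  # fixed seed so the sketch is reproducible
random.shuffle(participants)
group_a = participants[:10]   # IV level 1
group_b = participants[10:]   # IV level 2
```

Any single run may still produce unequal groups on some characteristic; random allocation only guarantees that, on average, no characteristic is systematically tied to one IV level.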
What are Confounding Variables?
Variables that disproportionately affect one level of the IV more than other levels.
- Add constant/systematic error at the level of the IV.
Confounding variables threaten the internal validity of experiments.
Threats to Internal Validity (sources of confounding variables):
- Selection
- History
- Maturation
- Instrumentation
- Reactivity
How is Selection a Threat to Internal Validity?
Bias resulting from the selection or assignment of participants to different
levels of the IV.
Random assignment solves this problem.
How is History a threat to Internal Validity?
Uncontrolled events that take place between testing occasions.
How is Maturation a threat to Internal Validity?
Intrinsic changes in characteristics of participants between different test occasions, in repeated-measures designs.
Counterbalancing the order of conditions solves this problem.
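Counterbalancing can be sketched as follows; condition labels and participant names are hypothetical. Each possible order of conditions is used equally often, so maturation effects (practice, fatigue) do not pile up on one condition.

```python
from itertools import permutations

# Two conditions in a repeated-measures design (labels are illustrative).
conditions = ["A", "B"]

# All possible presentation orders: [('A', 'B'), ('B', 'A')].
orders = list(permutations(conditions))

# Assign participants to orders in rotation, so half do A-then-B and
# half do B-then-A; order effects are balanced across the IV levels.
participants = ["P1", "P2", "P3", "P4"]
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
```

With more than two conditions the number of full orders grows factorially, which is why designs often fall back on a Latin square rather than every permutation.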
How is Instrumentation a threat to Internal Validity?
Changes in the sensitivity or reliability of measurement instruments during the course of the study
How is Reactivity a threat to Internal Validity?
Ps' awareness that they are being observed may alter their behaviour.
This threatens internal validity if Ps are more influenced by reactivity at one level of the IV than at the other.
Counteracted by Blind Procedures.
Define Subject Related - Demand Characteristics.
Ps might behave in the way they think the researcher wants them to behave.
Define Experimenter related - Experimenter bias.
Experimenter can affect outcomes due to their own bias.
4 forms of reliability.
- Test-retest reliability
- Inter-rater (test-rater) reliability
- Parallel forms reliability
- Internal consistency
> Split-Half Reliability
Define Test-retest Method of reliability
It assesses the external consistency of a test.
It measures fluctuations in scores from one time to another.
This is important for constructs we expect to remain stable over time (e.g. personality type).
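Test-retest reliability is usually indexed by the correlation between scores at the two time points. Below is a minimal sketch with a hand-rolled Pearson correlation; the scores are entirely hypothetical.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same five Ps at two time points.
time1 = [10, 12, 14, 16, 18]
time2 = [11, 12, 15, 15, 19]

# A high r indicates good test-retest reliability: Ps keep roughly
# the same rank order from one occasion to the next.
r = pearson_r(time1, time2)
```

Note that a high correlation only shows stable rank ordering; a uniform practice effect (everyone improving by the same amount) would leave r high even though absolute scores shifted.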
Define Inter-rater (test-rater) reliability
It assesses the external consistency of a test.
Measures fluctuations between observers (the degree to which different raters give consistent estimates of the same behaviour)
Important when results depend on each experimenter's objectivity.
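One common index of inter-rater reliability for categorical codings is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch, with hypothetical codings of ten behaviours by two observers:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    # Proportion of items the two raters coded identically.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    expected = sum((c1[c] / n) * (c2[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of the same 10 behaviours by two observers.
r1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
r2 = ["yes", "yes", "no", "no", "no", "no", "yes", "yes", "yes", "yes"]

kappa = cohens_kappa(r1, r2)
```

Kappa runs from below 0 (worse than chance) to 1 (perfect agreement); it is lower than raw percent agreement because some agreement would occur by chance alone.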
Define External Reliability (Parallel forms reliability)
The extent to which a measure varies from one use to another.
Define Internal Reliability
Extent to which a measure is consistent with itself
What is the Split-Half Method of Reliability?
It assesses the internal consistency of a test.
It measures the extent to which all parts of the test contribute equally to what is being measured.
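The split-half method can be sketched as: split the items into two halves (odd vs even items is typical), correlate the half scores across participants, then apply the Spearman-Brown correction, since the correlation of two half-length tests underestimates the full test's reliability. All item scores below are hypothetical.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: each row is one participant's answers to 6 items.
scores = [
    [3, 4, 3, 4, 2, 3],
    [5, 5, 4, 5, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [4, 4, 5, 4, 4, 5],
]

# Split into odd- and even-numbered items and total each half per person.
odd_half = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]

r_half = pearson_r(odd_half, even_half)
# Spearman-Brown correction: estimate full-test reliability from the
# correlation between the two half-length tests.
r_full = (2 * r_half) / (1 + r_half)
```

Because the corrected coefficient depends on which split was chosen, internal consistency is often reported instead as Cronbach's alpha, which averages over all possible splits.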
Define Content Validity.
Is the content of the test appropriate to the construct being measured?
Define Face Validity
Based on subjective judgement: does the test appear, on the face of it, to measure what it claims to measure?
Define Construct Validity.
Does the test relate to underlying theoretical concepts? Is the construct we are trying to measure valid?
The validity of a construct is supported by cumulative research evidence collected over time.
Define Convergent validity.
Convergent validity: correlates with tests of the same and related constructs.
Define Discriminant validity.
Discriminant validity: doesn’t correlate with tests of different or unrelated constructs.
Define Internal Validity.
The extent to which the manipulation of our IV caused the change in our DV.