Unit 2: Chapter 5-Variables in Research Flashcards
(20 cards)
Distinguish between independent variables and dependent variables. Give examples.
The independent variable is the one the researcher manipulates; the dependent variable is expected to change when the independent variable is altered. Example: drug dose (independent) and symptom severity (dependent).
Why is it not accurate to say that the independent variable is the cause and the dependent variable is the effect? Provide an example.
Sometimes we cannot tell which of two variables in a study is the cause and which is the effect.
(Do video games cause violent dispositions, or do violent dispositions cause kids to play video games?)
What are the levels of an independent variable?
A level is a particular value of an independent variable, such as the placebo condition versus the drug condition.
What is the difference between continuous and discrete variables?
-Discrete variables cannot take intermediate values; a person cannot commit 1.3 crimes.
-A continuous variable can take intermediate values, such as someone lifting 101.675 pounds.
Define: measurement
the process of assigning numbers to events or objects according to rules
Is a person with an IQ of 120 “twice” as smart as someone with an IQ of 60? Why or why not?
No. IQ is normed so that 100 is the average for one's age, and the scale has no true zero point; the two scores simply lie at different distances from the 100 mark, so ratios like "twice as smart" are meaningless.
Provide an alternate example from the textbook that explains the difference between reliability and validity. Give an example that is valid but not reliable, and the reverse.
Claiming "the sun rises because I wake up in the morning before it rises."
Reliable if I always wake before sunrise, but not valid (my waking does not cause the sunrise).
Define reliability of a test measure. What are the types of reliability of measures? Give an example of each.
Test-retest reliability and internal consistency reliability
1-Test-retest: take the same test again and get the same results
2-Internal consistency: split the test questions in half and see whether the person scores similarly on both halves (i.e., whether the halves measure the same thing)
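The split-half procedure above can be sketched in a few lines of Python. The six-item responses and the odd/even split are made-up illustrations, and the Pearson correlation is written out by hand to keep the sketch self-contained:

```python
# Sketch of split-half (internal consistency) reliability: split a test's
# items into two halves and correlate the half-scores across test-takers.
# All item responses below are made-up illustration data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_scores(item_scores):
    """Total each person's odd-numbered and even-numbered items separately."""
    odd = [sum(items[0::2]) for items in item_scores]
    even = [sum(items[1::2]) for items in item_scores]
    return odd, even

# Each row is one test-taker's score on each of 6 items (hypothetical data).
responses = [
    [1, 1, 1, 1, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 1],
]
odd_half, even_half = split_half_scores(responses)
r = pearson_r(odd_half, even_half)
print(f"split-half correlation: {r:.2f}")
```

A high correlation between the halves suggests the items are measuring the same underlying thing; a low one suggests the test mixes unrelated content.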
List each type of validity in relation to tests and measurement and provide an example of each (4)
(1) construct validity,
(2) face validity,
(3) content validity
(4) criterion validity
Define construct validity
a test that the measurements actually measure the constructs they are designed to measure, but no others (131)
*it can readily make predictions about what it tests!
Define face validity
idea that a test should appear superficially to test what it is supposed to test (131)
i.e., can a layperson see the validity at a glance
Define content validity
idea that a test should sample the range of behavior represented by the theoretical concept being tested (132)
i.e., it measures everything chiefly relevant, not just one aspect of the concept (e.g., intelligence tests should sample many abilities)
Define criterion validity
idea that a test should correlate with other measures of the same theoretical construct (132)
i.e., an intelligence test should correlate with other things people and researchers commonly agree require intelligence
Using a bathroom scale as an example, explain how a measurement can be both reliable and invalid.
The scale can be reliably inaccurate. If it always undershoots the true weight by 2 lb, its readings are consistent (reliable) but do not reflect the true value (invalid).
Distinguish between real and apparent limits. Why are they relevant to independent variables?
Apparent limits are the specific numbers we report after rounding: we might record a height as 5'8" even though no one is exactly 5'8" tall. Real limits are the boundaries of the interval the rounded number represents: anyone between 5'7.5" and 5'8.5" would be recorded as 5'8". They matter because the levels of a continuous independent variable actually stand for intervals of values, not exact points.
Define variable of interest
when it is unclear which of two variables is the cause and which is the effect (as in correlational research), neither can be labeled independent or dependent; each is simply called a variable of interest
Define error variance
variability in the dependent variable that is not associated with the independent variable
What is the difference between concurrent and predictive validity?
Concurrent: the test correlates with a criterion measured at the same time (what is)
Predictive: the test correlates with a criterion measured later (what will be)
Define random error
This variability could be called random error (or error variance) because it is not associated with any known independent variable. (132)
i.e., slight, unpredictable errors in the measuring tool or process
Define systematic or constant error. When is it less of a problem and when is it more? Provide examples of each.
-measurement error that is associated with a consistent bias
-It is less problematic when the error affects all conditions equally (e.g., a scale that reads 2 lb low for every participant shifts all scores but preserves the differences between groups)
-It is more problematic when it varies with the independent variable and thereby confounds it (e.g., the biased scale is used for only one of the two groups)
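The contrast between random error and systematic error can be illustrated with a short simulation using the bathroom-scale analogy; the 150 lb true weight, the ±2 lb noise range, and the 2 lb bias are made-up numbers:

```python
# Sketch contrasting random error with systematic (constant) error,
# using a bathroom-scale analogy. All numbers are hypothetical.
import random

random.seed(0)
true_weight = 150.0

# Random error: unpredictable noise around the true value; it tends to
# average out over many measurements.
noisy_readings = [true_weight + random.uniform(-2, 2) for _ in range(1000)]

# Systematic error: a consistent bias, e.g. a scale that always reads 2 lb low;
# no amount of repetition removes it.
biased_readings = [true_weight - 2.0 for _ in range(1000)]

mean_noisy = sum(noisy_readings) / len(noisy_readings)
mean_biased = sum(biased_readings) / len(biased_readings)
print(f"mean with random error:     {mean_noisy:.1f}")  # close to 150
print(f"mean with systematic error: {mean_biased:.1f}")  # stuck near 148
```

This is why random error inflates error variance but washes out on average, while systematic error stays in the data and, if it tracks the independent variable, becomes a confound.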