Exam 2 Lecture 2 Flashcards
True story of being a scientist
Asked to critique a study that looked at a group of people who drink too much.
PROTOCOL: Heart rate measured
- During a relaxation exercise
- When they were shown photos of booze
- As they told a favorite story about drinking
- While they smelled and tasted their favorite drink
- Measured craving after all of this
RESULTS: Heart rate changed during the storytelling
CONCLUSION: Craving causes changes in heart rate measures
I don’t believe you!
- The heart and lungs are linked: when you inhale, your heart rate accelerates. What does talking do to your breathing?
When you concentrate, your heart rate (and blood pressure) changes
Maybe concentrating on all this changes HR, not craving
We need a way to vet whether a study was done correctly and whether the data are meaningful… whether we should believe what we read/see/hear
Validity testing
When you are testing something that is vague, murky, complicated, multifaceted, squishy, or subjective (most things being studied are), there needs to be some way to determine whether you are actually testing that something.
VALIDITY TESTING
Using a measure that measures what you think you’re measuring
Validity
Does an online IQ test really measure intelligence?
Does the Myers-Briggs test really reflect who you are?
Before you believe a conclusion, you need to believe the process
Designing a study that studies what you think you’re studying
Validity
Studying how well antidepressants work by studying differences in SAT scores between AD users and non-users. NOT VALID
Studying how people respond to social situations using a lab-based experiment where people watch movie clips about awkward social situations while hooked up to physiological sensors
MAYBE
Before you believe a conclusion, you need to believe __________
The process
Validity helps measure what?
The APPROPRIATENESS of your data and data collection protocol.
The validity of a measurement tool (questionnaire, equipment) must be tested. The validity of a study must also be tested.
There are many different types of validity.
Validity is NOT about precision. It’s about RELEVANCE
Is validity about precision or relevance?
Relevance
Validity of a measurement examples
You want to assess stress levels of college students. What data should you collect?
- Blood glucose
- How much deodorant they wear
- Answer to “you stressed rn?”
- Antibiotic use in past week
- A 106-item survey to measure stress
- Salivary cortisol at wake time
There are implications to all of these: some don't even assess stress levels, and others are unhelpful indicators or may generate too much noise/error compared to the alternatives.
Validity of a Study protocol
Am I actually answering the question I’m asking?
Ex: Is heart rate (HR) different after a stroke?
Protocol: Collect HR before, during, and after a stroke using an electrocardiogram
- a plethysmogram
- counting carotid pulse
- asking spouse about patient palpitations
Which are valid? They are all valid, but the measurement strategies vary in how precise/accurate they are.
Validity (measuring what you think you’re measuring). When is it easier to demonstrate and when is it harder to demonstrate?
- Easier to demonstrate when the study is about tangible/concrete construct (body composition, enzyme activity, heart rate)
- Harder to demonstrate when measuring complex ideas (fatigue and stress), human behavior, and self-reported data (attitudes and feelings)- these days, we measure A LOT of complex things
TESTING and PROVING validity is critical when measuring things that are
Hard to define, hard to measure, and hard to quantify
What are the types of validity?
MEASUREMENT VALIDITY aka CONSTRUCT VALIDITY
and
STUDY VALIDITY
What are the subcategories of MEASUREMENT AKA CONSTRUCT VALIDITY
- Content validity
- Face validity
- Criterion validity (Convergent and discriminant)
What are the subcategories of STUDY VALIDITY
- Internal validity
- External validity
What does MEASUREMENT VALIDITY AKA CONSTRUCT VALIDITY aim to ask
Are YOUR DATA relevant to your measure?
What does STUDY VALIDITY aim to ask
Is your study design relevant to answer the question you are asking?
Measurement Validity aka Construct Validity
What is the difference between constructs and indicators?
Construct= A concept or idea you want to measure
Sometimes it isn’t directly measurable, so you get a variety of relevant (but not exact) data called indicators/items
There are a variety of strategies to determine whether the indicators/items (aka- the data you actually collected) really capture the construct (aka the big idea that you were trying to measure/measurement goals)
Construct vs. Indicators (Alcohol Example)
Construct= Alcohol misuse
Indicator= Frequency of use (days per week)
Indicator= Quantity used (drinks per occasion)
Indicator= Hangover severity (symptoms checklist)
Indicator= Consequences (negative outcomes checklist)
All of the indicators are used to determine the construct.
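One common way to turn several indicators into a single construct score is to standardize each indicator and average them. The sketch below is purely illustrative; the data values and the simple z-score averaging are assumptions for demonstration, not a method prescribed in the lecture.

```python
# Illustrative sketch: combining indicators into one construct score
# ("alcohol misuse"). Data values are made up; the z-score composite
# is one simple aggregation strategy among many.

def zscores(values):
    """Standardize raw scores to mean 0, SD 1 (population SD)."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Each list holds one indicator across four participants
frequency    = [2, 5, 1, 7]    # days per week
quantity     = [3, 8, 2, 10]   # drinks per occasion
hangover     = [1, 6, 0, 9]    # symptom-checklist score
consequences = [0, 4, 1, 8]    # negative-outcomes checklist score

indicators = [zscores(x) for x in (frequency, quantity, hangover, consequences)]

# Composite construct score = mean of the standardized indicators per participant
composite = [sum(col) / len(col) for col in zip(*indicators)]
print(composite)  # higher = more indicator evidence of the construct
```

Whether such a composite really captures the construct is exactly what construct-validity testing asks.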
Construct vs. Indicators (Exercise Difficulty Example)
Construct= Exercise difficulty
Indicator= Knowledge (breath control)
Indicator= Experience (fitness history)
Indicator= Preparedness (good sneakers)
Indicator= Motivation (enthusiasm to try)
Most psychological states require a bunch of questions to get at. This 21-item survey measures 3 psychological states. Explain.
21-item survey= 21 indicators
3 psychological states= 3 constructs
To have construct validity, what three things must you test?
- Face validity
- Content validity
- Criterion validity
What is construct validity?
The way you are trying to answer your question makes sense based on what is currently known
Do the tools you are using to answer your question capture all relevant, current knowledge of the construct?
Constructs change definitions over time as knowledge expands
- Optimal nutrition (“good fat” etc)
- Autism is a spectrum (very complicated history)
Constructs require differentiation from similar constructs
- Anxiety versus depression
- Fatigue from overexertion, depression, sleep disorders, thyroid conditions
Face Validity- a component of Construct Validity
Face validity= "Yep, makes sense."
- On the surface, your measure seems appropriate for what you are trying to measure
- On the surface, it seems relevant and related
This is a more subjective measure of validity.
- This type of validity might vary depending on who is participating in the study and what the overall study design is
- This is one place where bias can sneak in
- Generational differences
- Racial/ethnic/cultural differences