Quantitative Data Collection Flashcards
Define: Operationalization
Process of translating the concepts of interest to a researcher into observable and measurable phenomena
Why is it important to study how data is collected?
- The success of a study depends on the quality of the data-collection methods chosen and employed
- Data-collection method must be appropriate to the problem, hypothesis, setting and population
When collecting quantitative data, there has to be a goodness of fit between:
- Purpose
- Design
- Research question(s) or hypotheses
- Conceptual and operational definitions
- Data collection method
What is data consistency?
- In data collection, means that the method used to collect data from each participant in the study is exactly the same or as close to the same as possible
- Minimize bias when more than one researcher gathers data
- Control of extraneous variables
- Follow data collection protocols to ensure intervention fidelity
- Ensures interrater reliability
Define: Intervention Fidelity
- A way of ensuring consistency in data collection
- Researchers must train data collectors in the methods to be used in the study so that each data collector acquires the information in the same way (e.g. training research assistants)
- Can include protocols or manuals for gathering data systematically and reliably
What are some ways researchers can implement intervention fidelity?
- Structured and rigorous training of staff
- Role playing to evaluate competency
- Checks periodically throughout study
- Regular meetings to review protocol and address complex situations
- Checklists
Define: Fidelity
Faithfulness, loyalty
Define: Interrater Reliability
- The consistency of observations between 2+ observers
- Often the % of agreement among observers
- Reported as a kappa coefficient (a statistical term; see the sketch below)
- E.g. when Gabe had to choose pictures of older people and passed them out to be evaluated, agreement might be reported as ~85% of raters judging a given photo to show a young adult
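A minimal sketch (Python, with invented two-rater data) of the difference between raw percent agreement and Cohen's kappa, which corrects agreement for chance:

```python
# Minimal sketch: percent agreement vs. Cohen's kappa for two raters.
# The ratings below are made-up illustrative data, not from any real study.

from collections import Counter

rater_a = ["young", "old", "old", "young", "old", "young", "old", "old"]
rater_b = ["young", "old", "young", "young", "old", "young", "old", "old"]

n = len(rater_a)

# Percent agreement: proportion of items both raters coded the same way
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal frequencies
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2

# Cohen's kappa corrects observed agreement for chance agreement
kappa = (observed - expected) / (1 - expected)

print(f"Percent agreement: {observed:.2f}")  # 0.88 here
print(f"Cohen's kappa:     {kappa:.2f}")     # 0.75 here
```

Note that kappa (0.75) is lower than raw agreement (0.88) because some agreement is expected by chance alone.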
What are the common methods of Data Collection?
1) Physiological measurements
2) Observational methods
3) Interviews
4) Questionnaires
5) Records or available data
What are physiological measurements?
- Data nurses gather about patients every day (e.g. VS)
- Allows for objectivity, precision and sensitivity
What are observational methods?
- Used to see how participants behave under specific conditions (e.g. children’s response to pain)
- Requires the study's observations to be consistent, guided by a systematic plan, checked and controlled, and related to scientific concepts and theories
What is reliability as it relates to evaluating measurement tools?
The consistency with which the instrument measures the concept of interest
What are the three aspects of reliability?
1) Stability (test/re-test reliability)
2) Homogeneity/internal consistency
3) Equivalence/interrater reliability (Cohen's kappa; want 80%+)
What is a stability test?
- Ability of an instrument to produce the same results with repeated testing
- The same test is administered again within a given interval and the results are compared (they should be the same)
- E.g. give the same questionnaire more than once (see the sketch below)
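A minimal sketch of test-retest (stability) reliability, assuming the common approach of correlating scores from two administrations of the same instrument; the scores are invented:

```python
# Minimal sketch of test-retest (stability) reliability: correlate scores
# from two administrations of the same questionnaire. Scores are invented.

import math

time1 = [12, 18, 25, 30, 22, 15, 28, 20]  # first administration
time2 = [13, 17, 26, 29, 21, 16, 27, 21]  # same instrument, weeks later

def pearson_r(x, y):
    """Pearson correlation: a common test-retest reliability estimate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A correlation close to 1 suggests the instrument is stable over time.
print(f"Test-retest r = {pearson_r(time1, time2):.2f}")
```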
What is homogeneity/internal consistency?
- Homo = same
- All of the items in a tool measure the same concept or characteristic
- Cronbach's alpha of 0.80+ tells us it is reliable (see the sketch below)
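A minimal sketch of Cronbach's alpha computed from its standard formula; the participant-by-item scores are invented:

```python
# Minimal sketch of Cronbach's alpha for internal consistency.
# Rows = participants, columns = items on the scale; data are invented.

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)"""
    k = len(rows[0])  # number of items

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in rows]) for i in range(k)]
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [  # 5 participants answering a 4-item Likert scale
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]

# Values of 0.80+ are commonly treated as acceptable reliability.
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```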
Define: Validity
- The extent to which an instrument actually measures or reflects the abstract construct it is meant to measure (e.g. is the tool actually measuring anxiety and not stress?)
- Expert opinion/expert panels
- Comparisons to other scales, other events, etc.
Define control as part of quantitative design:
- Measures that researchers use to hold the conditions of the study uniform and avoid possible impingement of bias (extraneous variables) on the dependent variable
- To control the treatment, the 1st step is to make a detailed description of the treatment; the 2nd step is to use strategies to ensure consistency in implementing the treatment
- Variations in treatment reduce the effect size and weaken internal validity
What are the four ways to control extraneous variables?
1) homogeneous sampling (similar characteristics)
2) data consistency (collected consistently for everybody in sample)
3) random selection/randomization (assignment to groups; see the sketch below)
4) manipulation of the independent variable (not seen in non-experimental designs, where it's a non-issue; in experimental designs, you would like to see all four of the above)
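A minimal sketch of random assignment (randomization) to experimental and control groups; the participant IDs and group sizes are invented:

```python
# Minimal sketch of random assignment to study groups.
# Participant IDs are hypothetical; the seed is fixed only for reproducibility.

import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

random.seed(42)
random.shuffle(participants)  # randomize the order before splitting

midpoint = len(participants) // 2
experimental = participants[:midpoint]  # receives the intervention
control = participants[midpoint:]       # receives usual care / comparison

print("Experimental:", experimental)
print("Control:     ", control)
```

Because every participant has an equal chance of landing in either group, known and unknown extraneous variables tend to be distributed evenly across groups.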
What is the difference between internal validity and external validity?
INTERNAL: extent to which study findings are “true” rather than the results of extraneous variables (factors WITHIN study design)
EXTERNAL: extent to which study findings can be generalized beyond the sample used in the study (apply findings OUTSIDE the study?)
What other factors might account for the changes in dependent variables?
1) Maturation (longitudinal study, things change naturally over time not d/t study)
2) History (an event outside the study influences the sample during the study period)
3) Mortality (who drops out of study? How good are results if you lose a lot of people?)
4) Instrumentation (how reliable and valid are tools we are using?)
5) Testing (taking the same test more than once can itself change later scores)
6) Selection bias (people who self-select to be in study)
Under what conditions and population could the same results be expected? (external validity)
- Selection effects (who is in study)
- Reactive effects (Hawthorne effect: people act differently when they know they are being observed; behavior returns to normal after prolonged observation)
- Measurement effects (if tools are reliable and valid then this is a non-issue)
What are some threats to validity?
- Rosenthal Effect: change in participant behaviors d/t researcher expectations; a self-fulfilling prophecy
- Double-blind procedures are a means of reducing bias by ensuring that both those who administer tx and those who receive it do not know which study participants are in the control and experimental groups
- Halo effect: tendency of judges to overrate a performance because participant has done well in an earlier rating or when rated in a different area (e.g. students with high marks in the past may receive a high grade on a substandard paper d/t this effect)
Describe how we critique validity:
- Are there threats to the internal validity of the study? (6 things – history, maturation, etc.)
- Does the design have controls at an acceptable level for threats to internal validity? (4 things of control – homogenous sampling, randomization, etc.)
- What are the limits to generalizability in terms of external validity? (who the sample is, selection, reactive/Hawthorne effect, etc.)
How do we critique measurement and data collection in quantitative studies?
- How were data collected? Are data collection methods clearly described?
- Identify all methods of measurement. Are validity and reliability of each instrument described? Are validity and reliability levels adequate?
- Interview questions—do questions address concerns expressed in the problem statement?
- Is the training of data collectors clearly described and adequate?