Quantitative data Flashcards
(18 cards)
Classic experimental design
-Outcome/dependent variable is measured before and after the intervention
-John Stuart Mill described the experiment as the ‘method of difference.’
Why is the control group important?
-Accounts for changes in the outcome/dependent variable that might occur in the absence of treatment
-Placebo effect in clinical trials
Composition of the control group
-Aim is equivalence between the experimental and control groups
-Individuals are either randomly assigned to each group or matched on the basis of relevant characteristics
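Random assignment can be sketched in a few lines; this is an illustrative helper (the function name and even-split rule are assumptions, not a standard routine):

```python
import random

def randomly_assign(participants, seed=0):
    """Shuffle participants and split them evenly into
    experimental and control groups (hypothetical helper)."""
    rng = random.Random(seed)            # fixed seed so the split is reproducible
    pool = list(participants)
    rng.shuffle(pool)                    # randomize order before splitting
    half = len(pool) // 2
    return pool[:half], pool[half:]      # (experimental, control)

experimental, control = randomly_assign(range(20))
```

Because assignment depends only on chance, the two groups should be equivalent on both measured and unmeasured characteristics, on average.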
Problems of external validity
-Difficult to achieve in practice
-Real-life studies often have to rely on self-selection
-Hawthorne effect
Why is classic experiment not always used in management research?
-Subjects are often complex
-May not be possible to observe effects in the lab
-More pragmatic designs based on “real-world data” are available
The field experiment
-Seeks to improve external validity
-Sample is representative of the population under study
-Often used with social/economic policy objectives
-Trade-off between internal validity and greater realism
What are Quasi-experiments?
A research design that resembles an experiment but lacks full random assignment of participants to treatment and control groups. These studies are often used in real-world business settings where randomization is difficult or impossible.
What is a concept?
-Building blocks of theory and the focus of management research
-Purpose of quantitative data is to measure concepts
-Concepts can be used as dependent or independent variables
What are Single-item indicators?
Measurement tools that assess a construct using a single question or item
What are Multiple-item indicators?
Multiple-item indicators are measurement tools that use several questions (items) to assess a single construct in research. Instead of relying on one question, multiple items help capture different dimensions of a concept, improving reliability and validity in management research.
What is Measure Reliability?
The consistency of how a concept is measured
Define internal reliability
The degree to which multiple items in a scale measure the same underlying construct consistently.
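Internal reliability is commonly summarized with Cronbach's alpha; a minimal pure-Python sketch (the function names are illustrative, not from a library):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, one per scale item, each holding that
    item's scores across the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # each respondent's total score
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Items that move together across respondents push alpha toward 1; unrelated items pull it toward 0.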
Stability
-Are measures consistent over time?
-Detected by the test-retest method
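The test-retest method administers the same measure twice and correlates the two sets of scores; a sketch using the Pearson correlation (illustrative helper, assuming interval-level scores):

```python
def pearson(x, y):
    """Pearson correlation between two score lists of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Same respondents measured at time 1 and time 2;
# a high correlation suggests a stable measure.
stability = pearson([10, 12, 15, 20], [11, 12, 14, 19])
```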
What is Inter-rater agreement?
-Analysed by comparing subjective judgements of multiple researchers
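One common way to compare raters' judgements is Cohen's kappa, which corrects raw agreement for agreement expected by chance; a minimal sketch for two raters (illustrative function, not a library call):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters'
    categorical judgements of the same cases."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of cases where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if both rated at random with their own label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement and near 0 when agreement is no better than chance.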
What is measurement validity?
-Do indicators accurately measure the concept they are designed for?
What is Face validity?
-Does the measure appear reasonable?
-Essentially an intuitive process
What is Convergent validity?
-Compares measures of the same concept developed using other approaches
What is Discriminant validity?
Compares the measure with measures of related but distinct concepts developed using similar methods