Chapter 15: Survey Research Flashcards

(46 cards)

1
Q

Question Wording

A
  1. Use the LANGUAGE of the Research Participants
  2. Avoid Unnecessary NEGATIVES
  3. Ask Only ONE Question at a Time
  4. Avoid LEADING and Loaded Questions
  5. Be SPECIFIC
  6. Do Not Make ASSUMPTIONS
    - familiarity with the topic
  7. Address SENSITIVE Topics Sensitively
2
Q

Leading question

A

implies that a certain response
is desired

3
Q

Loaded question

A

uses emotional content to evoke a desired response
- e.g. appealing to social values, such as freedom, and using terms with strong positive or negative connotations

4
Q

Sensitive Questions

A
  1. Intrusiveness
  2. Threat from disclosure
  3. Social sensitivity
5
Q

Examples of Intrusiveness

A

personal or household income

6
Q

Examples of Threat from disclosure

A

criminal behavior

7
Q

Examples of Social sensitivity

A

under-reported
- Alcohol consumption
- Smoking
over-reported
- Energy conservation
- Physical exercise

8
Q

Ways to avoid biased responses

A
  • Imply that the behavior is COMMON
  • Assume the behavior and ask about FREQUENCY or other details
  • Use AUTHORITY to justify the behavior
9
Q

To avoid over-reporting of socially desirable behaviors

A
  • Be casual
  • Justify not doing something
10
Q

Measurement Levels

A
  1. Nominal
  2. Ordinal
  3. Interval
  4. Ratio
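
The next few cards contrast these levels by information content and by the statistics they support. A minimal illustration; the example variables and "typical statistic" for each level are conventional textbook examples assumed here, not taken from the cards:

# Illustrative only: conventional example variables and typical summary
# statistics for each measurement level (assumptions, not from the cards).
measurement_levels = {
    "nominal":  ("political party",     "mode"),
    "ordinal":  ("class rank",          "median"),
    "interval": ("temperature in °C",   "mean"),
    "ratio":    ("reaction time in ms", "mean; ratios are meaningful"),
}

for level, (example, statistic) in measurement_levels.items():
    print(f"{level:<8}  e.g. {example:<20} -> typical statistic: {statistic}")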
11
Q

Considering Measurement Levels

A
  1. Information Content
  2. Statistical Tests
  3. Ecological Validity
12
Q

Information Content

A
  1. Higher-level measures contain more information
  2. conclusions drawn on the basis of a lower-level measure may be invalid
  3. higher-level measures are more sensitive to the effects of independent variables (IVs)
13
Q

Statistical Tests

A

robustness
- the degree to which the assumptions underlying a particular statistic can be violated without leading to erroneous conclusions

14
Q

Ecological Validity

A

the natural level of measurement of a variable may not contain enough information to answer the research question, or may not be appropriate for the planned statistical analyses

15
Q

Response Formats

A
  1. Comparative Rating Scales
  2. Itemized Rating Scales
  3. Graphic Rating Scales
  4. Numerical Rating Scales
16
Q

Comparative Rating Scales

A
  • paired comparisons: present all possible pairings of the stimuli
    con
  • with many stimuli there are too many pairs
    solution
  • have respondents rank order the stimuli
    conditions for rank ordering
  • respondents are familiar with all the stimuli
  • the rating dimension is unidimensional
  • respondents understand the meaning of the dimension
17
Q

Itemized Rating Scales

A
  • multiple-choice questions
  • used for classification and to assess hypothetical constructs
  • yield nominal or ordinal data
    cons
  • respondents may skip an item whose options do not include their answer
  • respondents may be forced to choose even when no option matches their preference
18
Q

2 Factors determine the number of points to use on Numerical Rating Scales

A
  1. sensitivity of measurement
    - detect small differences in the level
  2. usability of the scale
    - A very large number of scale points can be counterproductive
19
Q

Choosing a Response Format

A
  • responses obtained with different formats are highly correlated
  • so the choice of format might not have a large effect on the results
20
Q

Advantages of Multi-Item Scales

A
  1. assess multiple aspects
    of a construct
  2. total score on a multi-item scale has greater reliability and validity
    - Adding an invalid or unreliable item decreases the scale’s reliability and validity (see the sketch below)
  3. greater sensitivity of measurement
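
A minimal sketch of point 2, using simulated data (not from the card): the internal-consistency reliability (Cronbach's alpha) of a multi-item total score, and how adding one unreliable item lowers it:

# Cronbach's alpha for a respondents-by-items score matrix, with a
# demonstration that an unrelated "noise" item reduces the scale's alpha.
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 200 respondents: a latent attitude plus item-specific noise.
n = 200
latent = rng.normal(size=n)
good_items = np.column_stack([latent + rng.normal(scale=0.6, size=n) for _ in range(5)])
noise_item = rng.normal(size=(n, 1))      # unrelated to the latent attitude

print("alpha, 5 good items:     ", round(cronbach_alpha(good_items), 2))
print("alpha, plus 1 noise item:", round(cronbach_alpha(np.hstack([good_items, noise_item])), 2))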
21
Q

Types of Multi-Item Scales

A
  1. Likert Scales
  2. Thurstone Scales
  3. Guttman Scales
  4. Semantic Differential
22
Q

Likert Scales

A
  1. Write a large number of items
  2. Administer the items to a large number of respondents
  3. Conduct an item analysis
    - internal consistency
  4. The items with the highest corrected item-total correlations (CITCs) comprise the final scale (see the sketch below)
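
A minimal sketch of the item analysis in step 3, using simulated data: each item's corrected item-total correlation (CITC) is its correlation with the sum of the other items, and the highest-CITC items are retained:

# Corrected item-total correlations on simulated Likert-type responses.
import numpy as np

rng = np.random.default_rng(1)
n_resp, n_items = 300, 10
latent = rng.normal(size=n_resp)

# Most items track the latent attitude; the last two are weak items.
loadings = np.array([0.9] * 8 + [0.2, 0.1])
responses = latent[:, None] * loadings + rng.normal(size=(n_resp, n_items))

def corrected_item_total_corr(data: np.ndarray) -> np.ndarray:
    total = data.sum(axis=1)
    return np.array([
        np.corrcoef(data[:, j], total - data[:, j])[0, 1]  # exclude item j from its own total
        for j in range(data.shape[1])
    ])

citc = corrected_item_total_corr(responses)
keep = np.argsort(citc)[::-1][:8]          # e.g. retain the 8 highest-CITC items
print("CITC per item:", np.round(citc, 2))
print("items retained for the final scale:", sorted(keep.tolist()))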
23
Q

Thurstone Scales

A

Items represent attitudes ranging from highly positive through neutral to highly negative
criteria for the items
  1. they must represent the entire range of attitudes
  2. they must have very low variance in their judged favorability

24
Q

Disadvantages of Thurstone Scales

A
  1. difficult to create
  2. assumes unidimensionality
  3. raters’ attitudes influence their assignment of scale values to items
  4. the scale values that judges assign to items can change over time
25
Q

Guttman Scales

A

a set of ordered attitude items
- a respondent will agree with all items up to a point
- and disagree with all items after that point (see the sketch below)
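
A minimal sketch of the cumulative pattern described above; the scoring rule (count agreements up to the first disagreement) follows from the card's definition, and the response data are invented:

from typing import List

def guttman_score(agreements: List[bool]) -> int:
    """Items are ordered from easiest to hardest to endorse; the score is the
    number of consecutive agreements before the first disagreement."""
    score = 0
    for agrees in agreements:
        if not agrees:
            break
        score += 1
    return score

def fits_guttman_pattern(agreements: List[bool]) -> bool:
    """True if all agreements come before all disagreements
    (a perfectly cumulative response pattern)."""
    return agreements == sorted(agreements, reverse=True)

print(guttman_score([True, True, True, False]))          # -> 3
print(fits_guttman_pattern([True, False, True, False]))  # -> False (non-cumulative)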
26
Q

Semantic Differential

A

rate the concept on sets of bipolar adjective pairs
con
- one must ensure that the adjective pairs chosen for the scale items are relevant to the concept being assessed (see the sketch below)
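
A minimal sketch (invented adjective pairs and ratings) of how semantic differential ratings are commonly combined into one evaluation score; reverse-keying pairs whose positive pole sits on the right-hand side is an assumption here, not something stated on the card:

# Score one concept on a 7-point semantic differential (1 = left pole, 7 = right pole).
adjective_pairs = {            # (left pole, right pole): side of the positive pole
    ("good", "bad"): "left",
    ("weak", "strong"): "right",
    ("pleasant", "unpleasant"): "left",
}

ratings = {
    ("good", "bad"): 2,
    ("weak", "strong"): 6,
    ("pleasant", "unpleasant"): 3,
}

def score_pair(pair, rating, scale_max=7):
    # Higher score = more positive evaluation of the concept.
    return (scale_max + 1 - rating) if adjective_pairs[pair] == "left" else rating

scores = [score_pair(p, r) for p, r in ratings.items()]
print("per-pair scores:", scores)                      # [6, 6, 5]
print("overall evaluation:", sum(scores) / len(scores))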
27
Q

Response Biases

A
  1. Question-Related Biases
  2. Person-Related Biases
  3. Cultural Response Sets
28
Q

Question-Related Biases

A
  1. Scale AMBIGUITY
    - label as "frequency" -> respondents rely on the numbers
  2. Category ANCHORING
    - respondents use the amounts provided in the response options as cues for what constitutes an appropriate response
  3. Option PHRASING
    - respondents read an option phrased in terms of “more than” a certain amount as representing an excessive amount
  4. ESTIMATION Biases
  5. Respondent INTERPRETATION of Numeric Scales
  6. PRIMACY and Recency Effects
29
Q

Estimation Biases

A

open-ended questions asking for frequency and amount
- respondents estimate how often the behavior occurs rather than count specific instances of the behavior
30
Q

Respondent Interpretation of Numeric Scales

A

- when the scale starts at ‘0’, respondents interpret the low end as the absence of the characteristic
- when the same scale runs from ‘−5’ to ‘+5’, respondents interpret the negative values as explicit failures
31
Q

Primacy and Recency Effects

A

- written questionnaires: people tend to choose an item from near the top of a list of response options (primacy)
- during interviews: respondents are more likely to choose items near the end of the list (recency)
solutions
- keep the question and its response options simple
- present the options in a random order (see the sketch below)
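
A minimal sketch of the random-order remedy: each respondent sees the response options in a fresh random order, so no option systematically benefits from primacy or recency (the option wording is invented):

import random

options = [
    "Lower taxes",
    "Better schools",
    "Safer streets",
    "Cleaner environment",
]

def options_for_respondent(options, seed):
    """Return a per-respondent shuffled copy; a seed keeps each order reproducible."""
    rng = random.Random(seed)
    shuffled = options[:]       # copy so the master list keeps its original order
    rng.shuffle(shuffled)
    return shuffled

for respondent_id in range(3):
    print(respondent_id, options_for_respondent(options, seed=respondent_id))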
32
Q

Person-Related Biases

A
  1. Social Desirability
  2. Acquiescence
  3. Extremity
33
Q

2 Forms of Social Desirability

A
  1. Self-deceptive enhancement
    - unknowingly
    - part of personality
  2. Impression management
    - intentionally
    - depends on motivation
34
Q

Acquiescence

A

the tendency to agree (or disagree) with statements regardless of their content
- when respondents lack the skill or motivation
- when the question requires a great deal of thought
- when respondents are unsure about how to respond
35
Q

Controlling for acquiescence

A

use a balanced measure
- half “positive items”
- half “negative items” (see the scoring sketch below)
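
A minimal sketch of scoring such a balanced measure (item wording invented); reverse-scoring the negatively keyed items before summing is standard practice assumed here, and it makes pure acquiescence cancel out instead of inflating the total:

POINTS = 5                          # 1-5 agreement scale

items = [                           # (item text, keying) -- wording is invented
    ("I enjoy meeting new people", "positive"),
    ("I find social events draining", "negative"),
    ("Talking to strangers is easy for me", "positive"),
    ("I avoid large gatherings", "negative"),
]

def scale_score(responses):
    """responses: list of 1-5 agreement ratings, one per item above."""
    total = 0
    for (text, keying), r in zip(items, responses):
        total += r if keying == "positive" else (POINTS + 1 - r)  # reverse-score negatives
    return total

yea_sayer = [5, 5, 5, 5]            # agrees with everything
print(scale_score(yea_sayer))       # -> 12, the scale midpoint rather than an extreme score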
36
Q

Extremity

A

give extreme responses on a measure, such as by using only the end points
37
Q

Cultural Response Sets

A

a cultural tendency to respond in a certain way on tests or response scales
- a measurement confound
- systematic error
38
Q

Collectivists

A

- stronger social desirability
- higher on impression management
- highly acquiescent
- prefer using the middle range of the scale
39
Q

Individualists

A

- higher on self-deceptive positivity
40
Q

Literal interpretation fallacy

A

taking the meanings of scale anchors at face value
- unwilling to go any lower
41
Q

Question Order

A
  1. Question Sequencing
  2. Context Effects
42
Q

Question Sequencing

A

- start with questions that are easy, interesting, important, and related to the stated purpose
- end with demographic questions
- move from the general to the specific
43
Q

Context Effects

A

responding to one question affects the response to a later question
- through the thoughts or emotions the earlier question arouses
44
Q

Constructing Questionnaires

A
  1. Use closed-ended questions as much as possible
  2. Use a consistent item format
  3. Use a vertical item format whenever possible
  4. Clearly indicate where one question ends and the next begins
  5. Do not split questions or response options between pages
45
Q

Using Existing Measures

A
  1. use a single response format for all items
  2. watch for Context Effects
46
Q

Context Effects

A

occur when research participants fill out two (or more) questionnaires they believe are related
solutions
- hold two separate data collection sessions
- present the questionnaires as belonging to different studies
- counterbalance the order of the questionnaires (see the sketch below)
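
A minimal sketch of the counterbalancing remedy (participant IDs and questionnaire labels are placeholders): alternate which questionnaire comes first so any order-based context effect is spread evenly across both measures:

from itertools import cycle

questionnaire_orders = cycle([("A", "B"), ("B", "A")])   # alternate the two orders

participants = ["p01", "p02", "p03", "p04", "p05", "p06"]
assignment = {p: order for p, order in zip(participants, questionnaire_orders)}

for participant, order in assignment.items():
    print(participant, "->", " then ".join(order))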