W2 Flashcards

(79 cards)

1
Q

What are the types of scaling techniques?

A
  • Comparative scales:
    • Paired comparison
    • Rank order
    • Constant sum scaling
  • Non-comparative scales:
    • Continuous rating scale
    • Itemised rating scale:
      • Likert scale
      • Semantic differential scale
2
Q

What are comparative scales?

A

They compare objects directly, like asking whether you prefer A or B. They show relative preferences, not absolute ones.

3
Q

What is a paired comparison scale?

A

You pick one option from a pair based on a criterion, like choosing BMW over Porsche. It’s simple but slow when there are many pairs.

4
Q

What is a rank order scale?

A

You rank items in order of preference. It feels natural and quick, but only shows relative likes, not actual differences in preference.

5
Q

What is constant sum scaling?

A

You divide a fixed number (like 100 points) across items based on importance. It’s more accurate and feels like real decisions, but more work for participants.

6
Q

What are the pros of comparative scaling techniques?

A

They feel realistic and require fewer assumptions. They reduce carryover effects and are easier to understand.

7
Q

What are the cons of comparative scaling techniques?

A

They only show relative preferences and are harder to generalize beyond the specific items being compared.

8
Q

What are non-comparative scales?

A

Each item is rated independently. You’re not comparing options directly but scoring them one by one.

9
Q

What is a continuous rating scale?

A

You mark a spot on a line that goes from one extreme to another, like rating involvement from ‘not at all’ to ‘extremely’. It’s flexible but hard to score.

10
Q

What is an itemised rating scale?

A

It has defined response categories, like a 5-point or 7-point scale. It’s easier to score than continuous scales.

11
Q

What is a Likert scale?

A

A 5-point scale where you say how much you agree with each statement. It’s easy to use but responses like ‘neutral’ can be confusing.

12
Q

What is a semantic differential scale?

A

A 7-point scale with opposite adjectives at each end, like ‘strong - weak’. It helps compare things with subtle meanings.

13
Q

What are the pros of non-comparative scaling techniques?

A

They give absolute scores, are easy to understand, and allow detailed analysis. They’re great for surveys with many different objects.

14
Q

What are the disadvantages of non-comparative scales?

A

They rely more on assumptions and scoring can be tricky. Continuous scales can be unreliable.

15
Q

What are the guidelines for creating good itemised rating scales?

A
  • Use 5 to 9 categories
  • Keep the scale balanced
  • Use odd numbers if a neutral response is okay
  • Use even numbers if you want to force a choice
  • Match labels to what you’re asking
16
Q

What is a single-item scale?

A

It asks one simple question, usually about surface-level things like age, gender, or brand liking.

17
Q

What is a multi-item scale?

A

It uses several related questions to measure a deeper concept like satisfaction or motivation.

18
Q

When should you use single-item scales?

A

  • When the object being rated is simple (e.g., a brand)
  • When the rating is simple and clear to everyone (e.g., how much they like an ad)

19
Q

When should you use multi-item scales?

A

When the concept is abstract and needs multiple questions to be measured well, like employee wellbeing or job satisfaction.

20
Q

What is a reflective scale?

A

All items reflect the same construct, like asking the same thing in different ways. Example: “I eat healthy,” “I don’t eat junk,” “I have a balanced diet.”

21
Q

What is a formative scale?

A

Each item measures a different part of the construct. Example: “I have a balanced diet,” “I exercise,” “I sleep well” measure overall health.

22
Q

What is validity?

A

Validity is how well a measurement reflects the true differences of the concept being measured — not just error or noise.

23
Q

What does high validity imply?

A

It means the measurement has little systematic or random error and is accurately capturing what it’s supposed to.

24
Q

What is measurement error?

A

It’s the difference between the observed score and the true score. It includes both random and systematic error.

25
What is the formula for observed score?
X₀ = Xₜ + Xₛ + Xᵣ (Observed score = True score + Systematic error + Random error)
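A toy walk-through of the decomposition (every number here is invented for illustration):

```python
# Toy illustration of X0 = Xt + Xs + Xr; all values are invented.
true_score = 70        # Xt: the respondent's true value
systematic_error = 3   # Xs: e.g., everyone reads the question too positively
random_error = -1      # Xr: e.g., a momentary lapse of attention

observed_score = true_score + systematic_error + random_error  # X0 = 72
```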
26
Why do we consider measurement error?
Because participants might misunderstand a question, get distracted, or have a bad day — so we can't assume perfect accuracy.
27
What does a systematic error look like?
Consistent errors that are off-target but grouped together — like everyone slightly misinterpreting a question the same way.
28
What does a random error look like?
Errors scattered all over the place with no clear pattern — unpredictable and inconsistent mistakes.
29
What is convergent validity?
It checks if items that are supposed to measure the same thing actually correlate highly with each other.
30
What is considered high, moderate, and low correlation in convergent validity?
High: ≥ 0.6, Moderate: 0.2–0.6, Low: < 0.2
31
What are we looking for in convergent validity?
We want items from the same construct (like B001, B002, B003 for 'willingness to pay') to have high correlations.
32
What does it mean if items in the same construct have low correlation?
It’s a problem for convergent validity — it means the items may not be measuring the same thing.
33
What is discriminant validity?
It checks that items from different constructs (e.g., 'willingness to pay' vs. 'environmental concern') have low correlation.
34
What do low correlations in discriminant validity indicate?
That the items are unrelated as they should be — this supports the discriminant validity of the constructs.
35
What kind of numbers are we hoping to see for discriminant validity?
Low correlations (e.g., under 0.2 or even negative, like -0.2 to -0.8) to show that the constructs are distinct.
36
What is the difference between reliability and validity?
Reliability is about consistency: a reliable measure has little random error. Validity is about measuring the right thing. Reliability is necessary but not sufficient for validity.
37
What is internal consistency reliability?
It checks if multiple items measuring the same concept give consistent results.
38
What is split-half reliability?
You split the items in a scale into two halves and see if the results from both halves are similar.
39
What is coefficient alpha (Cronbach’s Alpha)?
A measure of how closely related a set of items are. It uses the number of items and their average correlation.
40
What does a high Cronbach’s Alpha mean?
It means items are strongly related, showing high internal consistency.
41
What are the general rules for Cronbach’s Alpha values?
  • Below 0.6 = bad
  • 0.6 to 0.7 = moderate
  • 0.7 to 0.8 = sufficient
  • 0.8 to 0.9 = very good
  • 0.9 to 1 = suspiciously high
42
What does Cronbach’s Alpha measure exactly?
It measures the level of agreement across test items — higher values mean better reliability.
43
What is the formula for Cronbach’s Alpha?
α = (N * 𝑐̄) / (v̄ + (N − 1) * 𝑐̄), where N = number of items, 𝑐̄ = average covariance, and v̄ = variance
44
How can you calculate alpha in R?
Use the alpha() function from the psych package; it is commonly used alongside factor analysis.
45
What is the purpose of factor analysis?
It helps simplify many variables into a few factors that represent patterns in the data.
46
What are the main aims of factor analysis?
  1. Reduce many variables into fewer factors
  2. Structure correlated variables
  3. Understand unmeasured constructs
  4. Handle measurement error
47
What is a population in research?
The entire group of people, items, or events you want to make conclusions about.
48
What is a census?
A complete measurement of every element in the population.
49
What is a sample?
A subgroup of the population that is selected to represent the whole group.
50
What is the first step in the sampling design process?
Define the target population — the full group that has the info the researcher is looking for.
51
What is an element in sampling?
A single member of the population, like one person or one household.
52
What is a sampling unit?
The specific item or person available for selection, such as a household or a phone number.
53
What is the second step in sampling design?
Determine the sampling frame — a list or method that helps you identify and reach the target population.
54
What are examples of a sampling frame?
Customer databases, phonebooks, or lists of registered users.
55
Why is a sampling frame important?
It helps you reach your target population accurately, but frames can be hard to obtain and may be outdated.
56
What are some issues with sampling frames?
People missing from the frame, such as those without phone numbers or those who bought something anonymously, can skew the results.
57
What if your sampling frame is missing or not good enough?
You can choose a non-probability sampling method, redefine your population, screen participants, or adjust your data.
58
When should you move on to selecting sampling techniques?
Once your sampling frame is fully adequate — if not, fix it or switch to non-probability methods.
59
What are probability sampling techniques?
Sampling methods where each element has a known chance of being selected. Results can be generalized to the population.
60
What are non-probability sampling techniques?
Methods where elements do not have a known chance of being selected. Less generalizable but easier and cheaper.
61
What is convenience sampling?
You pick the easiest people to reach, like students or online surveys. It’s quick and cheap, but not representative.
62
What is judgemental sampling?
You choose participants based on expert judgment. Useful but subjective and not very generalizable.
63
What is quota sampling?
You select people to match a set quota (e.g., 50% male, 50% female). Easy to control groups but not fully representative.
64
What is snowball sampling?
You start with a few participants and ask them to refer others. Great for rare populations but highly biased.
65
What is simple random sampling?
Everyone has an equal chance to be selected. It’s fair and easy to understand but doesn’t guarantee perfect representation.
66
What is systematic sampling?
You choose every nth person from a list (e.g., every 10th). It’s simple but can be biased if there’s a pattern in the list.
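The two techniques above can be sketched in Python (the population of 100 IDs is invented, and the seed is set only so the illustration is reproducible):

```python
import random

population = list(range(1, 101))  # invented population of 100 element IDs

random.seed(42)  # reproducible illustration only

# Simple random sampling: every element has an equal chance of selection.
srs = random.sample(population, k=10)

# Systematic sampling: a random start, then every nth element (here n = 10).
n = len(population) // 10          # sampling interval
start = random.randrange(n)        # random start within the first interval
systematic = population[start::n]
```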
67
What is stratified sampling?
You divide the population into subgroups (strata) and sample from each. It’s precise but takes time.
68
What is proportionate stratified sampling?
The sample size from each subgroup matches its proportion in the population.
69
What is disproportionate stratified sampling?
You sample more or less from some subgroups than their population size suggests, based on research needs.
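A sketch of proportionate stratified sampling in Python; the two strata "A" and "B" and the 70/30 split are invented for illustration:

```python
import random

# Invented population: 70 members of stratum "A" and 30 of stratum "B".
population = [("A", i) for i in range(70)] + [("B", i) for i in range(30)]

def proportionate_stratified(pop, total_n):
    """Draw from each stratum in proportion to its share of the population."""
    strata = {}
    for label, member in pop:
        strata.setdefault(label, []).append((label, member))
    sample = []
    for label, members in strata.items():
        k = round(total_n * len(members) / len(pop))  # proportional allocation
        sample.extend(random.sample(members, k))
    return sample

random.seed(1)
sample = proportionate_stratified(population, total_n=10)
# Stratum "A" (70% of the population) supplies 7 of the 10 draws.
```

For the disproportionate variant, you would replace the proportional allocation `k` with whatever per-stratum sizes the research design calls for.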
70
What is cluster sampling?
You split the population into clusters and randomly select entire clusters. It’s cheap and easy but less accurate.
71
What is two-stage cluster sampling?
You first pick clusters, then randomly select individuals or households within those clusters. Useful for large areas.
72
How does two-stage cluster sampling work in practice?
Example: First pick random city blocks, then pick every 5th or 7th house in each block.
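The block-and-house example can be sketched in Python (the city of 10 blocks with 20 houses each is invented):

```python
import random

# Invented city: 10 blocks (clusters), each with 20 addressed houses.
blocks = {b: [f"block{b}-house{h}" for h in range(20)] for b in range(10)}

random.seed(7)  # reproducible illustration only
# Stage 1: randomly select 3 whole blocks.
chosen_blocks = random.sample(sorted(blocks), k=3)
# Stage 2: within each chosen block, take every 5th house (as in the card).
sample = [house for b in chosen_blocks for house in blocks[b][::5]]
# 3 blocks x 4 houses per block = 12 sampled houses
```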
73
What is a coverage error?
It happens when the sampling frame doesn’t match the target population. Some people are left out or wrongly included.
74
What are two types of coverage error?
  • Under-coverage: true population members are excluded
  • Mis-coverage: non-population members are included
75
What can you do if your sample is small and you notice coverage error?
You can sometimes ignore it, but with large samples you should redefine the population based on the frame.
76
What is a sampling error?
It’s a deliberate deviation caused by the sampling technique — often due to practical limits like cost or access.
77
What causes sampling error?
  • Interest in specific groups (e.g., quota sampling)
  • Lack of access to some people (e.g., snowball sampling)
  • Budget issues (e.g., cluster sampling)
78
Why should you be careful about sampling choices?
Because both coverage and sampling errors can mess up your results if not handled properly.
79
What is sample sizing related to?
  • Confidence and power levels (higher levels need bigger samples)
  • Effect size (smaller effects need bigger samples)
  • Survey complexity (more complex surveys need bigger samples)
  • Model complexity (more complex models need bigger samples)
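The first two points can be made concrete with a sketch in Python. The formula used here is a standard textbook normal approximation for comparing two group means, not something stated on the cards:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two means (normal approximation).

    Uses n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2, rounded up,
    where d is the standardised effect size (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ~0.84 for power = 0.80
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)
```

As the card says, a smaller effect needs a bigger sample: `n_per_group(0.2)` is far larger than `n_per_group(0.5)`.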