Test 2 Flashcards

1
Q

T/F Every sample will have an error.

A

True

2
Q

People who respond to a survey

- Not only consumers

A

Respondents

3
Q

Collecting data by having people answer a series of questions

A

Survey

4
Q

A portion of the population that’s being surveyed

A

sample survey

5
Q

The sample differs from the population of interest

A

sampling error

6
Q

When there’s a flaw in the survey design

A

Systematic Error

7
Q

The respondent did or did not do something

A

Respondent Error

8
Q

Something a participant DID NOT do

A

Non-Response Error

9
Q

Not contacted or refused to do the survey

A

Non-Respondents

10
Q

When the participant decides to take the survey or not

A

Self-Selection Bias

11
Q

Something a participant DID intentionally or unintentionally

A

Response Bias

12
Q

Deliberately lying or giving a false answer, for example because the respondent is guessing or bored

A

Deliberate Falsification

13
Q

The participant is confused on how to answer because the question is vague or ambiguous

EX: How do you rate your education?

A

Unconscious Misrepresentation

14
Q

Participants agreeing with everything

A

Acquiescence Bias

15
Q

Scoring at the extremes of a scale, higher or lower than one's true value

A

Extremity Bias

16
Q

Interviewer's characteristics or body language influence participants' responses

A

Interviewer Bias

17
Q

Responding because it is a socially accepted answer or to gain esteem

A

Social Desirability Bias

18
Q

A mistake the researcher makes, intentionally or unintentionally, in how the data were gathered, or an improper survey design

A

Administrative Error

19
Q

A mistake made in the data-entry phase, such as inputting the data incorrectly

A

Data Processing Error

20
Q

The researcher selected the sample incorrectly

A

Sample Selection Error

21
Q

The source or list from which participants were selected is flawed

A

Sample Frame Error

22
Q

There are questions about how the results are being measured

A

Measurement Bias

23
Q

The interviewer is doing something wrong, such as changing words in the question or not fully recording responses

A

Interviewer Error

24
Q

Interviewer makes up the number of participants or participants' responses

A

Interviewer Cheating

25
Percentage of people who responded out of the total number of people contacted - This is usually around 5%
Response Rates
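A minimal Python sketch of the response-rate arithmetic above; the counts are invented for illustration, not taken from the cards:

def response_rate(responded, contacted):
    # percentage of contacted people who actually responded
    return 100.0 * responded / contacted

print(response_rate(50, 1000))  # 5.0, roughly the "around 5%" noted above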
26
A brief letter that is sent with a survey to explain what the survey is about
Cover Letter
27
The 5 Cover Letter Functions/Purposes
1. Identifies the surveyor & sponsor 2. Explains the purpose of the survey 3. Why the respondent was selected 4. Provides the incentive for participating 5. Qualifying/Screening questions
28
Personal Interview advantages?
- Opportunity for feedback - Probing complex answers - Length of interview - Completeness of questionnaire - Props and visual aids
29
Personal Interview Disadvantages?
Disadvantages: - Interviewer bias - Anonymity - Expensive
30
Personal interviews conducted in a shopping center or similar public area
Mall Intercepts
31
Personal interviews conducted at respondents' doorsteps in an effort to increase the participation rate in the survey
Door to Door
32
Personal interview that is conducted over the telephone
Telephone Interview
33
What is the only survey method where the researcher is not involved?
Self-administered methods like: - Internet, cell phone, & email surveys - Mail questionnaires - Drop offs - Point of sale
34
The interviewer travels to the respondent's location to drop off questionnaires that will be picked up later
Drop Offs
35
Survey requests distributed through electronic mail
E-mail Surveys
36
Email Survey Advantages?
Advantages: - Speed - Lower cost - More flexibility - Less manual processing
37
Email Survey disadvantages?
Disadvantages: - Possible lack of anonymity - Spam filters - Problems with successful delivery
38
A self-administered survey administered using a Web-based questionnaire
Internet Survey
39
Directs participant to more questions based upon their responses
Branching
40
Inserts the text of participants' previous responses
Piped text
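A hedged Python sketch of how branching and piped text behave conceptually; real tools such as Qualtrics configure this declaratively, and the question wording here is made up:

answers = {}

def ask(key, text):
    # piped text: insert earlier responses into the question wording
    answers[key] = input(text.format(**answers) + " ")

ask("drink", "What drink did you order?")
# branching: which follow-up appears depends on the earlier response
if answers["drink"].lower() != "nothing":
    ask("rating", "How would you rate the {drink} you ordered (1-5)?")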
41
CATI
Computer-Assisted Telephone Interviews - Randomly dial phone numbers
42
Percentage of people who clicked on the survey
Click-Through Rate
43
Screening procedure that involves a trial run with a group of respondents to discover problems in the survey design
pretesting
44
Describing some property of a phenomenon of interest, usually by assigning numbers.
measurement
45
Degree to which someone meets a certain criterion, single variable - IS NOT correlated EX: Social class
Index Measure
46
Assigning a value based on a mathematical derivation of multiple variables - IS correlated EX: Restaurant satisfaction scales
Composite Measure
47
Adding everything together, the sum
Summated Scale
48
Total of the variables / Number of variables
Average
49
The value assigned for a response is treated oppositely from the other items
Reverse Coding
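A minimal Python sketch tying together the summated scale, average, and reverse coding cards above; the 1-5 scale and the item scores are assumed for illustration:

scores = [4, 5, 2, 4]   # item responses on an assumed 1-5 scale
reverse_coded = {2}     # index of a negatively worded item

# reverse coding on a 1-5 scale: 1<->5, 2<->4, 3 stays 3
adjusted = [6 - s if i in reverse_coded else s for i, s in enumerate(scores)]

summated = sum(adjusted)            # summated scale: add everything together
average = summated / len(adjusted)  # average: total of the variables / number of variables
print(adjusted, summated, average)  # [4, 5, 4, 4] 17 4.25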
50
Used to classify something into categories or labels - Have nominal or ordinal properties
Categorical Questions
51
Number that expresses a quantity of the property being measured
Metric Questions
52
Have 5 or more scale points
Metric Scales
53
T/F You can take the average for a categorical question.
FALSE. A mode, frequency, or percentage must be used.
54
T/F You can take the average for a metric question.
TRUE
55
T/F Slider scales are not metric.
FALSE. Yes, they are.
56
2 response options to choose from, such as yes or no - CAN select more than 1 answer ("Select all that apply" questions) - There's no correct answer
Dual Choice
57
3 or more response options to choose from - ONLY 1 answer can be selected - There's 1 correct answer
Multiple Choice
58
T/F Questions that say "Select all that apply" are considered dual-choice questions.
True
59
Number is descriptive of the property being measured - IT IS meaningful
Natural
60
Number is an artificial measure of some quantity the participant DOES NOT see - IS NOT meaningful
Synthetic
61
One Variable ranks higher than another
order
62
A certain score is higher than another Ex: 2 is 1 point higher than 1
distance
63
Providing consistent data & reproducible results; precise
Reliability
64
Represents a measure's homogeneity or the extent to which each indicator of a concept converges on a common meaning
Internal Consistency
65
Splitting the scale in half to produce similar scores - This will show if they are correlated or not
Split Half Method
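A minimal NumPy sketch of the split-half idea; the respondent-by-item scores are invented, and the odd/even split is just one common way to halve a scale:

import numpy as np

# rows = respondents, columns = scale items (invented data)
items = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
])

half_a = items[:, ::2].sum(axis=1)   # odd-numbered items
half_b = items[:, 1::2].sum(axis=1)  # even-numbered items

# a high correlation between the halves suggests the scale is internally consistent
print(round(np.corrcoef(half_a, half_b)[0, 1], 2))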
66
Administering the same scale or measure to the same respondents at two separate times to test for stability
Test - Retest Method
67
The accuracy of a measurement or a score that truthfully represents a concept - Purchase intent = purchase!
Validity
68
The items look like what they are intending to measure
Face Validity
69
1 measure is associated with another measure
Criterion Validity
70
Satisfaction & future purchases
Predictive Validity
71
Satisfaction & re-patronage intentions
Concurrent Validity
72
Correlated items that measure the same thing
Convergent Validity
73
Items can be correlated, but should not be correlated too highly
Discriminant Validity
74
The respondent ranks something in order based on overall preference
Ranking Task
75
Ranking Task issues
Issues: - Ordinal measurement - Alternatives not included - Outside of choice set - Can't tell differences
76
Rating how important the attributes are
Rating Scales
77
Asking participants to rate the degree of their agreement, such as "strongly agree, agree, neutral, disagree, & strongly disagree"
Likert Scale
78
A scale where participants describe their attitudes using a series of positive & negative attributes EX: Happy or Sad, Serious or Fun, Formal or Casual, etc.
Semantic Differential
79
Rating everything positively - This can be prevented by flipping the positive & negative attributes
Halo Effect
80
"Anchors" a participant's score along a point value by 2 anchors - Metric - Can be measured by finding the Average EX: Rating between Poor & Excellent
N-Pointed Anchored Scale
81
Also called a "Slider Scale" - Metric - Can be measured by finding the Average
Graphic Rating Scale
82
ONLY scoring on 1 end or the other of a scale
End Piling
83
Having an equal number of both positive & negative options - Most used by Marketers
Balanced
84
NOT having an equal number of both positive & negative options - This reduces the likelihood of end piling
Unbalanced
85
The participant HAS TO answer the question - Limited errors
forced choice
86
More than 1 question EX: How satisfied are you with Skybar's music? How satisfied are you with Skybar's cleanliness? How satisfied are you with Skybar's atmosphere?
Multiple Item
87
1 question ONLY EX: What's your OVERALL satisfaction with Skybar?
Single Item
88
Will it answer the research questions?
Relevant
89
How will the data be measured?
accuracy
90
Close ended questions where respondents are given specific, limited-alternative responses & are asked to choose the one closest to their own viewpoint
Fixed alternative questions
91
A question that suggests or implies certain answers - Bandwagon Effect - Partial mention of alternatives EX: "Don't you see problems with using your credit card online?"
Leading Questions
92
A question that suggests a socially desirable answer or is emotionally charged EX: "Should people be allowed to protect themselves from harm by using a taser as self-defense?"
Loaded Questions
93
Wording the question so respondents think "everyone is doing it"
Bandwagon Effect
94
Introductory statement to a potentially embarrassing question that reduces a respondent's reluctance to answer by suggesting that certain behavior is not unusual EX: "Some people have the time to brush their teeth three times per day, but others do not. How often did you brush your teeth yesterday?"
Counterbiasing Statement
95
Asking 2 things in 1 question EX: "How would you rate the associate's knowledge & helpfulness?"
Double-Barreled Question
96
Wording the question with additional information - This HELPS a respondent remember their experience
Aided Recall
97
Wording the question without any additional information - This DOES NOT HELP a respondent remember their experience
Unaided Recall
98
- The first questions asked - Are used to select the participants who meet the specific criteria required to take the survey
Screening Questions
99
- Are asked immediately after screening questions - Shows the respondent that the survey is easy to complete & generates interest
Warm up questions
100
- Are asked after major sections of questions or changes in question format - Notifies the respondent that the subjects of questions will change
Transition Questions
101
- Are asked in the middle or close to the end - Respondent is close to completing the survey & is informed there are not many questions remaining
Complicated & Difficult-to-Answer Questions
102
- Are asked at the very end - These are personal & possibly offensive questions
Classification & Demographic Questions
103
Results when how the questions are ordered affects the way a person responds or when the choices provided favor 1 response over another
Order Bias
104
The ordering of questions throughout a survey - Asking a question that does not apply to the respondent may be irritating or cause a biased response
Survey Flow
105
Focusing on 1 answer & comparing all other answers to it
Anchoring
106
Starting with broad questions then gradually getting into more specific questions - Allows researchers to understand the respondent's frame of reference before asking more specific questions EX: How satisfied are you with your overall life? How satisfied are you with your finances? How satisfied are you with your significant other? How satisfied are you with your career?
Funnel Technique
107
Screens out respondents who are not qualified to answer a second question - "Screening questions" - These usually provide an N/A option for respondents who cannot answer the question
Filter Questions
108
Software programs like Qualtrics that allow special features to facilitate survey design
Survey Technology
109
Having a friend take the survey before its launched to discover problems
Pretest Composition
110
A smaller group of people selected from the entire population
sample
111
A group of people with similar characteristics
population
112
EVERYONE in a population is selected EX: 2010 United States Census
Census
113
Who do we sample?
the people we are trying to understand
114
Why sample?
- Pragmatic reasons (less cost, less time, etc.) - Accurate & reliable results - Destruction of test units
115
A list of elements from which a sample may be drawn - Also called "Working population"
Sampling Frame
116
Occurs when certain sample elements are not listed or are not accurately represented in a sampling frame - Almost every list excludes some members of the population
Sampling Frame Error
117
Companies who maintain lists of people who are willing to participate in marketing research
Sampling Services
118
Lists of respondents who have agreed to participate in marketing research along with the email contact information for these individuals
Online Panels
119
The difference between the sample result & the result of a census - Larger sample size decreases these errors
Random Sampling Error
120
The difference between the sample value & the true value of the population mean - Function of n - Margin of error
Chance Variation
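Both error cards above shrink as n grows; one common way to quantify that is the 95% margin of error for a sample proportion, sketched here in Python (this is the standard simple-random-sampling approximation, not a formula stated on the cards):

import math

def margin_of_error(p, n, z=1.96):
    # 95% margin of error for a sample proportion under simple random sampling
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1600):
    print(n, round(margin_of_error(0.5, n), 3))  # larger n, smaller margin of error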
121
Errors in the execution of the study's design EX: How the researcher selects the sample
Systematic Non-Sampling Error
122
Sampling procedure that ensures that various subgroups of a population with a certain characteristic will be represented to the exact extent that the researcher wants - NOT randomly selected - Also called "Demographically-matched sampling" EX: A set number participants who own cats
Quota Sampling
123
ARE NOT random samples because they are a convenience sample - People make the choice to participate or not - Randomly select sampling units (survey software can help) EX: Every nth visitor, or the visitor must stay on the page for 30 seconds EX: Frequent visitors
Website Visitors
124
Every participant has a chance of being selected - The chance is KNOWN
Probability Sampling
125
Every participant has a chance of being selected - The chance is UNKNOWN
Non-Probability Sampling
126
T/F Market research usually relies on probability sampling.
FALSE. It usually relies on non-probability sampling.
127
Sampling people who are easy to find or gather data from EX: Using Facebook to find participants
Convenience Sampling
128
Using an experienced researcher's judgment to select the participants - Test market cities - Incidence rates
Judgement Sampling
129
Specific cities to sample from because they have a diverse population
Test Market Cities
130
The percentage of participants with the characteristic needed
Incidence Rates
131
Asking initial respondents to refer additional respondents to take the survey - Similarity - Focus groups - This works best with LOW incidence rates EX: Asking a respondent who has had plastic surgery to list others they know who have gotten plastic surgery too
Snowball sampling
132
Assures each element in the population has an equal chance of being selected - Assigning a number then randomly selecting - Random digit dialing EX: Winning the lottery because there's a 1 out of 10 chance the ball will be your number
Simple Random Sampling
133
A starting point is selected by a random process & then every nth number on the list is selected - Initial starting point is created using a random number generator - Skip interval
Systematic Sampling
134
A skip interval is calculated by dividing
Population size / Desired sample size
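A small Python sketch covering simple random sampling and systematic sampling with the skip interval from the card above; the 1,000-element frame and sample size of 50 are invented:

import random

population = list(range(1, 1001))  # invented sampling frame of 1,000 elements
n = 50                             # desired sample size

# simple random sampling: every element has an equal, known chance of selection
srs = random.sample(population, n)

# systematic sampling: random starting point, then every k-th element
k = len(population) // n           # skip interval = population size / desired sample size
start = random.randrange(k)
systematic = population[start::k][:n]

print(len(srs), len(systematic), k)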
135
Simple random sub-samples that are more or less equal on some characteristic are drawn from within each stratum of the population - Similar to a Quota sampling, but this IS randomly selected - Has a select stratification variable
Stratified Sampling
136
This must be a characteristic of the population elements - Is known to impact the DV - Is a grouping variable - The mean is analyzed EX: Customer firm size
Stratification Variable
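A hedged Python sketch of stratified sampling that draws a simple random sub-sample within each stratum; the customer-firm-size strata and sample sizes are invented for illustration:

import random

# invented frame: (customer_id, firm_size), with firm size as the stratification variable
frame = [(i, "small" if i % 3 else "large") for i in range(1, 301)]

def stratified_sample(frame, per_stratum):
    strata = {}
    for element, stratum in frame:
        strata.setdefault(stratum, []).append(element)
    # draw a simple random sub-sample within each stratum
    return {s: random.sample(members, per_stratum) for s, members in strata.items()}

sample = stratified_sample(frame, per_stratum=10)
print({s: len(v) for s, v in sample.items()})  # 10 elements from each stratum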
137
Randomly selecting clusters or elements within subgroups - 1 step versus 2 step
Cluster Sampling
138
Selecting a cluster or multiple clusters based upon where they are geographically
Area Cluster Sampling
139
A tendency for respondents to agree with the viewpoints expressed by a survey
Acquiescence Bias
140
An error caused by the improper administration or execution of the research task
Administrative Error
141
Attempts to recontact those sample members missed in the initial attempt
Call backs
142
Letter that accompanies a questionnaire to induce the reader to complete and return the questionnaire
Cover Letter
143
A category of administrative error that occurs because of incorrect data entry, incorrect computer programming, or other procedural errors during data analysis
Data Processing Error
144
A survey method that requires the interviewer to travel to the respondent's location to drop off questionnaires that will be picked up later
Drop off method
145
A category of response bias that results because some individuals tend to use extremes when responding to questions
Extremity Bias
146
Communication that allows spontaneous two-way interaction between the interviewer and the respondent
Interactive Survey Approaches
147
Potential respondents in the sense that they are members of the sampling frame but who do not receive the request to participate in the research
No Contacts
148
Two-way communication by which respondents give answers to static questions that do not allow a dynamic dialog
Noninteractive Survey Approaches
149
The statistical differences between a survey that includes only those who responded and a perfect survey that would also include those who failed to respond
Nonresponse Error
150
Refers to some true value of a phenomenon within a population
Population Parameter
151
Screening procedure that involves a trial run with a group of respondents to iron out fundamental problems in the survey design
Pretesting
152
A bias that occurs when respondents either consciously or unconsciously answer questions with a certain slant that misrepresents the truth
Response Bias
153
A more formal term for a survey emphasizing that respondents' opinions presumably represent a sample of the larger target population's opinion
Sample Survey
154
Error resulting from some imperfect aspect of the research design that causes respondent error or from a mistake in the execution of the research
Systematic Error
155
Errors due to the inadequacies of the actual respondents to represent the population of interest
Sampling Error
156
Sampling errors are caused by
- Method of sampling used - Size of the sample
157
Sampling errors are reduced by
- Increasing the size of the samples - Using an appropriate sampling method
158
Types of Survey Methods:
1. Person-administered 2. Self-administered 3. Telephone-administered
159
Data collection methods that require the presence of a trained human interviewer who asks questions and records the subject's answers (in-home, mall-intercept)
Person-Administered Surveys
160
Advantages of Person-Administered Surveys:
- Adaptability - Rapport - Feedback - Quality of responses
161
A _________ is a scale type that has respondents describe their attitude using a series of bipolar rating scales.
semantic differential
162
A ____ ______ assigns numbers and letters for identification
nominal scale