Module 7 Flashcards

(46 cards)

1
Q

In survey research

A

concepts are operationalized through questions, and observations consist of recording respondents’ answers to these questions.

A survey consists of many questions, usually across a wide range of question types. The terms survey and questionnaire are often used interchangeably. Those who answer the questions are generally referred to as respondents.

2
Q

Cross-sectional survey vs longitudinal survey

A

Cross-sectional surveys: capture a snapshot of a population at a specific point in time. They’re like a Polaroid camera, freezing one moment for analysis.

Longitudinal surveys : track the same group of respondents over time, much like a time-lapse video. They help to identify trends.

3
Q

Polls:

A

contain just a single question or a few questions. Polls can thus be thought of as a special type of short survey.

Because they are so short, polls only allow for descriptive research. Descriptive research is research that draws a detailed picture of the current state of affairs. It’s like a magnifying glass, examining the prevalence of a phenomenon or attitudes within a group. Surveys, on the other hand, allow for explanatory research: they allow deep diving into the reasons behind certain outcomes. They’re the research world’s version of Sherlock Holmes, unraveling theories of association.

4
Q

A census:

A

is a survey of the entire population. It does not use a sampling method. All members of the population participate in the census.

Surveys differ from censuses in their use of sampling a population. Surveys do not attempt to collect data for every member of the population.

5
Q

When to use survey research?

A

In survey research, all concepts are operationalized through questions, and observations consist of recording respondents’ answers to these questions. This research strategy is therefore particularly suited for studies in which individuals (consumers, managers, investors, etc.) are the unit of analysis.
Survey research is particularly useful for discovering individuals’ perceptions, opinions, attitudes, and behaviours.

6
Q

Guidelines for question wording

A

Use simple words to increase your respondents’ understanding
Always use the simplest words possible to communicate effectively. More difficult words are more likely to confuse respondents.

Avoid jargon/abbreviations unless your respondents widely understand these

Avoid long sentences
Always use the shortest form of a question to communicate effectively. Longer questions are more likely to confuse and bore respondents. In any case, drop unnecessary adjectives. Also, avoid sub clauses if possible.

Avoid using ambiguous terms that may have individually defined meanings

Avoid double negatives
Questions that are negatively phrased may confuse respondents. Double negatives are even worse and may cause respondents to answer just the opposite of what they meant.

Avoid double-barreled questions
A double-barreled question occurs when you ask two different questions in one, but allow for only one answer. Double-barreled questions are questions that ask about two or more different topics or aspects in one.

Avoid leading questions
Leading questions are questions that influence or persuade (“lead”) the respondent to answer in a certain way.

7
Q

Why survey questions are also called items

A

Survey researchers collect data from respondents by asking them questions in a questionnaire. Despite being called ‘questions,’ survey questions can be questions as well as phrases. That is why many researchers refer to them with the broader term “item”.

8
Q

Guidelines for creating open-ended questions

A

When asking for longer descriptions, provide statements to impress upon the respondents the importance of their response

  • Using phrases such as “this question is very important” and “please take your time answering this question” has been found to increase the length of the response (i.e., the number of words) and the time spent answering the question. However, it is best to add such phrases only when needed.

When asking for numerical responses, indicate the specific unit desired in the question stem and provide unit labels with the answer space.

9
Q

Guidelines for creating closed-ended questions

A

Closed-ended questions can have several formats. Three types that are often used are:

rating questions, where the respondent is asked to rate a statement.

comparative questions, where the respondent is asked to rank order something.

categorical questions, where the respondent’s answer can fit only one category.

10
Q

Rating questions

A

Rating questions are the most popular type of survey question. Rating questions ask respondents to rate their beliefs and perceptions (e.g., their level of agreement, satisfaction, etc.) on a numerical scale. Two types of rating questions that are often used in survey research are Likert scales and semantic differential scales.

11
Q

Likert scale

A

A Likert scale question asks respondents to agree or disagree with a statement. The statement is provided alongside a 1-5 scale, where 1 is “strongly disagree” and 5 is “strongly agree.” Alternatively, other number ranges may be used (e.g., 1-7).

12
Q

Semantic differential scale

A

Semantic differential scales use a pair of polar-opposite adjectives or phrases at the extremes of the scale, on the left and the right, and respondents are asked to indicate their attitudes on what may be called a semantic space toward a particular individual, object, or event.

13
Q

Guidelines on using rating scales

A
  • Include a middle option.
  • Use 5 to 7 scale points.
  • Label the response options clearly.

Some surveys label only the end-points; others also label the midpoint. The most accurate surveys have a clear and specific label for each point that indicates exactly what it means, so there is no room for different interpretations between respondents.

14
Q

Comparative questions

A

Comparative questions are used to tap preferences between two or more objects.
Two of the most common comparative questions are

rank ordering scales

and

constant sum scales.

15
Q

Rank ordering scale

A

Respondents rank objects relative to one another, among the alternatives that are provided. For example, a fitness tracker manufacturer might ask customers to rank which product features are most important to them.

16
Q

Constant sum scale

A

Respondents divide a budget of points (often 100 points) amongst a set of options according to their personal preferences.

For example, you can ask respondents to allocate 100 points on how they spend their income. Say they spend 40 on groceries, 20 on entertainment, 30 on utilities, and 10 on miscellaneous expenses.
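Survey tools typically enforce that the allocated points sum to exactly the stated budget. A minimal Python sketch of such a check (the function name and spending categories are illustrative, not from any specific survey tool):

```python
def validate_constant_sum(allocation, budget=100):
    """Check that a respondent's point allocation uses exactly the full budget."""
    return sum(allocation.values()) == budget

# The allocation from the example above: 40 + 20 + 30 + 10 = 100
allocation = {"groceries": 40, "entertainment": 20,
              "utilities": 30, "miscellaneous": 10}
print(validate_constant_sum(allocation))  # → True
```

A response that allocates only 90 of the 100 points would fail this check and could be flagged for the respondent to correct before submission.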

17
Q

Guidelines on using comparative scales

A

Limit the number of things to rank: longer lists, especially those exceeding 10 items, become burdensome for respondents.

All of the things in the list must be things the respondents are familiar with.

Don’t overuse it

18
Q

Categorical scales

A

A categorical scale is a scale where respondents choose from a limited number of discrete answer categories. These categories can be ordered or unordered.

19
Q

Guidelines on using categorical scales

A

Make responses mutually exclusive

Provide exhaustive answers

Avoid “check all that apply”

20
Q

What is the basic logical progression for designing a survey?

A

The sequence of designing a survey follows a logical progression,

starting with a decision on the survey mode,

followed by obtaining participants’ consent.

The questions come next, ordered in a particular way.

The conclusion of the survey should express gratitude for participation and provide any follow-up instructions.

This structured approach ensures a well-designed survey that yields meaningful data for business research.

21
Q

Picking a survey mode

A

Surveys can be administered in various ways. A survey mode or survey method is the way you decide to administer or distribute your survey.

The most common survey modes are:
online
paper
telephone
face-to-face.

22
Q

Online survey

A

Benefits of online surveys:

Reach a large audience. Online surveys allow you to reach a large and global audience. You can gather responses from thousands of people at any point in time, including people on the other side of your country or of the world.

Affordable. Online surveys are the most affordable survey mode.

Templates are available. Free templates are available to help you get your survey ready quickly and easily. See, e.g., Qualtrics and SurveyMonkey.

Limitations of online surveys:

Coverage bias: Certain portions of the population may not have easy access to the internet; look closely at your target demographic to determine whether this is an issue. To eliminate coverage bias, those without internet access can be asked to complete the survey via other means.

Survey fatigue: We frequently receive requests to participate in online surveys. Whether you purchase from Amazon, pick up lunch at McDonald’s, or take a class, everyone wants us to take their online survey. This has contributed to survey fatigue, and people often fill out online surveys only when they are unhappy about something.

23
Q

major ways to recruit participants for online surveys:

A

river sampling and panel sampling.

24
Q

River sampling :

A

River sampling: the simplest approach to recruiting respondents online. It means recruiting respondents by inviting them to follow a link to a survey placed on a web page, in an email, or somewhere else where it is likely to be noticed by members of the target population.

25
Q

Panel sampling:

A

In panel sampling, researchers select members of a preassembled panel to take part in their survey. This is a great way to guarantee responses, since panel members have already agreed to participate in research. Because you know particular information about the panel members, you can ensure that all survey respondents meet specific criteria and that you’re reaching your target audience.

Two types of panels exist:

online probability panels

online non-probability panels
26
Q

Online probability panels:

A

Online probability panels select individuals for their panels through probability-based sampling methods. A few examples:

American Life Panel (RAND)

Ipsos Knowledge Panel

These panels are relatively expensive to set up and maintain. For this reason, the use of probability-based online panels remains rare.
27
Q

Paper survey:

A

Benefits of paper surveys:

Paper surveys are an excellent alternative when your target audience’s internet access or internet knowledge is limited.

Respondents give more honest answers compared to other modes.

Respondents trust paper surveys more than online surveys, since all of us have been told repeatedly not to click on links from people or organisations that we don’t know.

Limitations of paper surveys:

Cost of printing, postage, etc.

Respondents may answer only certain questions, leaving an incomplete response.

If your study requires an alternating question order, paper surveys may be too costly to support this requirement.
28
Q

Phone survey:

A

Benefits of phone surveys:

Extensive geographic access, since most people have a phone.

Easy access to a sampling frame, since phone numbers can easily be purchased from sample companies.

Interviewers can encourage respondents to answer all questions and can provide assistance if the respondent is confused about (any part of) the survey.

Limitations of phone surveys:

Intrusive, since telephone surveys are usually done without notice.

Interviewers may be perceived as telemarketers and, consequently, turn off respondents.

There is a high risk of respondents not being completely honest: giving brief answers to end the call sooner, or changing their responses because they are speaking to someone directly.
29
Q

Face-to-face survey

A

Benefits of face-to-face surveys:

Interviewers can encourage respondents to answer all questions and can assist if the respondent is confused about one or more questions.

Interviewers can take advantage of all five senses of their respondents. Aside from offering audio and visual stimuli, the researcher can let respondents touch, taste, and smell materials to support the interview.

Limitations of face-to-face surveys:

Face-to-face surveys can take longer; they can last for weeks, depending on the number of respondents needed.

Face-to-face surveys are considerably more expensive than paper, online, and phone surveys.
30
Q

Mixed-mode surveys

A

You don’t always have to use a single mode for a survey. Mixed-mode surveys combine different ways (modes) of collecting data for a single research project. You may use mixed-mode designs to address problems associated with the under-coverage of key groups of interest or to improve response rates.

One type of mixed-mode survey that is used often is the combination of an online and a paper questionnaire. A major advantage of this design is that it allows people who lack access to online surveys to participate via paper, while the researchers save money on postage for the responses collected online.
31
Q

Informed consent

A

Certain laws and regulations (such as the GDPR) require that respondents clearly agree upfront to participate in a survey study. Therefore, all surveys must start with informed consent. As the survey owner, you must communicate the following information before a person starts the survey:

the purpose of the study

what happens with the data after they are received

whether the data are confidential

respondents’ right to terminate the survey at any time

how and where respondents can obtain the results of the study once it is completed
32
Q

Question order

A

When constructing your questionnaire, the questions need not follow the order in which they are listed in your operationalization table. Some general principles about question order:

Start with the easiest, most straightforward questions, as this will encourage respondents to continue with the questionnaire. Don’t start with awkward or embarrassing questions, as respondents may give up.

Then move to questions that require more thought.

Put the most important items in the middle of the questionnaire. By this stage, most respondents should not yet be bored or tired.

Leave demographic and personal questions until the end. These questions may appear irrelevant to the stated purpose of your questionnaire and are therefore best left until the end.

When presenting questions, consider counterbalancing to minimize order effects, while still presenting them in a logical sequence that aids comprehension.
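The counterbalancing idea above can be sketched in Python: shuffle the order of a question block per respondent, so that no single ordering dominates across the sample. The question texts and per-respondent seeding scheme below are made up for illustration:

```python
import random

questions = [
    "I am satisfied with the product.",
    "The product is good value for money.",
    "I would recommend the product to others.",
]

def counterbalanced(questions, seed=None):
    """Return the question block in a reproducibly shuffled order.

    Shuffling per respondent spreads any order effects evenly,
    rather than letting one fixed ordering bias every response."""
    order = questions[:]          # copy; leave the master list untouched
    random.Random(seed).shuffle(order)
    return order

# Each respondent gets their own ordering, keyed by respondent id
for respondent_id in range(3):
    print(counterbalanced(questions, seed=respondent_id))
```

Note this is a sketch of full randomization; true counterbalancing designs (e.g., a Latin square over orderings) assign orders systematically rather than at random.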
33
Q

What to do when closing the questionnaire

A

At the end of your questionnaire:

thank the respondent for completing the questionnaire

restate who they may contact (name, email, phone) for any questions they may have

in the case of a paper survey, give instructions on how to return the questionnaire

Sometimes, you may indicate that you will make a summary of your research findings available. If you make this offer, do not forget to follow up on it! It is usually a good idea to leave an open-ended comment question at the end.
34
Q

Measurement reliability in survey research

A

Single-item measures: Single-item survey measures for abstract constructs tend to have low measurement validity, because a single item often fails to capture the full breadth and depth of an abstract construct. Using only one item may overlook important nuances within the construct.

Multi-item measures: The measurement reliability of multi-item survey measures is evaluated using Cronbach’s alpha.
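Cronbach’s alpha compares the sum of the individual item variances to the variance of respondents’ total scores: the more the items move together, the higher the alpha. A minimal Python sketch using only the standard library (the Likert responses are made up for illustration):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item measure.

    items: one list of responses per survey item, with respondents
    in the same order in every list."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 1-5 Likert responses from four respondents on three items
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # → 0.8
```

A common rule of thumb is that alpha of about 0.7 or higher indicates acceptable reliability, though the exact threshold depends on the field and the purpose of the measure.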
35
Q

Measurement validity in survey research

A

Three measurement validity threats that are specific to survey research are (i) response sets, (ii) social desirability bias, and (iii) survey-mode bias.
36
Q

Response sets

A

Response sets are a shortcut people can take when answering a series of survey questions. Especially towards the end of a long survey, people might answer all questions positively, negatively, or neutrally rather than think carefully about each question.

Yea-saying: also known as acquiescence bias; occurs when people consistently say “yes” or “strongly agree” to every question instead of thinking carefully about it.

Nay-saying: occurs when people consistently say “no” or “completely disagree” to every question instead of thinking carefully about it.

Fence-sitting: occurs when people consistently choose the middle, neutral option, suggesting that they do not have an opinion when they actually do.

In all three instances, measurement validity is hampered.
37
Q

Social desirability bias

A

Social desirability bias is the tendency of survey respondents to answer questions in a manner that others will view favourably. Respondents may over-report “good behaviour” or under-report “bad behaviour.” For example, most people would deny that they drive after drinking alcohol, because it reflects poorly on them and others would most likely disapprove. This is a measurement validity problem because the researchers do not measure what they really intend to measure.
38
Q

How to minimise the social desirability problem in survey research

A

You can minimise it by deliberately using leading and/or loaded questions that normalise the sensitive behaviour, for example: “Everybody does it at some point... have you ever...?”
39
Q

Survey-mode effects

A

A survey-mode effect is a systematic difference that is attributable to the survey mode chosen. It occurs when respondents answer at least some questions differently depending on the survey mode used.
40
Q

Expert judgement & pilot testing to assess and improve measurement validity

A

How can you assess the measurement validity of the survey measures that you constructed, and how can you improve that validity before sending out the questionnaire? Use expert judgement and pilot testing.
41
Q

Expert judgement:

A

You can ask one or more experts to comment on the extent to which your measures capture your construct definitions. Based on their comments, you can amend your questions before the next step: pilot testing.
42
Q

Pilot testing:

A

Before sending out your questionnaire to collect data, you should ALWAYS pilot-test it. Through a pilot test, you can ensure that respondents understand the questions as they are intended. You may be tempted to skip the pilot test to save time. Do not give in to this temptation! It may backfire. When you are extremely pushed for time, it is still better to pilot-test your survey using friends or family than not at all.

How many people should you include in a pilot test? Enough to capture the major variations in the data that might affect responses. There is no fixed number, but as a rule of thumb, for most student surveys the minimum number of people to include in a pilot is 10.
43
Q

Internal validity in survey research

A

Internal validity is the extent to which a study can rule out alternative explanations. To establish the true relationship between variables (e.g., X and Y), a researcher needs to remove the influence of extraneous variables. The less chance there is for “confounding” in a study, the higher the internal validity and the more confident one can be in the findings.

So, how can you improve the internal validity of survey research? By including questions related to control variables in your questionnaire!

If a third variable is related to your DV but unrelated to your IV, it will not influence the relationship between your IV and DV, and there is no need to control for it.

If a variable is related to both your DV and your IV, it may bias the relationship between your IV and DV if you do not control for it.

Response rate = number of people who answered the survey / number of people sampled for the survey.

Response rates are generally low: internal surveys, 30-40%; external surveys, 10-15%.
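The response-rate formula as a trivial Python sketch (the counts are made up for illustration):

```python
def response_rate(n_responded, n_sampled):
    """Share of sampled people who actually answered the survey."""
    return n_responded / n_sampled

# e.g., 120 completed questionnaires from 400 sampled people
print(f"{response_rate(120, 400):.0%}")  # → 30%
```

A rate of 30% would sit at the low end of the typical internal-survey range quoted above, and well above the typical external-survey range.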
44
Q

How to increase your response rate?

A

Maximize rewards of participation:
  • Show appreciation
  • Use interesting/friendly questionnaires
  • Offer tangible rewards

Minimize costs of participation:
  • Minimize time and effort required
  • Minimize the chance of respondents feeling threatened by questions

Maximize trust:
  • Ensure anonymity/confidentiality
  • Open lines of communication with participants
  • Identify the research with a well-known, legitimate organization
45
Q

Is a low response rate a problem?

A

It is important to realise that a low response rate is not always a problem. The question is whether respondents differ significantly from non-respondents. Non-response is only a problem if there are systematic differences between the characteristics of respondents and non-respondents, and if such differences affect the findings.
46
Q

Problem: How to compare respondents with non-respondents, given that the non-respondents did not return the questionnaire?

A

Solution: Compare the characteristics of early respondents with those of late respondents (e.g., respondents who only filled out the survey after the final reminder). The idea is that the characteristics of these last-minute respondents will resemble those of people who did not respond at all. If early respondents do not differ significantly from late respondents, you can infer that non-response bias is unlikely to have biased your findings.
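One way to operationalize this early-vs-late comparison is a two-sample (Welch) t-statistic on each key respondent characteristic; a |t| well below about 2 suggests no significant early/late difference on that characteristic. A stdlib-only Python sketch, with made-up ages and group sizes:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic comparing the means of two groups
    (does not assume equal variances or equal group sizes)."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Hypothetical ages: early respondents vs late (post-final-reminder) respondents
early = [34, 41, 29, 38, 45, 31, 36, 40]
late = [33, 39, 44, 30, 37, 42]
print(round(welch_t(early, late), 2))
```

In practice you would repeat this for every characteristic you can observe for both groups (age, gender, firm size, etc.) and, for a proper significance test, compare the statistic against the t-distribution with the Welch-Satterthwaite degrees of freedom rather than against a rule-of-thumb cutoff.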