Chapter 18 Flashcards

1
Q

What is a questionnaire?

A

A tool for systematically gathering information from study participants.

2
Q

A good questionnaire is:

A

Carefully crafted for a specific purpose.

3
Q

Questionnaire design usually works best when:

A

1) It starts with identifying the general and specific content to be covered by the survey instrument.
2) It progresses to choosing the types of questions and answers for each topic to be assessed.

4
Q

The questions within each section and the sections themselves should be _____. The formatting of the document should be _____. The survey instrument should be _____.

A

In logical order; visually appealing and easy to read; pre-tested and revised as necessary.

5
Q

What are the nine steps of a questionnaire design plan?

A

1) Identify general question categories
2) Select specific question topics
3) Choose question and answer types
4) Check wording
5) Choose order
6) Format layout
7) Pre-test
8) Revise
9) Use


6
Q

The first step in designing a questionnaire is:

A

To list the topics that the survey instrument must cover (the exposure, disease, and population areas that are the focus of the study question).

7
Q

What are potential confounders?

A

Factors that might influence the relationships between key exposures and outcomes (the questionnaire must include them).

8
Q

What are some question areas?

A

1) Demographics
2) Key exposures
3) Key diseases/outcomes
4) Related exposures and outcomes

9
Q

The questionnaire must include questions confirming what?

A

That participants meet the eligibility criteria

10
Q

The questionnaire must be able to accurately place participants into:

A

Key categories. (A cohort study may ask about exposure and disease status; a case-control study may ask questions confirming the case and control definitions.)

11
Q

A survey that is too short will _____. A survey that is too long may _____.

A

Miss potentially crucial information; yield a low response rate.

12
Q

Decisions about the types of questions to ask must include considerations for what?

A

Statistical tests the researcher wants to be able to run on the collected data

13
Q

What are closed-ended questions?

A

Questions that allow a limited number of possible answers.

14
Q

Give one pro and one con for closed-ended questions.

A

Pro: They are easier to statistically analyze

Con: They may force respondents to select answers that do not truly express their status or opinions.

15
Q

Give one pro and one con for open-ended questions (Free response).

A

Pro: Allow participants to explain their selections and qualify their responses, give multiple answers, and provide responses not anticipated by the researchers

Con: Take longer to ask and answer, and they may result in irrelevant answers; recoding answers into objective and meaningful categories for statistical analysis is time consuming and imprecise.

16
Q

When are open-ended questions most useful?

A

When they are used to capture initial impressions or to clarify responses to closed-ended questions.

17
Q

What are the three formats that closed-ended questions come in?

A

1) Date and time variables
2) Numeric variables
3) Categorical variables

18
Q

What are the four formats of categorical questions?

A

1) Dichotomous (yes/no)
2) Dozens of possible answers
3) Ordinal (ranked)
4) Nominal (unordered)

19
Q

What does anonymity do?

A

Protects participants and allows them to provide honest answers to sensitive questions.

20
Q

Questions must be framed in a way that

A

Protects the respondent's identity.

21
Q

For numeric responses the question should state:

A

How specific the answers should be

22
Q

For categorical question responses:

A

The response categories should be listed and should include an “other“ category.

23
Q

For ranked question responses, decisions must be made about:

A

How many entries to include on the scale and whether there will be a neutral option. (Scales with a neutral option use five or seven categories; scales without a neutral option use four or six.) {Likert scales/items}

24
Q

For a self-report survey, a decision must be made about whether to add a category for:

A

“Not applicable,” “I don’t know,” or “No answer.”

25
Q

Anyone who skips an eligibility question when completing the survey form will be deemed:

A

Ineligible for inclusion in the analysis.

26
Q

If you neglect to add an “I don’t know” answer to a question that needs it, what kind of bias may occur?

A

Information bias, by forcing participants to provide an answer they assume the researcher wants to hear.

27
Q

What is habituation?

A

Habituation occurs when respondents have given the same answer to so many questions in a row that they continue to reply with the same response, even one that does not reflect their true perspectives, because that response has become routine.

28
Q

How should the questions be ordered?

A

First: An open-ended question to garner a first impression from participants
Second: A series of yes/no questions to clarify beliefs, perceptions, and practices
Third: A concluding, open-ended question to allow participants to express final impressions

29
Q

What problems should question wording avoid?

A

1) Big words/jargon
2) Undefined abbreviations
3) Ambiguous meanings
4) Vagueness
5) Double negatives
6) Faulty assumptions
7) 2-in-1
8) Impossible to recall accurately
9) Too much detail
10) Sensitive questions
11) Hypothetical questions
12) Leading questions
13) Leading answers
14) Answers with a poor scale
15) Lack of specificity
16) Missing answer options
17) Overlapping answer options

30
Q

The layout of the survey instrument will vary depending on:

A

The mechanism of data collection used

31
Q

What should the layout and formatting be for a written survey?

A

1) The answer sheet should clearly indicate where and how responses should be marked
2) Readable and large font
3) White space (blank areas) for separating sections and making the page visually appealing
4) Very clear instructions for skips should direct interviewers or respondents to jump over sets of non-applicable questions
5) A cover letter or list of instructions that tell respondents how to record their answers

32
Q

A reliable and valid questionnaire:

A

Measures what it was intended to measure in the population being assessed

33
Q

What is reliability (or precision)?

A

Demonstrated when consistent answers are given to similar questions and when an assessment yields the same outcome when repeated several times

34
Q

What is validity (accuracy)?

A

When the responses or measurements are shown to be correct.

35
Q

One aspect of reliability is ____.

A

Internal consistency

36
Q

What is internal consistency?

A

The stability of participants' responses (the same question is asked in different ways to ensure responses are the same).

37
Q

How can we confirm internal consistency in a data set?

A

By using tests of intercorrelation that assess whether two or more related items in a survey instrument measure various aspects of the same concept.

38
Q

Cronbach’s alpha and the Kuder-Richardson formula 20 (KR-20) are both measures of what?

A

Internal consistency among questionnaire items. (Expressed as a number between zero and one, with scores near one indicating minimal random error and high reliability.)
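Cronbach's alpha has a simple closed form: for k items, alpha = k/(k-1) × (1 − sum of item variances / variance of the total score). A minimal sketch in Python, using hypothetical Likert responses (the data are invented for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 5 participants to 4 related Likert items
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(scores), 2))  # near 1 => high internal consistency
```

Here the four items move together across respondents, so alpha is close to one, consistent with the interpretation on this card.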

39
Q

What is another facet of reliability?

A

Test-retest reliability.

40
Q

What is test-retest reliability?

A

Demonstrated when people who take a baseline assessment and then retake the test later have about the same scores each time they are tested.
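Test-retest reliability is commonly summarized with a correlation between the two administrations. A minimal sketch with hypothetical baseline and retest scores (invented numbers, for illustration only):

```python
import numpy as np

# Hypothetical baseline and retest scores for 6 pilot participants
baseline = np.array([12, 18, 15, 22, 9, 17])
retest   = np.array([13, 17, 15, 21, 10, 18])

# Pearson correlation between the two administrations;
# values near 1 suggest good test-retest reliability
r = np.corrcoef(baseline, retest)[0, 1]
print(round(r, 3))
```

Because each participant scores about the same at both time points, the correlation is close to one.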

41
Q

Some researchers use the word concept to describe what?

A

Theories informed by observations. (General abstraction)

42
Q

Some researchers use the word construct to describe what?

A

Theories informed by more complex abstractions. (Multidimensional)

43
Q

What is content validity (logical validity)?

A

When a set of survey items captures the most relevant information about the study domain.

44
Q

What does content validity require consideration of?

A

1) The technical quality of the survey items

2) Their representativeness of all the dimensions of the theoretical constructs being measured by the survey instrument

45
Q

What does the principal component analysis method provide?

A

Information about which items in an assessment tool might be redundant or unnecessary and can be removed.
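As a rough sketch of the idea (simulated data, not any particular study's procedure): principal component analysis of item responses shows how a dominant first component can flag items that are redundant with one another.

```python
import numpy as np

# Simulate 20 participants x 5 items: four items driven by one underlying
# trait (hence highly intercorrelated), plus one unrelated item
rng = np.random.default_rng(0)
base = rng.normal(size=(20, 1))
items = np.hstack(
    [base + 0.1 * rng.normal(size=(20, 1)) for _ in range(4)]
    + [rng.normal(size=(20, 1))]
)

# PCA via eigendecomposition of the item correlation matrix
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, largest first
explained = eigvals / eigvals.sum()           # proportion of variance per component
print(np.round(explained, 2))
```

The first component explains most of the variance, signaling that the four correlated items largely measure the same thing, so some could be removed.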

46
Q

What is face validity?

A

Present when content experts and users agree that a survey instrument will be easy for study participants to understand and correctly complete

47
Q

What is construct validity?

A

When a test measures the theoretical construct it is intended to assess.

48
Q

What is convergent validity?

A

Present when two indicators that should be related are shown to be correlated.

49
Q

What is discriminant validity?

A

When two indicators should not be related and are shown to not be associated

50
Q

What is criterion validity (concrete validity)?

A

Uses an established test as a standard (or criterion) for validating a new test that examines a similar theoretical construct.

51
Q

What are the two main approaches to examining criterion validity?

A

1) Concurrent validity: evaluated when participants in a pilot study complete both the existing and new tests and the correlation between the test results is calculated
2) Predictive validity: appraised when the new test is correlated with subsequent measures of performance in related domains.

52
Q

What is one way we can improve validity?

A

Including survey questions or modules that are identical to the ones used in previous research projects.

53
Q

Several widely used and validated tests are available to researchers, such as:

A

1) The Beck Depression Inventory and the General Health Questionnaire (GHQ): Assesses psychological status.
2) The Mini-Mental State Examination: Evaluates cognitive function
3) The SF-36 and SF-12: Measure health-related quality of life (HRQOL)

54
Q

What is back translation (or double translation)?

A

When one person translates the questionnaire from the original language to the new language, and then another person translates it back to the original language.

55
Q

A pilot test (or pretest) of the questionnaire is helpful for checking:

A

1) The wording and clarity of the questions
2) The order of the questions
3) The ability and willingness of participants to answer the questions
4) The responses given and whether the responses match the intended types of responses
5) The amount of time it takes to complete the survey

56
Q

The researcher should ask several volunteers to help with the pilot test. These volunteers should:

A

1) Be from the target population NOT the sample population.
2) Meet the eligibility criteria
3) Provide feedback

57
Q

Participation rate will be higher if

A

1) Recruits understand the importance and value of the research project
2) You provide multiple invitations and opportunities to participate
3) You make participation as easy as possible
4) Incentives are included (must be approved by ethics committee)