Week 6 Flashcards

1
Q

what's validity

A

The degree to which the items of a questionnaire measure the attributes, traits or concepts that they purport to measure.

2
Q

what's reliability

A

How consistently the questionnaire items measure what they claim to measure.

3
Q

levels of measurement (quantitative data)

A
  • Nominal
  • Ordinal
  • Interval (also called continuous)
  • Ratio (also called continuous)
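A minimal sketch (in Python, with hypothetical variable names not taken from the lecture) of how survey variables might be tagged by level of measurement:

    # Hypothetical survey variables mapped to their level of measurement.
    levels = {
        "gender":            "nominal",   # categories with no inherent order
        "pain_rating_1to5":  "ordinal",   # ordered categories, unequal gaps
        "temperature_c":     "interval",  # equal gaps, no true zero (continuous)
        "age_years":         "ratio",     # equal gaps and a true zero (continuous)
    }

    for variable, level in levels.items():
        print(f"{variable}: {level}")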
4
Q

approaches to data collection

A
  • Usually will attempt to collect the same data from all participants
  • Will try to ask the same question of all the participants
  • Will try to ensure the question is understood in the same way by all participants
  • Will try to minimise the amount of missing data, since statistical analyses are more rigorous when there are few or no missing data points (see the sketch after this list)
  • Try to ensure all questions are answered and try to obtain as high a response rate as possible
  • Quantitative data collection that deals with behaviours, thoughts, opinions, attitudes or emotions primarily uses structured methods of collecting data, such as observational rating scales, self-report questionnaires and standardised tests
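As a rough illustration of the missing-data point above (hypothetical pandas DataFrame and question names, not from the lecture), the amount of missing data per question could be checked like this:

    import pandas as pd

    # Hypothetical responses; None marks a skipped question.
    responses = pd.DataFrame({
        "q1": [4, 5, None, 3],
        "q2": [2, None, None, 4],
        "q3": [5, 4, 3, 2],
    })

    # Count missing answers per question; a high count flags a problem item.
    print(responses.isna().sum())

    # Proportion of complete cases (respondents with no missing answers).
    print(responses.dropna().shape[0] / responses.shape[0])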
5
Q

quantitative data collection approaches

A
  1. Standardised tests and performance measures
  2. Contextualised/environmental assessment
  3. Focus groups
  4. Biometric measures
  5. Document/record review
6
Q

self-report surveys

A
  • Self-report measures are typically self-administered and responses are provided in writing.
  • Participants are asked to self-reflect on their experience, opinions, thoughts, ideas, attitudes, or needs and then select the best option from a finite number of response categories (a closed type of question)
7
Q

self-report scales

A
  • Self-rating scales are used to capture information about such constructs as personality characteristics, attitudes, behaviour patterns & emotions.
  • They can also ask an individual to evaluate themselves directly in terms of their performance.
  • The most common is the Likert scale, e.g.:
    1. Strongly disagree 2. Disagree 3. No opinion 4. Agree 5. Strongly agree
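As a small illustration (hypothetical responses and pandas usage, not part of the lecture), Likert responses are usually converted to numeric codes like those above before analysis:

    import pandas as pd

    # Hypothetical Likert responses to a single questionnaire item.
    answers = pd.Series([
        "Agree", "Strongly agree", "No opinion", "Disagree", "Agree",
    ])

    # Map the five response categories to the 1-5 codes shown above.
    likert_codes = {
        "Strongly disagree": 1,
        "Disagree": 2,
        "No opinion": 3,
        "Agree": 4,
        "Strongly agree": 5,
    }

    scores = answers.map(likert_codes)
    print(scores.tolist())   # [4, 5, 3, 2, 4]
    print(scores.mean())     # item mean across respondents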
8
Q

semantic differential scales

A
  • Semantic differential scales ask respondents to rate a given concept on a series of bipolar adjectives that are used to characterise one’s feelings, attitudes, or reactions
  • Examples: good versus bad; dull versus exciting
9
Q

visual analogue scales

A
  • VAS uses a straight line with labels to anchor each end.
  • Participants are asked to mark the point on the line that corresponds most closely to their experience, opinion, belief or interpretation.
  • A VAS typically uses a line that is 100 mm in length so that scoring can be accomplished with a standard ruler (preferably one that is clear plastic).
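A minimal sketch of the scoring arithmetic implied above (the function name and inputs are hypothetical):

    def vas_score(mark_mm: float, line_length_mm: float = 100.0) -> float:
        """Convert a respondent's mark on a VAS line to a 0-100 score.

        mark_mm is the distance, measured with a ruler, from the left
        anchor to the respondent's mark; on the conventional 100 mm line
        the measured distance is already the score.
        """
        if not 0 <= mark_mm <= line_length_mm:
            raise ValueError("mark must lie on the line")
        return mark_mm / line_length_mm * 100.0

    print(vas_score(63.0))          # 63.0 on a standard 100 mm line
    print(vas_score(31.5, 50.0))    # rescaled if a non-standard line were used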
10
Q

Unstandardised questionnaires

A
  • Have no established reliability or validity
  • Are typically home-made, investigator-generated instruments
  • Often there are no existing measures available to measure the variables of interest; therefore, a new measure must be generated
  • Benefits include their potential to provide rough preliminary information about a wide range of variables
  • Are usually limited to descriptive studies that are preliminary in nature & aim to report general information about a single sample of participants
11
Q

what's a pretest

A
  • Pretests represent an initial evaluation of one or more aspects of a research design; in this case, a survey questionnaire.
  • In the survey research context, this means the administration of a draft of a questionnaire to a group of participants; getting feedback on the questionnaire; and then revising the questionnaire based on the feedback recommendations from participants.
12
Q

participants for pretesting

A

o Participants should be reasonably appropriate respondents for the questions under consideration
o If a study is aimed at a particular population, then members of that same population should ideally serve as pretest subjects.
o Example: if the job stress of teachers in rural publicly funded schools is the research topic, then only teachers from regions classified as rural would be recruited.

13
Q

sampling for a pretest

A

o In rigorous pretesting of a survey instrument, little attention is given to sample representativeness; instead, an attempt should be made to achieve the broadest range of respondent types possible.
o This is done to ensure the survey instrument will make sense and be useful in understanding all types of respondents in the population especially when it uses a self-report format.
o Goal of pretesting is to improve the research instrument rather than to provide descriptions of the population

14
Q

summary of pretesting

A
  • Multi-stage process
  • Cumulative process
  • Need to consider sampling in pretest phase as well
  • Ensures higher quality questionnaire
  • Ensures greater reliability of participants’ responses
15
Q

pretesting summary

A
  1. Conducting pretests of various aspects of an individual study design & analysis is extremely important
  2. Extensive pretests of each aspect are in order
  3. One should continually be on the alert for the implications of the pretesting of one aspect for others
16
Q

what's a pilot test

A
  • Best method of ensuring valid interrelationships is to conduct a pilot study
  • A pilot study is a miniaturised walk-through of the entire study from sampling to reporting
  • Pilot study should involve the administration of a working version of the instrument which is as identical as possible to the one intended for the final survey
17
Q

sampling for a pilot study

A
  • Unlike a pretest, a pilot study should be aimed at a representative sample of the target population
  • The pilot study sample should be selected in exactly the same way as intended for the final survey
18
Q

pilot study data collection and data processing

A
  • Data collection and data processing should be a “miniature-walk-through” of the final survey design
  • To the extent that the research instruments are similar, they should be administered exactly as you intend for the final survey
  • The completed pilot-tested questionnaires should be coded, and the data entered, transferred, cleaned, and analysed exactly as planned for the final research.
19
Q

summary of pilot study

A
  • A pilot study allows the examination of the procedures used when completing a survey; it is basically a dry run of the larger survey, mimicking it on a smaller scale
  • It helps to identify any potential challenges in the execution of a large-scale survey
20
Q

failure to answer

A
  • Typically every respondent skips some questions, and every question is skipped by someone
  • When a given question produces a number of NO ANSWERS, it is a clue that there are problems in the survey design
21
Q

multiple answers

A
  • When respondents are asked to select only one answer from a list of alternatives, some will persist in selecting more than one
  • If one question produces a number of multiple answers, you should suspect that either your answer categories are not mutually exclusive or the question is being misunderstood by respondents
  • The solution to this problem varies with the type of multiple responses
  • If the same two categories are frequently chosen together, perhaps they could be revised to become more distinct or alternatively combined into one answer
22
Q

other answers

A
  • It is frequently appropriate in closed-ended questions to offer the respondent the alternative of volunteering his or her own answer
  • When this option is offered, having a large number of “other” responses indicates that the answer categories provided are not sufficiently exhaustive
  • If the “other” answers fit conceptually with the survey, they should potentially be added to the answer category list
  • You should not add answers to the response list if doing so conflicts with the objectives of the study or particular question
23
Q

qualified answers

A
  • Respondents will often qualify answers with additional comments and input
  • Qualified answers point to a lack of clarity in the questions and warrant a revision of the question if the lack of clarity could result in a different understanding of the question
24
Q

direct comments

A
  • Respondents often point directly to problems in question wording or format such as “this is a lousy question” or “this question is very unclear”
  • You should be alert for questions that generate more than their share of comments
25
Q

variance in responses

A
  • One concern in evaluating survey instruments is the distribution of answers evoked by each question
  • Are the responses evenly distributed among several answer categories or did most respondents select the same answers?
  • In social research, one is often interested in the relationships between variables
  • If no variation exists in responses elicited by a given question, you cannot explain the answers…you cannot explain differences that do not appear in your data
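As a rough illustration (hypothetical data, not from the lecture), the distribution of answers to an item can be inspected to spot questions with little or no variation:

    import pandas as pd

    # Hypothetical 1-5 responses to one questionnaire item.
    item = pd.Series([4, 4, 5, 4, 4, 4, 5, 4])

    # How are the answers spread across the response categories?
    print(item.value_counts().sort_index())

    # Near-zero variance means the item cannot help explain differences
    # between respondents.
    print("variance:", item.var())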
26
Q

relationship between survey items

A
  • Survey items can also be evaluated through examination of their relationships with other items
  • Typically all or most items designed to measure a single variable will be empirically related to one another, but the strength of the relationships between items will vary
  • If one item is very weakly related to the others, you might conclude that it does not really measure the variable/construct and then drop/delete it from the questionnaire
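A minimal sketch (hypothetical items and responses) of checking how strongly each item relates to the rest of the scale:

    import pandas as pd

    # Hypothetical 1-5 responses to four items meant to measure one construct.
    items = pd.DataFrame({
        "item1": [1, 2, 4, 5, 4, 2],
        "item2": [2, 2, 5, 4, 4, 1],
        "item3": [1, 3, 4, 5, 5, 2],
        "item4": [3, 5, 1, 2, 3, 4],   # poorly related to the others; a candidate to drop
    })

    # Correlate each item with the total of the remaining items
    # (a simple corrected item-total correlation).
    for col in items.columns:
        rest_total = items.drop(columns=col).sum(axis=1)
        print(f"{col}: r = {items[col].corr(rest_total):.2f}")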
27
Q

what does a high correlation mean

A

A very high correlation between two items might suggest that including both items is not necessary. Deleting one or more redundant items would save space and place fewer demands on the respondents.
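A small sketch of spotting redundant pairs through a very high inter-item correlation (the 0.9 threshold and item names are illustrative assumptions, not a fixed rule):

    import pandas as pd

    # Hypothetical responses; item_a and item_b are nearly identical.
    items = pd.DataFrame({
        "item_a": [1, 2, 3, 4, 5, 4],
        "item_b": [1, 2, 3, 4, 5, 5],
        "item_c": [5, 3, 1, 2, 4, 3],
    })

    corr = items.corr()
    threshold = 0.9   # illustrative cut-off only

    # Flag pairs correlated so highly that keeping both may be unnecessary.
    for i, first in enumerate(corr.columns):
        for second in corr.columns[i + 1:]:
            if corr.loc[first, second] > threshold:
                print(f"{first} / {second}: r = {corr.loc[first, second]:.2f}")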

28
Q

what are the four steps of coding or data reduction (BOOK)

A
  1. Designing the code
  2. Coding (the process of turning responses into standard categories)
  3. Data entry
  4. Data cleaning (doing a final check on the data file for accuracy and completeness)
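A minimal sketch of those four steps in code (hypothetical categories and responses; the book describes the process in general terms):

    import pandas as pd

    # 1. Designing the code: choose the standard categories and numeric codes.
    code_book = {"yes": 1, "no": 0, "unsure": 9}

    # 2. Coding: turn raw written responses into those standard categories.
    raw_answers = ["Yes", "no", "unsure", "YES", ""]
    coded = [code_book.get(answer.strip().lower()) for answer in raw_answers]

    # 3. Data entry: place the coded values into the data file.
    data = pd.DataFrame({"q1": coded})

    # 4. Data cleaning: a final check for accuracy and completeness.
    valid = data["q1"].dropna().isin(list(code_book.values()))
    print("all codes valid:", valid.all())
    print("missing answers:", int(data["q1"].isna().sum()))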