Chapter 6 Textbook Flashcards
How could you measure loyalty? (6 steps)
Step 1: Identify the concept of interest
Step 2: Develop a construct
Step 3: Define the concept constitutively
Step 4: Define the concept operationally
Step 5: Develop a measurement scale
Step 6: Evaluate the reliability and validity of the measurement scale
Step 1: Identify the concept of interest
A concept is an abstract idea generalized from particular facts. It is a category of thought used to group sense data together “as if they were all the same.”
Step 2: Develop a construct
Constructs are specific types of concepts that exist at higher levels of abstraction than do everyday concepts.
They are not directly observable; they must be inferred through indirect methods.
Step 3: Define the concept constitutively
A constitutive (or theoretical, or conceptual) definition is a statement of the meaning of the central idea or concept under study, establishing its boundaries.
Step 4: Define the concept operationally
Operational definition: A statement of precisely which observable characteristics will be measured and the process for assigning a value to the concept.
In other words, it assigns meaning to a construct in terms of the operations necessary to measure it in any concrete situation.
Step 5: Develop a measurement scale
A scale is a set of symbols or numbers so constructed that the symbols or numbers can be assigned by a rule to the individuals (or their behaviours or attitudes) to whom the scale is applied
Nominal level: Scales that partition data into mutually exclusive and collectively exhaustive categories. The term nominal means “name-like,” indicating that the numbers assigned to objects or phenomena are naming or classifying them but have no true number value; that is, the numbers cannot be ordered, added, or divided.
Ordinal level: Scales that maintain the labelling characteristic of nominal scales and have the ability to order data.
Interval level: Scales that have the characteristics of ordinal scales, plus equal intervals between points to show relative amounts; they may include an arbitrary zero point.
Ratio level: Scales that have the characteristics of interval scales, plus a meaningful zero point so that magnitudes can be compared arithmetically.
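A minimal Python sketch (made-up data, not from the textbook) of which statistics are meaningful at each level of measurement:

```python
# Made-up survey data illustrating the four levels of measurement.
from statistics import mean, median, mode

brands = ["CoverGirl", "Maybelline", "CoverGirl"]   # nominal: labels only
satisfaction_rank = [1, 3, 2]                       # ordinal: order, not distance
temperature_c = [18.0, 21.5, 25.0]                  # interval: arbitrary zero point
monthly_spend = [40.0, 0.0, 80.0]                   # ratio: meaningful zero

print(mode(brands))                          # nominal -> counting/classifying only
print(median(satisfaction_rank))             # ordinal -> medians and percentiles
print(mean(temperature_c))                   # interval -> means; "twice as hot" is meaningless
print(monthly_spend[2] / monthly_spend[0])   # ratio -> magnitudes compare: 2x the spend
```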
Step 6: Evaluate the reliability and validity of the measurement scale
An ideal marketing research study would provide information that is accurate, precise, lucid, and timely.
M = A + E, where:
M = the measurement
A = accuracy
E = errors → can be systematic or random. Systematic error results in a constant bias in the measurements, caused by faults in the measurement instrument or process. Random error also influences the measurements but not systematically. Thus, random error is transient in nature.
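A minimal sketch (assumed numbers, not from the textbook) of M = A + E, showing that random error averages out over repeated measurements while systematic error leaves a constant bias:

```python
# Simulate repeated measurements: true score + constant bias + random noise.
import random

true_score = 80.0        # A: the respondent's true level of the attribute
systematic_bias = 5.0    # systematic E: e.g. a leading question inflates every score
random.seed(1)

measurements = [
    true_score + systematic_bias + random.gauss(0, 3)  # random E: transient noise
    for _ in range(1000)
]

# The random error roughly cancels out, but the systematic bias remains.
print(round(sum(measurements) / len(measurements), 1))  # ~85.0, not 80.0
```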
Reliability
A measurement scale that provides consistent results over time is reliable
Thus, reliability is the degree to which measures are free from random error and, therefore, provide consistent data.
The less error there is, the more reliable the observation is, so a measurement that is free of error is a correct measure.
Test–retest reliability
Test–retest reliability is obtained by repeating the measurement with the same instrument, approximating the original conditions as closely as possible. The theory behind test–retest is that if random variations are present, they will be revealed by differences in the scores between the two tests.
Stability means that very few differences in scores are found between the first and second administrations of the test; the measuring instrument is said to be stable.
There are several problems with test–retest reliability. First, it may be very difficult to locate and gain the cooperation of respondents for a second testing. Second, the first measurement may alter a person’s response on the second measurement. Third, environmental or personal factors may change, causing the second measurement to change.
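A minimal sketch (made-up scores) of how test–retest reliability is checked in practice: administer the same scale twice and correlate the two sets of scores; a correlation near 1.0 indicates stability.

```python
# Correlate scores from two administrations of the same instrument.
from statistics import correlation  # available in Python 3.10+

scores_time1 = [4, 5, 3, 2, 5, 4, 1, 3]   # first administration
scores_time2 = [4, 5, 3, 3, 5, 4, 2, 3]   # same respondents, two weeks later

print(round(correlation(scores_time1, scores_time2), 2))  # high -> stable measure
```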
Equivalent form reliability
The ability of two very similar forms of an instrument to produce closely correlated results.
There are two problems with equivalent forms that should be noted. First, it is very difficult, and perhaps impossible, to create two totally equivalent forms. Second, if equivalence can be achieved, it may not be worth the time, trouble, and expense involved.
Internal consistency reliability
Internal consistency reliability assesses the ability to produce similar results when different samples are used to measure a phenomenon during the same time period
Split-half technique: A method of assessing the reliability of a scale by dividing the total set of measurement items in half and correlating the results.
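A minimal sketch (made-up item responses) of the split-half technique: the items are divided into two halves, each half is scored per respondent, and the half-scores are correlated.

```python
# Split a six-item scale into halves and correlate the half-scores.
from statistics import correlation  # available in Python 3.10+

# Rows = respondents, columns = six items scored 1-5 (made-up data).
responses = [
    [5, 4, 5, 4, 5, 4],
    [2, 3, 2, 2, 3, 2],
    [4, 4, 5, 4, 4, 5],
    [1, 2, 1, 2, 1, 1],
]

half_a = [sum(r[0::2]) for r in responses]  # odd-numbered items
half_b = [sum(r[1::2]) for r in responses]  # even-numbered items

print(round(correlation(half_a, half_b), 2))  # near 1.0 -> internally consistent
```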
Validity
The degree to which what the researcher was trying to measure was actually measured.
A scale or other measuring device is basically worthless to a researcher if it lacks validity because it is not measuring what it is supposed to.
Face validity
The degree to which a measurement seems to measure what it is supposed to measure.
It is the weakest form of validity.
Content validity
Representativeness, or sampling adequacy, of the content of the measurement instrument.
In other words, does the scale provide adequate coverage of the topic under study?
Assessing content validity is largely a matter of judgment.
Criterion-related validity
Criterion-related validity examines the ability of a measuring instrument to predict a variable that is designated a criterion.
Two subcategories: predictive and concurrent.
Predictive: The degree to which a future level of a criterion variable can be forecast by a current measurement scale.
Concurrent: The degree to which another variable, measured at the same point in time as the variable of interest, can be predicted by the measurement instrument.
Construct validity
The degree to which a measurement instrument represents and logically connects, via the underlying theory, the observed phenomenon to the construct.
Two statistical measures of construct validity: convergent and discriminant.
Convergent: The degree of correlation among different measurement instruments that purport to measure the same construct.
Discriminant: A measure of the lack of association among constructs that are supposed to be different.
Convergent validity checks that different tests for the same thing agree.
Discriminant validity checks that tests for different things don’t overlap.
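A minimal sketch (made-up scores and hypothetical scale names) of both checks: two instruments for the same construct should correlate highly (convergent), while a scale for a different construct should show little association (discriminant).

```python
# Compare correlations between same-construct and different-construct scales.
from statistics import correlation  # available in Python 3.10+

loyalty_scale_a   = [5, 4, 2, 5, 3, 1]
loyalty_scale_b   = [5, 5, 2, 4, 3, 1]   # a second instrument for the same construct
price_sensitivity = [3, 1, 2, 4, 5, 3]   # a different construct entirely

print(round(correlation(loyalty_scale_a, loyalty_scale_b), 2))    # high -> convergent validity
print(round(correlation(loyalty_scale_a, price_sensitivity), 2))  # near zero -> discriminant validity
```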
Scaling
Procedures for assigning numbers (or other symbols) to properties of an object in order to impart some numerical characteristics to the properties in question.
Scales are either unidimensional or multidimensional.
Unidimensional:
Scales designed to measure only one attribute of a concept, respondent, or object.
Multidimensional:
Scales designed to measure several dimensions of a concept, respondent, or object.
Rank-order scales
Rank-order scales, on the other hand, are comparative scales because the respondent is asked to compare two or more items and rank each item. Rank-order scales are widely used in marketing research for several reasons. They are easy to use and give ordinal measurements of the items evaluated.
Rank-order scales have disadvantages: If all of the alternatives in a respondent’s choice set are not included, the results could be misleading. For example, a respondent’s first choice on all dimensions in the eye shadow study might have been Maybelline, which was not included. A second problem is that the concept being ranked may be completely outside a person’s choice set, thus producing meaningless data. Perhaps a respondent doesn’t use eye shadow and feels that the product isn’t appropriate for any woman. Another limitation is that the scale gives the researcher only ordinal data. Nothing is learned about how far apart the items are in the respondent’s preferences.
If not all possible options are included, the rankings can be misleading.
Sometimes the concept being ranked might not even apply to a respondent, leading to invalid or meaningless data.
Rank-order scales only give an ordinal measurement, which means we know the order of preferences but not how much more one item is preferred over another
Q-sorting
A measurement scale employing a sophisticated form of rank ordering using card sorts
Paired comparisons
Measurement scales that ask the respondent to pick one of two objects in a set, based on some stated criteria.
Paired comparisons overcome several problems of traditional rank-order scales. First, it is easier for people to select one item from a set of two than to rank a large set of data. Second, the problem of order bias is overcome; there is no pattern in the ordering of items or questions to create a source of bias
On the negative side, because all possible pairs are evaluated, the number of paired comparisons increases geometrically as the number of objects to be evaluated increases arithmetically. Thus, the number of objects to be evaluated should remain fairly small to prevent interviewee fatigue.
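A quick sketch of the arithmetic behind that warning: evaluating every possible pair of n objects requires n(n-1)/2 comparisons, which grows much faster than n itself.

```python
# Count how many paired comparisons a set of n objects requires.
from itertools import combinations

for n in (4, 6, 10, 15):
    pairs = list(combinations(range(n), 2))
    print(n, "objects ->", len(pairs), "paired comparisons")  # 6, 15, 45, 105
```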
Constant sum scales
Measurement scales that ask the respondent to divide a given number of points, typically 100, among two or more attributes, based on their importance to him or her.
A major disadvantage of this scale is that the respondent may have difficulty allocating the points to total 100 if there are a lot of characteristics or items. Most researchers feel that 10 items is the upper limit on a constant sum scale.
Semantic differential scales
Measurement scales that examine the strengths and weaknesses of a concept by having the respondent rank it between dichotomous pairs of words or phrases that could be used to describe it; the means of the responses are then plotted as a profile or image
The semantic differential is a quick and efficient means of examining the strengths and weaknesses of a product’s or company’s image versus those of the competition.
Disadvantages: it suffers from a lack of standardization; the number of divisions on the scale can be an issue; it is prone to the halo effect; and there is an occasional lack of universally accepted bipolar adjectives.
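A minimal sketch (made-up ratings and hypothetical adjective pairs) of how a semantic differential profile is built: the mean rating for each bipolar pair is computed, and the resulting profiles for two brands can be plotted and compared.

```python
# Compute semantic differential profiles (item means) for two brands.
from statistics import mean

adjective_pairs = ["unreliable-reliable", "old-fashioned-modern", "weak-powerful"]

# Rows = respondents, columns = the three items above, rated 1 (left pole) to 7 (right pole).
our_brand  = [[6, 5, 6], [7, 6, 5], [6, 6, 6]]
competitor = [[4, 6, 3], [5, 7, 4], [4, 6, 3]]

for label, ratings in (("Our brand", our_brand), ("Competitor", competitor)):
    profile = [round(mean(col), 1) for col in zip(*ratings)]
    print(label, dict(zip(adjective_pairs, profile)))
```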