WEEK 3 - L6.2 rd - Measurement Flashcards

1
Q

purpose of measurement

A

to link theories to the real world
connecting the two levels: turning abstract concepts into indicator variables

2
Q

concept is..

A

a construct derived by mutual agreement from mental images that summarizes collections of related experiences/observations
e.g. defining democracy
should cover multiple attributes/indicators

3
Q

constructs and indicators: what does “avoid reification of concepts” mean?

A

concepts such as democracy are still abstract even if we identify some common attributes

4
Q

conceptual goodness according to Gerring (8)

A

**1. familiarity:** established usage
**2. resonance:** does it have a cognitive “click”?
**3. parsimony:** as simple as possible
**4. coherence:** internal consistency (are the different attributes related to one another?)
**5. differentiation:** external differentiation/boundedness
**6. depth:** ability to bundle many different characteristics/attributes
**7. theoretical utility:** is it useful for theory building?
**8. field utility:** can it capture new entities and allow reconceptualization without losing meaning and becoming empty?

5
Q

measurement examples (2)

A
  1. simple concepts (unidimensional): age
  2. complex concepts (multidimensional): corruption, democracy, prejudice
6
Q

what is corruption as a concept?

A

the misuse of position for private gain

7
Q

what are the indicators of corruption?

A

perception of corruption by business people, experience of corruption by the public, prosecution of public officials

8
Q

what are the observations of corruption?

A

expert survey, public survey, court records

9
Q

what are the steps to conceptualize multidimensional concepts?

A
  1. concept
  2. indicator
  3. observation
10
Q

measures should be.. (2)

A

1. unbiased: free of systematic errors = accuracy = validity
2. efficient: low variance and random errors = precision = reliability
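The unbiased/efficient distinction can be illustrated with a small simulation (a sketch in Python; the true score, bias, and noise values are all hypothetical):

```python
import random
import statistics

random.seed(42)
TRUE_SCORE = 50.0  # the quantity we are trying to measure

def measure(n, bias, noise_sd):
    # Each measurement = true score + systematic error (bias) + random error.
    return [TRUE_SCORE + bias + random.gauss(0, noise_sd) for _ in range(n)]

# Unbiased but imprecise: accurate on average (valid) yet unreliable.
valid_but_noisy = measure(1000, bias=0.0, noise_sd=5.0)
# Precise but biased: consistent (reliable) yet systematically off (invalid).
reliable_but_biased = measure(1000, bias=10.0, noise_sd=0.5)

print(round(statistics.fmean(valid_but_noisy), 1))      # close to 50: accurate
print(round(statistics.fmean(reliable_but_biased), 1))  # close to 60: biased
```

The mean of the noisy measure lands near the true score (unbiased), while the biased measure clusters tightly around the wrong value (efficient but invalid).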

11
Q

unbiased means…

A

high validity

12
Q

efficient means…

A

high reliability

13
Q

being high on reliability means…

A

the dots are very close to each other; we always get the same result (we can rely on it), but whether the results are valid we don’t know

14
Q

being high on validity means…

A

although the dots are not close to each other, they are close to the actual reality. we can’t rely on them being similar, but we can say that the results are valid.

14
Q

different types of validity and reliability

A

**1. research design**
* internal validity: causal inferences
* external validity: generalizability

**2. measurement**
* measurement reliability: consistency and precision
* measurement validity: accuracy

15
Q

“measurement reliability and validity are necessary requirements for internal and external validity”

A

correct!

16
Q

measurement error

A

observed measurement = true score + error score
error score = random error + systematic error
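The decomposition can be checked with simple arithmetic (all numbers below are hypothetical):

```python
# observed measurement = true score + error score
# error score = random error + systematic error
true_score = 70.0
systematic_error = 5.0   # e.g. a miscalibrated instrument: always off by +5
random_error = -1.3      # fluctuates from one measurement to the next

error_score = random_error + systematic_error
observed = true_score + error_score
print(round(observed, 1))  # 73.7
```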

17
Q

types of measurement error, which is always wrong?

A
  1. random error (decreases reliability)
  2. systematic error (always wrong)
18
Q

how can measurement reliability be assessed? (3)

types of measurement reliability

A
  1. test-retest reliability (stability over time)
  2. internal consistency (across different indicators)
  3. intercoder/rater reliability (consistency across researchers)
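Test-retest reliability, for example, is typically assessed by correlating the same respondents’ scores at two time points. A minimal sketch (the scores below are hypothetical):

```python
import statistics

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length score lists.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The same 8 respondents measured twice with the same instrument.
time1 = [10, 12, 9, 15, 11, 14, 8, 13]
time2 = [11, 12, 10, 14, 11, 15, 9, 12]

print(round(pearson(time1, time2), 2))  # 0.94 -> stable over time
```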
19
Q

how can overall reliability be reported?

A

for internal consistency: a reliability coefficient
ranges from .00 to 1.00
.7 is the minimum, .8 is good

20
Q

what are examples of measurement reliability coefficients?

A

Cronbach’s alpha and the split-half method
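Cronbach’s alpha compares the sum of the item variances with the variance of the total score: alpha = k/(k−1) · (1 − Σ var(item) / var(total)). A sketch with hypothetical item scores:

```python
import statistics

def cronbach_alpha(items):
    # items: one score list per indicator, all over the same respondents.
    k = len(items)
    item_var_sum = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return k / (k - 1) * (1 - item_var_sum / statistics.variance(totals))

# 3 indicators of one concept, answered by 5 respondents.
items = [
    [3, 4, 3, 5, 2],
    [3, 5, 3, 4, 2],
    [4, 4, 3, 5, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.9 -> above the .8 "good" threshold
```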

21
Q

three ways of assessing measurement validity

more difficult because we don’t know the real score

A

**1. face validity:** judgement-based (not reported)
**2. content validity** (theory-based): does it cover all dimensions? deals with intention
**3. criterion/construct validity:**
a. *concurrent:* a correlated external criterion is available now
b. *predictive:* a correlated external criterion will be available in the future (e.g. the SAT)
c. *convergent:* correlates with existing measures of the same concept
d. *discriminant:* doesn’t overlap with theoretically different concepts; does your concept distinguish itself?
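Convergent and discriminant validity can both be checked with correlations: a new measure should correlate highly with an existing measure of the same concept and weakly with measures of theoretically unrelated concepts. A sketch with hypothetical scores:

```python
import statistics

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length score lists.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

new_measure = [1, 2, 3, 4, 5, 6]          # our new indicator
established_measure = [2, 2, 4, 4, 6, 5]  # existing measure, same concept
unrelated_measure = [5, 1, 4, 2, 6, 3]    # theoretically different concept

print(round(pearson(new_measure, established_measure), 2))  # 0.9  -> convergent
print(round(pearson(new_measure, unrelated_measure), 2))    # 0.09 -> discriminant
```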

22
Q

what are the types of criterion validity?

A
  1. concurrent/predictive
  2. convergent
  3. discriminant
23
Q

when the score is always the same, but it’s wrong

A

reliability: yes
validity: none

24
Q

when there’s some random error, but it’s close to the real score

A

reliability: yes with random error
validity: yes

25
Q

is reliability necessary for validity?

A

yes reliability is necessary for validity

26
Q

3 types of triangulation

A
  1. data triangulation- using different sources or measures
  2. investigator triangulation- different researcher
  3. methodological triangulation- using different methods
27
Q

3 possible outcomes of triangulation

A
  1. convergence- same results
  2. inconsistency- some differences
  3. contradiction- opposite results
28
Q

why should we use triangulation?

A

it helps to identify problems; it is a learning process: noticing inconsistencies allows us to develop better measures

29
Q

principles of data quality

A
  1. transparency
  2. replication- new data collection
  3. verification- re-analysis of existing data
30
Q

(…) is necessary for (…)

A

reliability is necessary for validity

31
Q

what’s face validity

A

the indicator intuitively seems like a good measure of the concept

32
Q

what’s content validity

A

the extent to which the indicator covers the full range of the concept, covering each of its different aspects.

33
Q

what’s construct validity

A

examines how well the measure conforms to our theoretical expectations

looks at to what extent it’s associated with theoretically relevant factors
has 3 types: concurrent/predictive, convergent, discriminant

34
Q

random error decreases…

A

reliability

35
Q

which error type is always wrong?

A

systematic error