T3 Slide W5 Flashcards Preview


Flashcards in T3 Slide W5 Deck (53)

The Measurement Process

  • What is the point of research if it can't be measured?
  • Measurement is the assignment of values to outcomes
  • How do we measure height?


Principles of measurement in Research - 3 ideas

  1. An outcome variable belongs to one of four levels of measurement (Nominal, Ordinal, Interval, and Ratio)
  2. The qualities of one level are also characteristic of the next level
    • e.g., ratio measures such as height also capture ordinal information
  3. The higher the level, the more precise the measurement process, and the closer you will be to the true outcome of interest


Levels of Measurement

  • The relationship between what is being measured and the numbers that represent what is being measured
  • Every variable must be operationally defined


Variables are Categorical or Continuous

  • Categorical
    • Names distinct entities
    • Simplest form is a binary variable; can only go in one of two categories, e.g. male vs female
  • Continuous
    • Can take on any value on the measurement scale, e.g. time on a stopwatch


Levels of measurement in order of complexity

  • Nominal
  • Ordinal
  • Interval
  • Ratio


Nominal Variable

  • Nomen = name (Latin)
  • Differ in quality rather than quantity
  • Characterises observations so that each can be placed in only one category, e.g. eye colour
  • May be given names or numbers, but these have no intrinsic value, such as with NRL jerseys
  • Most IVs (independent variables) are nominal


Ordinal Variable

  • Like nominal variables they permit classification, but also tell us the order in which things have occurred
  • Ordinal scales have no absolute zero point, e.g. horse-racing placings
  • Imply nothing about how much greater one ranking is than another


Interval Variable

  • Equal intervals on the scale represent equal differences in the value measured
    • e.g. temperature in degrees Celsius; the intervals are equal, but be careful interpreting values along the scale, because the zero point is arbitrary


Ratio Variables

  • Ratio = calculation (Latin)
  • Builds on interval, but also requires that the ratios of values are meaningful
  • Requires a true zero point, not an arbitrary one


Continuous variables are continuous or discrete

  • Continuous = any level of precision such as time
  • Discrete = certain defined values such as number of children in a family


Categorical - Distinct Category

  1. Nominal variable - more than two categories, with no logical order
  2. Ordinal variable - same as nominal, but with a logical order, e.g. fail, pass, credit, distinction, high distinction


Continuous - Distinct Score

  1. Interval variable - equal intervals represent equal differences
  2. Ratio variable - same as interval, but ratios of scores are meaningful, e.g. 50 kg is twice as heavy as 25 kg


Levels of Measurement and complexity

  • Nominal - categories
  • Ordinal - orders
  • Interval - meaningful distance
  • Ratio - absolute zero


Principles of measurement in research

  1. An outcome variable belongs to one of four levels: nominal, ordinal, interval, or ratio
  2. The qualities of one level are also characteristic of the next level, e.g. ratio measurements such as height also capture ordinal information
  3. The higher the level, the more precise the result, and the closer you will be to the true outcome of interest


Principles of Measurement in Research - Points to Ponder

  • More information increases the power and utility of your results
  • Sometimes you will be limited to what is available to you
  • Always define your variables in ways that maximise the use of your information
  • In the behavioural and social sciences most data are nominal or ordinal; test scores, however, yield interval-level data
  • How you choose to measure an outcome defines the level of measurement
  • Variables may not completely fit this rigid framework in the real world


Reliability and Validity

  • You're only as good as your tools
  • You can have a great research question but will not succeed if your tools are unreliable
  • The consistency and validity of a measurement tool are critical to good research
  • Faulty tools lead to errors in accepting or rejecting the null hypothesis



Measurement Error

  • When measuring, we assume there will be a discrepancy between the observed score and the true value of the measurement
  • Reliability decreases as error increases
  • Reliability = True Score / (True Score + Error)

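The reliability formula above can be illustrated with a quick sketch. The variance numbers are made up for illustration; the function simply implements Reliability = True Score / (True Score + Error):

```python
def reliability(true_variance: float, error_variance: float) -> float:
    """Classical test theory: reliability = true variance / (true + error)."""
    return true_variance / (true_variance + error_variance)

# With no error at all, the measure is perfectly reliable
print(reliability(10.0, 0.0))   # 1.0

# As error grows, reliability falls
print(reliability(10.0, 10.0))  # 0.5
print(reliability(10.0, 30.0))  # 0.25
```

This makes the flashcard's point concrete: holding the true score constant, every increase in error pushes reliability further below 1.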

Ways to increase measurement reliability

  • Increase number of items or observations
  • Eliminate ambiguity
  • Standardise conditions
  • Moderate difficulty
  • Minimise effects of external events
  • Standardise instructions and scoring


How to measure reliability

  • We use correlation: a measure of the relationship between things
  • We can calculate a number that provides a gauge of the direction and strength of the relationship
  • This is called the correlation coefficient


Correlation Coefficient

  • This is a measure of the direction and extent of the relationship between two sets of scores.
  • Range of a correlation coefficient is from -1 to +1


Pearson's r

  • Pearson's product moment correlation coefficient
  • This coefficient will provide a gauge of how similar scores on a test are from time 1 to time 2
  • This is one form of reliability

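The idea behind Pearson's r can be sketched in a few lines of Python. The `pearson_r` helper and the test-retest scores below are hypothetical, written only to illustrate the calculation:

```python
import math

def pearson_r(x, y):
    """Pearson's product-moment correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Sum of cross-products of deviations from each mean
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five people at time 1 and time 2
time1 = [10, 12, 14, 16, 18]
time2 = [11, 13, 13, 17, 19]
print(round(pearson_r(time1, time2), 3))  # 0.962
```

A value near +1, as here, is the kind of evidence of test-retest stability the next card describes; values near 0 would indicate an unreliable measure.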

Types of Reliability - Test-Retest

  • A measure of stability; how stable is a test over time?
  • Measuring the same individuals at two points in time


Operational Definition 

The operational definition of a variable is the specific way in which it is measured in that study


Name the different types of Reliability (4)

  • Test-Retest
  • Parallel Forms
  • Interrater
  • Internal Consistency


Types of Reliability - Parallel Forms

  • Different forms of the same test given to the same group of participants
  • You might see this used when practice effects are a concern



Types of Reliability - Interrater

  • Evidence of reliability when multiple raters agree in their observations of the same thing
  • Rater to Rater, rather than time to time

eg: observational research



Types of Reliability - Internal Consistency

  • Uses responses at only one time
  • Focusses on consistency of items

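The card doesn't name a specific statistic, but the most widely used internal-consistency coefficient is Cronbach's alpha. A minimal sketch, assuming that choice (the questionnaire `responses` are made up):

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha from single-occasion item responses.

    responses: one row per respondent; each row holds that person's
    score on every item, all collected at one time point.
    """
    k = len(responses[0])                         # number of items
    items = list(zip(*responses))                 # transpose to per-item columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 3-item questionnaire answered once by four people
responses = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 1],
]
print(round(cronbach_alpha(responses), 2))  # 0.99
```

Because every respondent answers the items consistently (high scorers are high on all items), the items hang together and alpha is close to 1.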


Measuring what we intend to . . .

  • Our measures should be reliable and valid
  • Validity refers to the results of the test
  • It is never all or nothing
  • Validity of the results is interpreted in the context where the test occurs
  • Are the results understood within the context of the purpose of the research?


Name the Types of Validity

  • Face Validity
  • Content Validity
  • Criterion-related Validity
  • Construct Validity