Lecture 10 Flashcards

1
Q

What are the 9 steps of a research proposal?

A
  1. introduction
  2. problem statement
  3. hypothesis
  4. literature review
  5. methods
  6. limitations
  7. significance
  8. references
  9. appendix items
2
Q

(blank) – The extent to which a test (or indicator/instrument) accurately measures what it is supposed to measure – 4 types

A

validity

3
Q

(blank) Validity
• Degree to which a measure ‘obviously’ involves the performance being measured
• Weakest type of validity
• The test “seems” to be valid
• No quantification about how well the test measures the dependent variable
– e.g., 50 m sprint used to assess running speed
• Taken at face value

A

face/logical

4
Q

(blank) Validity
• Degree to which an instrument accurately measures a theoretical construct or trait it was designed to measure
– e.g., depression, anxiety, intelligence
• Used when the dependent variable is difficult to measure and there is no established gold standard
• Often assessed by:
– Correlation
– Known group difference method
• Comparing test scores between groups that should differ

A

construct

5
Q

(blank) Validity
– Degree to which a measure/test is related to the criterion (gold standard)
– A method to establish the validity of a new test
– Both tests performed on the same sample at the same time (concurrently)
• Body fat: BIA vs. DXA
• CV Fitness: Step test vs. VO2max
• Physical & Mental Health: SF-8 vs. SF-36

A

concurrent

6
Q

(blank) Validity
– Degree to which scores on a predictor accurately predict the criterion (can compare to gold standard)
– A test is developed to predict a criterion measure
– Correlation between the test and criterion is used to determine validity
• Injury prediction: do scores on the Functional Movement Screen predict injury?

A

predictive

7
Q

Which 2 types of validity are content-related, and which 2 are criterion-related?

A

content-related:
face validity
construct validity

criterion-related:
concurrent validity
predictive validity

8
Q

(blank) – Measures the consistency or repeatability of test scores or data.
– Keep in mind that measures can be reliable but NOT valid… BUT measures can never be valid if they are not reliable

• Methods of establishing:
– Stability (test-retest)
– Alternate Forms (parallel)
– Internal Consistency
– Inter-rater
A

reliability

9
Q

(blank) Reliability
• Same test is administered on two separate occasions and the results are correlated
– Test-retest method
• Not good for tests where learning is a performance factor
• Good to evaluate the measurement skill of a laboratory device or technician

A

stability

10
Q

(blank) Forms
• Measures the correlation between two ‘equivalent’ versions of a test
– You use it when you have two different assessment tools or sets of questions designed to measure the same thing
• If there is a high correlation between the tests, they can be said to be consistent/reliable

A

alternate

11
Q

(blank) Consistency
• Used to show how consistent the scores of a test are within itself
– Correlation between multiple items in a test intended to measure the same construct
• The questions within themselves are consistent
• Split-Half Method
– A correlation is performed on the results of the two halves of one test. If they are highly correlated, the test has internal consistency
• Good for written tests
• Numerous physical performance trials

A

internal

12
Q

The (blank) method assesses the internal consistency of a test, such as psychometric tests and questionnaires. … This is done by comparing the results of one half of a test with the results from the other half. A test can be split in half in several ways, e.g. first half and second half, or by odd and even numbers.

A

split-half
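
A minimal sketch of the split-half idea in Python (the item scores, array sizes, and variable names below are made up for illustration, not taken from the lecture): each participant's items are split into odd- and even-numbered halves, the half-scores are summed, and the two halves are correlated.

```python
# Split-half reliability sketch (illustrative data, not from the lecture).
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

# Hypothetical item scores: one row per participant, one column per item.
scores = [
    [4, 5, 3, 4, 5, 4, 3, 4],
    [2, 3, 2, 3, 2, 2, 3, 2],
    [5, 4, 5, 5, 4, 5, 5, 4],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [1, 2, 1, 2, 2, 1, 2, 1],
]

# Split each participant's items into odd- and even-numbered halves and sum them.
odd_half = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]

# A high correlation between the two halves suggests internal consistency.
r_halves = correlation(odd_half, even_half)
print(f"split-half correlation: {r_halves:.2f}")
```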

13
Q

(blank) Reliability

• Test of the objectivity between testers

A

inter-rater

14
Q

• (blank) – Do the same thing twice, is it stable?
• (blank) – If you used another option, are scores related?
• (blank) – Within itself it is reliable
• (blank) – Multiple researchers making observations or ratings about the same topic

A

stability
alternate
internal consistency
inter-rater

15
Q

• When you step on a scale 1 minute later the number is the exact same (blank)

• When two people evaluate someone’s performance after the job interview, they rate on the same scale (blank)

• Whether you complete the Pittsburgh or Edinburgh depression scale, you get the same diagnosis (blank)

A

stability

inter-rater reliability

alternate

16
Q

(blank) correlation (r): relationship between two variables
• Coefficient values may range from -1 to +1
• where 0 indicates no relationship and ±1 a perfect relationship

A

Pearson
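
For reference, a standard way to write this coefficient (the formula is not on the card itself) is:

```latex
% Pearson product-moment correlation for paired observations (x_i, y_i), i = 1..n
r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}}
```

A positive r means higher scores on X go with higher scores on Y; a negative r means higher scores on X go with lower scores on Y (as in the next two cards).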

17
Q

Positive correlation
– (blank) number on variable X and Y
– E.g. Long jump: relationship between distance and power test is positive. Why?

18
Q

Negative correlation
– (blank) number on variable X
– (blank) number on variable Y
– E.g. Long jump: Relationship between jumping distance and running time is almost always negative. Why?

19
Q
(blank) Coefficients (R)
• Comparing two values for the same variable
• The two scores are correlated and the reliability coefficient is produced
• For example: R > .85 (or higher) for maximal physical effort tests and precise laboratory tests
A

reliability
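
A minimal test-retest sketch in Python, using hypothetical trial scores and the lecture's R > .85 rule of thumb; the data and variable names are illustrative only.

```python
# Test-retest (stability) reliability sketch with hypothetical trial data.
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

# Two maximal-effort trials for the same participants (illustrative numbers).
trial_1 = [310, 285, 342, 298, 365, 276]  # e.g., peak power, trial 1
trial_2 = [305, 290, 338, 301, 360, 280]  # same test repeated, trial 2

# Correlating the two sets of scores gives the reliability coefficient R.
R = correlation(trial_1, trial_2)
print(f"reliability coefficient R = {R:.2f}")

# Using the lecture's rule of thumb for maximal physical effort / lab tests:
print("acceptable" if R > 0.85 else "questionable")
```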

20
Q
Cronbach’s (blank) α: Internal Consistency
– Reliability of questionnaire items/scales
– α > .70 (acceptable)
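
A small sketch of one standard way to compute Cronbach's α in Python, using made-up questionnaire responses (the data and variable names are illustrative, not from the lecture):

```python
# Cronbach's alpha sketch for a short questionnaire (illustrative data).
from statistics import variance

# Hypothetical responses: one row per respondent, one column per item.
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
]

k = len(responses[0])                      # number of items
items = list(zip(*responses))              # columns = items
item_vars = [variance(item) for item in items]
total_var = variance([sum(row) for row in responses])

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")   # lecture's cut-off: > .70 acceptable
```
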
21
Q

(blank) is concerned with getting the right assessment and (blank) is getting the assessment right

A

validity

reliability

22
Q

4 sources of (blank):
– Participants
• Mood, motivation, health, fatigue, prior knowledge, familiarity with test
– Testing
• Standardization of all test activities for all participants
– Scoring
• Competence, experience, attention to detail of scorers (RAs)
– Instrumentation
• Maintenance and calibration

23
Q

While (blank) validity relates to how well a study is conducted, (blank) validity relates to how applicable the findings are to the real world

A

internal

external

24
Q

• (blank)
– Assigning numbers to various levels of a particular concept
– Provides an indirect measure of the concept of interest
– e.g. On a scale of 1 – 5 rank your mood
• Used to obtain information on almost any topic, object, or subject
– Attitude, opinion, behaviour, performance, perception

25
Q

(blank) Scale
• Measures degree of agreement or disagreement
• Can be considered Ordinal or Interval – every score has a meaning!
• 5 or 7-point Likert are most common
– Can have up to 9 points
• Provide wider choice of expression than yes/no

A

likert

26
Q

(blank) Differential Scale
• Measures attitudes and concepts
• Interval score or ordinal – not assigned a meaning or # to each score
• Uses bipolar adjectives describing a topic; usually along a 7-point scale

A

semantic

27
Q

(blank) Scale
• Numerical, verbal, checklist or ranking
• Items rated by selecting a point on the scale corresponding to their impression of the item

A

rating

28
Q

(blank) Order Scale
• Items ranked, usually in terms of preference or importance
• Ordinal scores
• Best for ranking 5 to 7 items
– Ranking larger numbers of items results in less accuracy

A

rank

29
Q

(blank) Errors
• Leniency – Overly generous rating
• Central tendency errors – Most ratings in middle of scale (i.e., avoiding low or high ratings)
• Halo effect – Previous impressions/knowledge influence ratings
• Proximity errors – Characteristics rated as more similar when they follow in close proximity
• Observer bias error – Rating influenced by personal bias
• Observer expectation error – Rating influenced by what you expect to see

A

rating

30
Q

What is (blank)?
• “the study of the distribution and determinants of health-related events or disease in specified populations, and the application of this study to the control of health problems”

A

epidemiology

31
Q

• Distribution
– Frequency
• Prevalence: # of existing cases (proportion)
– Tells us how much = burden of disease
• Incidence: # of new cases (rate)
– Tells us how fast something is spreading
• Mortality rate: death rate
– Patterns: Person, place, time
• Determinants
– Defined characteristics associated with change in health
• Application
– Translation of knowledge to practice

What are these 3 characteristics of?

A

epidemiology

32
Q

(blank) is measured using the case fatality ratio (= case fatality rate = CFR): the number of deaths due to a disease as a proportion of the number of people diagnosed with the disease.

A

virulence

33
Q

(blank) (IFR): The number of individuals who die of the disease among all infected individuals (symptomatic and asymptomatic).

A

infection fatality ratio
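
The two ratios differ only in the denominator. A worked comparison with purely illustrative numbers:

```latex
% Case fatality ratio vs. infection fatality ratio (numbers are illustrative only)
\mathrm{CFR} = \frac{\text{deaths}}{\text{diagnosed cases}} = \frac{50}{1000} = 5\%
\qquad
\mathrm{IFR} = \frac{\text{deaths}}{\text{all infected (symptomatic + asymptomatic)}} = \frac{50}{5000} = 1\%
```

Because many infections are never diagnosed, the IFR is typically lower than the CFR for the same disease.
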
34
Q

What is the difference between prevalence and incidence?

A

Prevalence refers to the proportion of persons who have a condition at or during a particular time period, whereas incidence refers to the proportion or rate of persons who develop a condition during a particular time period.
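
A worked contrast with purely illustrative numbers (prevalence is a proportion at or during a given time; incidence counts new cases over a period):

```latex
% Prevalence vs. incidence (numbers are illustrative only)
\text{Prevalence} = \frac{\text{existing cases}}{\text{total population}} = \frac{200}{10000} = 2\%
\qquad
\text{Incidence rate} = \frac{\text{new cases during the period}}{\text{population at risk}} = \frac{50}{9800} \approx 0.5\% \text{ per year}
```

Prevalence captures the burden of disease; incidence captures how fast new cases are appearing (as in card 31).
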
35
Q

Development of (blank) Epidemiology
• Early studies
– *Framingham Heart Study
– *London Busmen/British Civil Servants
– Tecumseh Health Study
– *Harvard Alumni Health Study
– Minnesota studies
• More recent health studies
– INTERHEART Study
– Nurses Health Survey
– Canadian Community Health Survey
– Canadian Health Measures Survey

A

exercise

36
Q

Purposes of (blank) Methods
• Quantifying the magnitude of health problems
• Identifying the factors that cause disease
• Providing quantitative guidance for the allocation of public health resources
• Monitoring the effectiveness of prevention strategies using population-wide surveillance programs

A

epidemiologic

37
Q

– (blank) study design:
• Describes relationship between basic characteristics and disease states
• Useful for developing and crudely testing hypotheses
– (blank) study design:
• The development of disease or health outcome is observed and compared among those that participate in different levels of physical activity.
– Levels of physical activity participation are self-selected by the individual and not under the control of the investigator.
– (blank) study design:
• Random assignment of physical activity levels to individuals without the disease or health outcome of interest
• These individuals are then followed for a period of time to compare their development of the disease or health outcome of interest.

Commonly Used (blank) Designs in Epidemiological Studies
• Observational study designs:
– Cross-sectional
– Case-control
– Cohort
• Experimental study design:
– Clinical trial

A

descriptive
observational
experimental
research