Semester One Final Exam Flashcards

Flashcards in Semester One Final Exam Deck (62)
1
Q

What are ways to maximize reliability

A
  • Standardize measurement methods
  • Train and test observers
  • Refine and Automate Instruments
  • Blind to reduce differential bias across groups
2
Q

What are ways to maximize validity

A
  • Random Assignment
  • Use of Control Groups
  • Blinding (can prevent compensatory reactions and imitation of treatment)
  • Selection of Homogeneous Subjects
  • Blocking (build extraneous attribute variables into design by using them as independent variables)
  • Matching (age, sex, etc.)
  • Using subjects as their own control
  • Analysis of Covariance
3
Q

What are sources of knowledge

A
  • Tradition
  • Authority
  • Trial and Error
  • Logical Reasoning
  • Scientific Method
4
Q

Define Inductive and Deductive reasoning

A
  • Inductive Reasoning - Specific to general
  • Deductive Reasoning - General to specific

5
Q

What are the four steps for identifying the research question

A
  • From a problem area, identify the research problem
  • Justification for the research problem
  • Develop the research question
  • State hypotheses
6
Q

Why should theories be Economical or “Parsimonious” as well as important

A
  • Economical (Parsimonious) – Explain the most with the fewest variables. If two variables explain 75% of the problem, but adding 5 more variables only explains an additional 3%, it would be parsimonious to stick with just the initial two.
  • Important - Passes the “so what” test; should be significant for those who will use it
7
Q

What are some ways we can ensure protection of human rights

A
  • Beneficence: the obligation to attend to the well-being of individuals
  • Justice: refers to fairness in the research process; fair selection of subjects to equally distribute the benefits and burdens
  • Use of control groups when effective therapeutic methods exist may present a problem
8
Q

What are continuous variables

A
  • Can take on any value along a continuum within a defined range
  • Infinite fractional values can occur
  • Accuracy of measurement relies upon precision of measuring tool
9
Q

What are Discrete variables

A
  • Described only in whole units
  • Example: Heart beats - not measured in half beats

10
Q

What are Dichotomous variables

A
  • When a variable can take on only two values, it is described as dichotomous
  • Example: Male/Female
11
Q

What is a Nominal Variable

A
  • Objects or people are assigned to categories according to some criterion
  • Categories coded by name, number, etc., although none of the categories has any quantitative value
  • Categories are used purely as labels
  • Example: Blood Type, Handedness, Side of Hemiplegic involvement
12
Q

What is an Ordinal Variable

A
  • Categories are rank ordered on the basis of an operationally defined characteristic
  • Data organized into adjacent categories with a greater-than/less-than relationship; one category is greater or less than another
  • May have unequal intervals
  • Example: None
13
Q

What is an Interval Variable

A
  • Rank order characteristic similar to Ordinal
  • Unlike ordinal, demonstrates equal distances or intervals
  • Example: Degrees in C or F
14
Q

What is a Ratio Variable

A
  • Interval Scale with a true zero
  • Therefore, no negative values
  • Measurement of zero represents complete absence of property being tested
  • Example: ROM, Height, Weight
15
Q

What are some issues with measuring constructs

A
  • Constructs are not easily defined and as such are difficult to measure
  • Example: Intelligence is not determined based on one measurement of reading ability or memory but is more complex
16
Q

Discuss the importance of reliability in clinical measurement

A
  • Extent to which a measurement is consistent and free from error
  • Measurements must be reliable to be valid; there is no validity without reliability
  • Reliability is fundamental to all aspects of measurement; without it we cannot have confidence in the data we collect or draw conclusions from those data
17
Q

Explain the concept of measurement error

A
  • The difference between the true value and the observed value
  • Related to the “noise” that gets in the way of our finding the true score
18
Q

Distinguish between random and systematic error

A
  • Systematic Errors: Predictable errors of measurement, consistently overestimate or underestimate
  • Random Errors: Due to chance and can affect a subject's score in an unpredictable way from trial to trial (see the sketch below)
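
A minimal illustration (not part of the original card): a Python sketch of how an observed score can be thought of as a true score plus a systematic error and a random error. All numbers here are hypothetical.

```python
import random

random.seed(1)

true_score = 50.0      # the "true" value we are trying to measure
systematic_bias = 2.0  # e.g., an instrument that consistently reads 2 units high
random_sd = 1.5        # spread of chance (random) error from trial to trial

# Simulate repeated measurements of the same subject
observations = [true_score + systematic_bias + random.gauss(0, random_sd)
                for _ in range(10)]

mean_obs = sum(observations) / len(observations)
print(f"Mean observed: {mean_obs:.2f}")            # stays near 52: the bias does not average out
print(f"Bias estimate: {mean_obs - true_score:.2f}")
```

Averaging repeated trials shrinks the random component but leaves the systematic component intact, which is why systematic error is described as predictable and consistent.
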
19
Q

Identify typical sources of measurement error

A
  • Rater - Person taking measurement
  • Instrument
  • Variability of characteristic being measured
20
Q

Describe the effect of regression toward the mean in repeated measurement

A
  • Measurement errors are random and therefore normally distributed, and will average out over repeated measurements
  • Extreme scores on a pretest will move closer to, or regress toward, the group average on retest (illustrated in the sketch below)
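
A hedged illustration (not from the deck): a small Python simulation of regression toward the mean, using made-up numbers. Subjects selected for extreme pretest scores score closer to the group average on the posttest even though nothing about them changed.

```python
import random

random.seed(2)

# Hypothetical cohort: each subject has a stable true score plus random
# measurement error on each testing occasion.
true_scores = [random.gauss(100, 10) for _ in range(1000)]
pretest  = [t + random.gauss(0, 8) for t in true_scores]
posttest = [t + random.gauss(0, 8) for t in true_scores]

# Subjects who scored in roughly the top 10% on the pretest
cutoff = sorted(pretest)[-100]
extreme = [i for i, p in enumerate(pretest) if p >= cutoff]

pre_mean  = sum(pretest[i]  for i in extreme) / len(extreme)
post_mean = sum(posttest[i] for i in extreme) / len(extreme)
print(f"Extreme group pretest mean:  {pre_mean:.1f}")   # well above 100
print(f"Extreme group posttest mean: {post_mean:.1f}")  # regresses toward 100
```
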
21
Q

Discuss how concepts of agreement and correlation relate to reliability

A
  • Correlation reflects the degree of association between two sets of data but does not tell you anything about agreement
  • Agreement reflects the extent to which two sets of scores are identical; scores can correlate highly yet disagree by a consistent offset, so reliability must consider both (see the sketch below)
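
An illustrative Python sketch (not part of the original card) using hypothetical rater scores: two raters whose scores are perfectly correlated but never agree, because one consistently reads 5 points higher.

```python
# Hypothetical scores from two raters: Rater B reads 5 points higher on every subject.
rater_a = [10, 14, 18, 22, 26, 30]
rater_b = [a + 5 for a in rater_a]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

print(pearson_r(rater_a, rater_b))                # 1.0: perfect correlation
print([b - a for a, b in zip(rater_a, rater_b)])  # constant 5-point disagreement
```
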
22
Q

Define and give an example of intra-rater reliability

A
  • Scores should match when the same examiner tests the same subjects on two or more occasions
23
Q

Define and give an example of inter-rater reliability

A
  • 2 or more examiners test the same subjects for the same characteristic using the same measure, scores should match
24
Q

Define and give an example of test retest reliability

A
  • Used to establish that an instrument is capable of measuring a variable with consistency
  • A test is administered to the same group of subjects on more than one occasion; scores should match
25
Q

Discuss how generalizability theory influences the interpretation of reliability

A
  • An individual score can be thought of as a sample from a universe of possible scores that might have been obtained under the same testing conditions
  • Single measurement becomes the best estimate of a true score under those conditions
  • Reliability then interpreted in relation to a set of specific testing conditions
  • Error is divided into components (facets)
26
Q

Relate reliability to the concept of minimal detectable difference

A
  • When we measure a difference between two scores, some portion of that change may be due to error and some portion may be real
  • Minimal detectable change (MDC) defines the amount of change in a variable that must be observed to reflect a true difference rather than measurement error (a common way to compute it is sketched below)
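
As an illustration (not from the deck), a Python sketch of the commonly used MDC calculation based on the standard error of measurement (SEM); the ICC and standard deviation values are hypothetical.

```python
import math

# Hypothetical test-retest reliability results for a clinical measure
icc = 0.90          # test-retest reliability coefficient
sd_baseline = 12.0  # standard deviation of scores in the sample

# Standard error of measurement and 95% minimal detectable change
sem = sd_baseline * math.sqrt(1 - icc)
mdc95 = 1.96 * math.sqrt(2) * sem

print(f"SEM   = {sem:.1f}")    # about 3.8 points
print(f"MDC95 = {mdc95:.1f}")  # a change must exceed about 10.5 points to be considered real
```

The higher the reliability (ICC), the smaller the SEM and MDC, which is how reliability ties directly to the minimal detectable difference.
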
27
Q

What are some ways to improve reliability of a measurement

A
  • Standardize measurement methods
  • Train and test observers
  • Refine and Automate Instruments
  • Blind to reduce differential bias across groups
28
Q

What is the research question

A
  • Asks “What does the study plan to answer?”
  • Example: Do runners using a non-rearfoot strike pattern (NRFS) have fewer injuries than runners using a rearfoot strike pattern (RFS)?
  • NOT: “Is a NRFS pattern a better way to run?” - without defining “better”, that question cannot be answered
29
Q

What is a theoretical framework or rationale and how does it form the framework for a research question

A
  • Presents a logical argument that shows how and why the question was developed
  • Will support the research question, guide decisions in designing the study, and provide basis for interpreting results
30
Q

What is an operational definition

A
  • Defines a variable according to its unique meaning within a study
  • Should be sufficiently detailed so that another researcher could replicate the procedure or condition
  • Should differentiate the various levels of the variable
31
Q

What is an independent variable

A
  • What you manipulate or specify; also called factors
  • Will predict or cause a given outcome

32
Q

What is a dependent variable

A
  • What you measure
  • A response or effect that is presumed to vary depending on the independent variable

33
Q

What are the characteristics of good research hypotheses

A
  • Must be testable and based on a sound rationale
34
Q

What are directional and non directional research hypotheses

A
  • Non-directional hypotheses describe the relationship between variables
  • “There is a difference between A and B”
  • Directional hypotheses not only describe the relationship but also assign a direction to that difference
  • “A is greater than B”
35
Q

Distinguish between primary and secondary sources of information

A
  • Primary source is a report or document provided directly by the person who authored it
  • Secondary source is a description or review of one or more studies presented by someone other than the original author
36
Q

Distinguish between populations and samples

A
  • The larger group to which results are generalized is the population
  • Through a process of sampling, a researcher chooses a subgroup of the population to use as a reference group for estimating characteristics of, or drawing conclusions about, the population; this subgroup is the sample
37
Q

Define the concepts of sampling bias and sampling error

A

  • Sampling Bias - Extent to which the sample systematically misrepresents the population; can be conscious or unconscious
  • Sampling Error - Extent to which the sample randomly misrepresents the population

38
Q

Summarize the difference between target and accessible populations

A
  • Target population: Universe of interest; Overall group of people to which the researcher intends to generalize the findings of the study
  • Accessible population: Portion of the target population that has a chance of being selected
39
Q

Describe the purpose of inclusion and exclusion criteria in sampling for research studies

A
  • Inclusion Criteria describe the primary traits of the target and accessible populations that will qualify someone as a subject
  • Exclusion Criteria indicate the factors that would preclude someone from being a subject
40
Q

Contrast different types of probability and non-probability sampling procedures in terms of generalizability

A
  • Probability sampling uses simple random selection, giving every member of a population an equal opportunity or probability of being selected
  • Non-probability samples are made by nonrandom methods, which limits the ability to generalize the outcomes beyond the specific sample studied (contrasted in the sketch below)
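
A minimal Python sketch (not from the deck) contrasting the two approaches, with a hypothetical list of patient IDs standing in for an accessible population.

```python
import random

random.seed(3)

# Hypothetical accessible population of 500 patient IDs
population = list(range(1, 501))

# Probability sampling: simple random selection - every member has an equal
# chance of being chosen, which supports generalizing back to the population.
random_sample = random.sample(population, 30)

# Non-probability (convenience) sampling: e.g., taking the first 30 patients
# who happen to be available - selection is nonrandom, limiting generalizability.
convenience_sample = population[:30]

print(random_sample[:5])
print(convenience_sample[:5])
```
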
41
Q

Discuss issues in recruiting an adequate sample size

A
  • Researchers must consider the effect of sample size on the analytic process; sometimes the inclusion and exclusion criteria must be relaxed to obtain a large enough sample, and the implications of doing so must be considered
  • In qualitative studies samples may be small, compared with quantitative studies, which require statistical comparisons
42
Q

What is PICO and what is its importance

A
  • Population
  • Intervention
  • Comparisons
  • Outcomes
  • These can be important to emphasize when searching the literature for articles relevant to your clinical question
43
Q

What Three criteria must be satisfied to constitute authorship

A
  • Substantial contribution to conception and design OR acquisition of data OR analysis and interpretation of the data
  • Drafting the manuscript OR reviewing critically for important intellectual content
  • Final approval of the version to be published
44
Q

Describe Honorary Authorship

A
  • Also known as “gift or guest” authorship
  • A listed author who has not contributed at all, or whose contribution does not meet the authorship criteria
  • Inclusion is based on the belief that the person's presence will add credibility and assist in journal publication, OR is a “gift” to seek favor, provide payback, etc.
45
Q

Describe Coercion Authorship

A
  • A form of honorary authorship not initiated by the primary author
  • A senior member of a lab or team pressures for authorship, either openly or through subtle coercion
    Either way: IT’S NOT RIGHT!
46
Q

Describe Ghost Authorship

A
  • An individual makes substantial contributions to the work but is not listed as an author

Happens when:

  • Industry does not want to be associated with the work or acknowledge the conflict of interest
  • Someone leaves the project early
  • Junior members are left out in the cold
47
Q

What are the 5 Main paper sections of the Consort Statement

A
  • Title and Abstract
  • Introduction
  • Methods
  • Results
  • Discussion
48
Q

What is the sub-category in the Introduction section of the Consort Statement

A
  • Background
49
Q

What are the sub-categories in the Methods section of the Consort Statement

A
  • Trial Design
  • Participants
  • Interventions
  • Outcomes
  • Sample Size
  • Randomization - Sequence generation
  • Randomization - Allocation concealment
  • Randomization - Implementation
  • Blinding
  • Statistical methods
50
Q

Discuss the testing effect and how it can change the reliability of a measure

A
  • When the test itself is responsible for an observed change in a measured variable, the change is considered a testing effect
  • Such effects can be manifested as systematic error creating consistent changes across all subjects
51
Q

What is measurement validity

A
  • Concerns the extent to which an instrument measures what it is intended to measure
52
Q

What is Face Validity

A
  • Indicates that an instrument appears to measure what it is supposed to and that it is a plausible method for doing so
  • There is no method of judging “how much” face validity an instrument has; it is an all-or-none type of validity
53
Q

What is Content Validity

A
  • Indicates that the items that make up an instrument adequately sample the universe of content that defines the variable being measured.
  • Most useful with questionnaires and inventories
54
Q

What is criterion related validity

A
  • Indicates that the outcomes of one instrument, the target test, can be used as a substitute measure for an established reference standard criterion test
  • Can be tested as concurrent or predictive validity
55
Q

What is Concurrent validity

A
  • Establishes validity when two measures are taken at relatively the same time
  • Most often used when the target test is considered more efficient than the gold standard and, therefore, can be used instead of the gold standard
56
Q

What is Predictive validity

A
  • Establishes that the outcome of the target test can be used to predict a future criterion score or outcome
57
Q

What is construct validity

A
  • Establishes the ability of an instrument to measure an abstract construct and the degree to which the instrument reflects the theoretical components of the construct
58
Q

What is convergent validity

A
  • A method of construct validation
  • Indicates that two measures believed to reflect the same underlying phenomenon will yield similar results or will correlate highly
59
Q

What is Discriminant validity

A
  • A method of construct validation
  • Indicates that different results, or low correlations, are expected from measures that are believed to assess different characteristics
  • Opposite of convergent: you would expect this measure NOT to correlate highly with a test that measures a different construct from the one you are trying to measure
60
Q

What are the issues interpreting change scores with Nominal, Ordinal, Interval, and Ratio variables

A

  • Nominal - Cannot be subtracted and thus cannot demonstrate change scores
  • Ordinal - Greatest risk of misinference, because the distance between intervals is not known and may not be equal
  • Interval - Presents a problem for evaluating change: although we can determine the distance of change, we may not know the true amount of change
  • Ratio - The only way to measure true change, because all measures are known quantities

61
Q

What is responsiveness of the instrument

A
  • The instrument's ability to detect minimal change over time
62
Q

Explain the concept of Minimally Clinically Important Difference

A
  • The smallest difference in a measured variable that signifies an important rather than trivial difference in the patient's condition
  • Also defined as the smallest difference a patient would perceive as beneficial and that would result in a change in the management of the patient