Study Group - Evaluation & Research Flashcards

(217 cards)

1
Q

What are nonintervention costs of programs?

A

Resource costs that are not part of delivering the intervention

2
Q

What are the parameters for addressing utilization of findings?

A
  • Study design
  • Preparation
  • Possible feedback
  • Follow through
  • Information distribution
  • Any possible further uses of information
3
Q

Attributes for Evaluation Recommendations

A
  • Defensible
  • Timely
  • Realistic
  • Targeted
  • Simple
  • Specific
4
Q

What are the cost expenditure categories in CEA?

A
  1. Developmental
  2. Production
  3. Implementation
  4. Evaluation
5
Q

Cost-Minimization Analysis (CMA)

A

Type of CEA in which program A & program B have identical outcomes

6
Q

How does cost-effectiveness analysis help decision makers?

A

Helps allocate limited resources while still achieving desired health benefits

7
Q

What are evaluation indicators created from?

A

Logic Model

8
Q

What is Accuracy (evaluation standard)?

A

Provide accurate information for determining merits of program

9
Q

Construct Validity

A

Whether a specific measure of a concept is associated with 1+ other measures in a way that is consistent with theoretically derived hypotheses

  • How accurately inferences about specific features of program reflect constructs
  • Underlying theory is correct

10
Q

Meta-Analysis

A

Quantitative technique for combining results from multiple, different evaluations on same topic

  • Could provide information as to whether findings are strong over variations of populations, settings, programs & outcomes
11
Q

What are intervention costs of programs?

A

All resources used in delivery of intervention

12
Q

Translational Research

A

Studying & understanding progression of “bench-to-bedside-to-population”

  • How scientific discoveries lead to efficacy & effectiveness studies, which lead to dissemination into practice
13
Q

_____________ & _____________ aid in program cost effectiveness (in addition to types of analyses).

A
  1. Cost effectiveness ratio
  2. Value threshold
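A common way these two pieces fit together is the incremental cost-effectiveness ratio compared against a value threshold; the formula below is a standard formulation, not taken from this card deck:

\[
\text{CER} = \frac{C_A - C_B}{E_A - E_B}
\]

where \(C\) is program cost and \(E\) is the health effect (e.g., cases prevented). A program is judged cost effective when this ratio falls at or below the value threshold decision makers are willing to pay per unit of effect.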
14
Q

Types of Evidence (most to least rigorous)

A
  1. Systematic reviews & meta-analysis
  2. Scientific literature
  3. Public health surveillance data
  4. Program evaluations
  5. Reports from community members & other stakeholders (e.g. needs assessment)
15
Q

What does Implementation Science identify?

A

Identifies factors, processes, & methods that increase the likelihood that evidence-based interventions are adopted & used to sustain improvement in population health

16
Q

What is external use?

A

Benefits decision makers & administrators not connected with the program, e.g., by considering the program in a different setting or deciding how to change a similar program that is not performing well

17
Q

How data is managed is dependent on what?

A
  • Type of data
  • How data is collected
  • How data is used throughout project lifecycle
18
Q

Ordinal measurement & give an example

A

Provides information based on order, sequence, or rank

  • scale from strongly disagree to strongly agree
19
Q

Impact Evaluation

A
  • Focuses on ultimate goal, product, or policy
  • Often measured in terms of HEALTH STATUS, MORBIDITY, & MORTALITY
20
Q

How long is data stored according to security rule?

A

5-10 years

21
Q

What is the difference between qualitative & quantitative data?

A

Qualitative - describes what is occurring or why it is occurring (non-numerically)

Quantitative - Numerical data that describes what is happening

22
Q

What does data screening (found in data analysis plan) allow/tell evaluator/researcher?

A
  • Assesses accuracy of data entry
  • How outliers & missing values will be handled
  • If statistical assumptions are met
23
Q

In what processes can quantitative & qualitative data be useful?

A
  • Program planning
  • Implementation
  • Evaluation
24
Q

Guidelines for Developing Recommendations

A
  1. Invest time
  2. Start early
  3. Consider all issues as fair game
  4. Cast a wide net
  5. Work closely with decision makers & program staff
  6. Decide whether recommendations should be general or specific
  7. Consider program context
  8. Consider program closure
  9. Describe expected benefits/costs
  10. Decide whether change should be incremental vs fundamental
  11. Avoid recommending another evaluation
25
What is feasibility (evaluation standard)?
Conduct evaluations that are VIABLE & REASONABLE
26
What are delimitations?
Parameters or boundaries placed on study by researchers that help manage scope of study
27
What is a purpose statement?
- Tool to identify what is to be learned from evaluation and/or research - Serves to focus & steer collection & analysis of data
28
Longitudinal Design
data about program collected at 2+ POINTS IN TIME
29
Hawthorne Effect
If people in intervention group become sensitive to repeated introduction/removal of intervention * Threat to Internal Validity
30
Implementation Evaluation
Comprehensive, retrospective determination of extent to which program was delivered as designed & whether variations may have held significant implications for effects of program * AKA Process evaluation *
31
What is Instrumental Use?
Decision makers change program (expand to other sites, terminate, change how it is implemented) based on answers to evaluation questions
32
Steps of Effective Evaluation
  1. Defining research population
  2. Identifying stakeholders & collaborators
  3. Defining evaluation objective
  4. Selecting research design that meets evaluation objective
  5. Selecting variables for measurement
  6. Selecting sampling procedure
  7. Implementing research plan
  8. Analyzing data
  9. Communicating findings
33
Procedural Equity
Maximizing fairness in distribution of services across groups
34
Descriptive Design vs Explanatory Design
Descriptive: DESCRIBES events, activities, or behavior that occurred (what went on in program)
Explanatory: EXPLAINS events, activities, or behavior that occurred (improve understanding)
35
What is maturation?
before-after changes due to changes occurring inside people rather than program
36
Substantive Equity
Minimizing disparities in distribution of health across groups or different populations
37
What is a type V error?
- Reporting intervention has statistically significant effect but effect is too small
- Of no practical significance to decision makers
38
What are problematic outliers?
Outliers not representative of population
39
Bivariable Analysis
Determines whether variables in database are correlated with each other
- Compares 2+ groups to see whether a characteristic is similar/different
- Finds out whether program outcomes are significantly different between 2 groups OR 1 group over time (impact evaluation)
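As a hedged illustration of the two-group comparison described above (made-up data and variable names, not from the deck), a simple independent-samples t-test in Python:

```python
# Compare a program outcome between an intervention and a comparison group.
# Scores are invented for illustration only.
from scipy import stats

intervention = [12, 15, 14, 16, 13, 17, 15]   # e.g., post-program knowledge scores
comparison = [10, 11, 12, 10, 13, 11, 12]

t_stat, p_value = stats.ttest_ind(intervention, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 suggests a significant group difference
```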
40
Ratio measurement & give an example
Common unit of measurement between each score & has a true zero - height, weight, age, etc.
41
What are the different types of evaluation designs?
  1. One group posttest only
  2. One group pre- & posttest
  3. Comparison group posttest only
  4. Two group pre- & posttest
  5. One group time series
  6. Multi-group time series
  7. Two group retrospective (case control)
  8. Two group prospective (cohort)
  9. Two group pre- & posttest with random assignment (RCT)
42
Test-Retest
Same measurement administered at 2 points in time
43
What should be tested/assessed for when considering using existing data collection instruments? Why?
Literacy/reading level (whether using or adapting) to ensure validity of responses
44
What are intangible benefits?
Non-monetary, subjective, or difficult to measure gains attributable to program intervention
45
Cost-Effectiveness Analysis (CEA)
Determines differences between 2 programs based on what it costs for delivery of the programs - Relationship b/w program cost (input) & impact (output)
46
Internal Reliability/Consistency
Consistency across the multiple/all items an instrument is meant to measure
47
What analyses provide information on program cost effectiveness?
1. Cost-benefit Analysis (CBA) 2. Cost-minimization Analysis (CMA) 3. Cost-utility Analysis (CUA) 4. Sensitivity Analysis
48
Evaluations should be __________________
Useful, feasible, ethical, accurate, & accountable
49
History (threat to internal validity)
before-after changes are due to other factors in the environment rather than the program
50
IRB
Group of individuals that review potential research proposals that involve human subjects/participants * Approval must be granted prior to beginning data collection* Institutional Review Board
51
Formative Evaluation
Conducted before program begins - designed to produce data & information used to improve program during developmental phase - Documents appropriateness & feasibility of program implementation - ensure fidelity of program
52
Causal Inference
Intellectual discipline that considers assumptions, study designs, & estimation strategies - Allows researchers to draw conclusions based on data
53
What data should be demonstrated/included when using it for policy analysis?
- Burden of health of public
- Priority over other issues
- Pertinence at local level
- Interventional benefits
- Personalization of issue by using stories about how lives are impacted
- Estimated intervention costs
54
What are the characteristics of indicators to ensure credibility?
  1. Clearly linked to intervention outcome
  2. Presented in specific, measurable terms
  3. Appropriate for population being served
  4. Feasible given data collection, resources, & skills
  5. Valid & reliable to stakeholders
55
Data collection must be _____________ by decision makers & stakeholders
Relevant
56
What are the research/evaluation errors HES should be able to identify?
1. Sampling errors 2. Lack of precision 3. Variability of measurement 4. Selection bias 5. Instrumental bias 6. Internal threats to validity
57
Embedded Design
Either qualitative or quantitative has priority or is more vital for answering main question of evaluation
58
What is a type III error?
Rejecting program as ineffective when program was never implemented as intended or technology flaws undermined program effectiveness
59
Summative Evaluation
Evaluation occurs after program has ended - designed to produce data on program's efficacy or effectiveness during implementation - Provides data on extent of achievement of goals regarding learning experience
60
What should evaluator consider when choosing evaluation design?
  1. Causality
  2. Bias
  3. Retrospective vs prospective
  4. Time span
  5. Finances
  6. Current political climate
  7. # of participants
  8. Type of data being collected
  9. Data analysis & skills
  10. Access to group to use for comparative purposes
  11. Possibility to distinguish b/w exposed & unexposed to program intervention
  12. Type of outcome being evaluated (unbound vs bound)
61
Considerations for data collection implementation
  1. Find reliable, trustworthy, & skilled people to collect, enter, analyze, & manage data
  2. Define roles, responsibilities, & skills needed
  3. Monitor data collection
  4. Maintain integrity of data collected
  5. Ensure protocols address quality control measures
62
Dissemination
spreading information widely - new publications take 17 years to be widely implemented
63
Interval measurement & give an example
Common unit of measurement with no true zero - Temperature
64
What are the advantages to using existing data collection instruments?
- Previously tested for reliability & validity - Direct comparison measures - Reduced cost - User familiarity
65
Interrater Reliability
Correlation between different observers at same point in time
66
SWOT Analysis
Assesses internal & external environment
67
Convergent Design
basic steps in evaluation process implemented independently at same time
68
What is missing data?
Observations that were intended to be made but were not
69
What does HIPAA protect?
All information in health records, billing, & conversations among individuals & healthcare providers
70
Continuous Quality Improvement (CQI)
Tool to reduce costs while improving quality of services - enhances organizational effectiveness
71
What are direct costs of programs?
All goods, services, & other resources used to deliver intervention
72
Internal Validity
Degree program caused change that was measured - Were changes in participants due to program or by chance?
73
Population Effectiveness
Improving health of populations & communities through medical and/or non-medical services
74
What are baseline indicators?
Value of indicator prior to implementation
75
Sequential Design
Basic steps in evaluation process implemented sequentially (either qualitative or quantitative first)
76
Why would HES make modifications to content, format, or presentation of question, questionnaire, or instrument?
- Adapting to data needs - To have results that are more versatile & useful
77
What does most appropriate data collection instrument depend on?
- Intent of program - Intent of evaluation - Information being acquired
78
What can cause lesser approach (compared to most rigorous available) to be used in either research or evaluation?
1. Ethics 2. Cost 3. Politics 4. Availability of resources
79
What is a type IV error?
- Evaluation is conducted for sake of evaluation - Questions are asked about program that no one cares about - Answers are of no interest to decision makers
80
Steps for Conducting CEA
  1. Define problem & objectives
  2. Identify alternatives
  3. Describe production relationships
  4. Define perspective/viewpoint of CEA
  5. Identify, measure, & value cost
  6. Identify & measure effectiveness
  7. Discount future costs & effectiveness
  8. Conduct sensitivity analysis
  9. Address equity issues
  10. Use CEA results in decision making
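A minimal sketch of steps 5-8 with invented numbers (illustrative only, assuming the cost-effectiveness ratio form shown earlier in the deck):

```python
# Value costs & effectiveness, compute the incremental cost-effectiveness
# ratio (ICER), then vary one assumption as a simple one-way sensitivity analysis.

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost per additional unit of effect (e.g., case prevented)."""
    return (cost_a - cost_b) / (effect_a - effect_b)

base = icer(cost_a=50_000, effect_a=120, cost_b=30_000, effect_b=80)
print(f"Base-case ICER: ${base:,.0f} per case prevented")

# Sensitivity analysis: vary program A's effectiveness by +/- 20%
for effect_a in (96, 120, 144):
    print(effect_a, "cases ->", round(icer(50_000, effect_a, 30_000, 80)))
```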
81
What are target indicators?
Expected value of indicator at a specific point in time
82
Random Selection
Random identification from intended population of those who will be in program and/or evaluation
83
Ethical Standards of Participant Data
1. Respect for autonomy 2. Social justice 3. Promotion of good & avoidance of harm 4. Have evaluation/research plan that protects privacy of participants 5. Participant data must be stored, utilized, & disclosed ensuring protection of participant privacy
84
Types of Program costs
- Direct costs - Intervention costs - Indirect costs - Nonintervention costs - Cost savings vs future costs as result of program/implementation
85
Multi-Phase Design
Evaluations divided into multiple parts/phases that are implemented over time
86
What do performance measures require?
  1. Object
  2. Standard - accepted level of performance expected
  3. Indicator - determines whether performance standard is achieved
  4. Measure - quantitative representation of capacity, process, or outcome
87
What is a type II error?
Inferring program has no impact when it does (occurs when sample size is too small)
88
External Validity
Generalizability of results beyond participants - Would results be the same with different target population?
89
Social Desirability Effect
Bias that occurs when people answer questions in a way they think will make them seem favorable to others * Threat to internal validity
90
What is Conceptual Use?
- Evaluations produce new information about what goes on in the program through answers to questions raised about a program - Reveals insights about program (what they think of the program, understanding the importance of program) in addressing underlying problem
91
Systems-Analysis Evaluation Model
Uses instruments that serve to quantify program's effects
92
Standards/Steps of Evaluation
1. Engage stakeholders 2. Describe program 3. Focus evaluation design 4. Gather credible evidence 5. Justify conclusions 6. Ensure use & share lessons
93
CBPR
Research in which evaluators collaborate with community members - Improves likelihood of success & stronger impact with target population
94
Data Management Plan
1. Procedures for transferring data from instruments to data analysis software 2. Scoring guide to tell researcher/evaluation team how to code variables
95
Security Rule
- Establishes rules for safeguarding information - Requires IRB - Guidance provided by Code of Federal Regulations
96
What measurements are used in descriptive data?
Frequency, mean, median, mode
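A quick sketch of computing these descriptive measures for a small, invented set of survey scores (illustrative only):

```python
from collections import Counter
import statistics

scores = [3, 4, 4, 5, 2, 4, 3, 5, 4]

print("frequency:", Counter(scores))           # count of each value
print("mean:", statistics.mean(scores))
print("median:", statistics.median(scores))
print("mode:", statistics.mode(scores))
```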
97
Response Bias
Intentional or unconscious way individuals select responses
98
Advantages of Meta-Analysis
  1. Ability to tell if results are more varied than expected
  2. Derived statistical testing of overall factors/effect size in related studies
  3. Potential generalization to population of studies
  4. Ability to control & use moderators to explain variations between studies
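For reference, the overall effect size mentioned in item 2 is often derived with a fixed-effect, inverse-variance weighted average (standard notation, not taken from the deck):

\[
\bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i = \frac{1}{SE_i^2}
\]

where \(d_i\) is the effect size from study \(i\) and \(w_i\) its weight (more precise studies count more).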
99
Measurement vs Classification
Measurement - process of sorting & assigning #s to people in quantitative evaluations Classification - assigning people into set of categories in qualitative evaluations
100
What aspects are included in an effective evaluation report?
- Timely provision - Effective & detailed summary of how stakeholders were involved - List of strengths & limitations/weaknesses of findings
101
Evidence-based approach in findings & scientific evidence need to be incorporated into what areas of health programming?
- Decision making - Policy development - Program implementation
102
What is utility (evaluation standard)?
Ensure information needs of intended users are satisfied
103
Unbounded vs Bounded Outcomes
Unbounded - possibility of existing before/after program
Bounded - outcomes can only occur once, at a particular time or by a specific event
104
What are limitations?
Boundaries placed on study by factors or people other than researcher
105
Allocative efficiency
Combining inputs to produce maximum health improvements given available resources
106
Measurement tools must be ____________
Valid & Reliable
107
Types of Designs (most to least rigorous)
1. Systematic reviews 2. RCT 3. Cohort 4. Case-control 5. Case series, case reports 6. Editorials, expert opinion
108
Reliability
Whether results can be measured consistently (can be reproduced under similar circumstances)
109
Mediation Analysis
Identification of pathway between health promotion program, its impact on hypothesized psychosocial mediators, & its effects on behavioral outcomes
110
Informal Interviewing
Open-ended conversation with goal of understanding program from respondent's perspective - Continues until no new information is gathered & there is full understanding of the program
111
Types of Program Benefits
- Tangible benefits - Intangible benefits - Economic - Personal health - Social
112
Discriminant Validity
Measures of different concepts should NOT be highly correlated with each other * type of construct validity
113
What is a sensitivity analysis?
systematic approach for determining whether CEA yields same results if different assumptions are made
114
What is construct confounding in regard to construct validity?
Failure to define all constructs may result in incomplete construct inferences or confusion among constructs
115
What are evaluation questions designed to do?
1. Designate boundaries for evaluation 2. Determine what areas of program are the focus
116
What is attrition?
differences between program and another group due to loss of people from either or both groups rather than the program
117
Limitations in Comparing Evaluation Results
1. Examine & analyze data to look for patterns, recurring themes, similarities/differences 2. Address patterns or lack of patterns that justify/don't justify answers to evaluation questions 3. Possible reasons for deviations in established patterns 4. Study how patterns are supported/negated by previous studies or evaluations
118
Process Evaluation
- Any combination of measures that occurs as program is implemented - Ensures or improves quality of performance or delivery - Assesses how much intervention was provided (dose), to whom, when, & by whom
119
Cost savings vs Future costs of programs?
Cost Savings - savings that occur from prevention or alleviation of disease
Future costs - costs of disease unrelated to intervention
120
Placebo Effect
Individual's health improves after taking fake treatment - In control group and they think they are receiving intervention * Threat to Internal Validity
121
What are different types of evaluation indicators?
1. Baseline 2. Target
122
Ways to Measure Reliability
1. Test-Retest 2. Internal Reliability/Consistency 3. Split-Half Method
123
Decision-Making Evaluation Model
- Uses instruments that focus on elements that yield context, input, processes, & products to use when making decisions - Evaluates criteria that are used for making administrative decisions in the program
124
Split-Half Method
- 2 parallel forms administered at same point in time
- Correlation calculated b/w them
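The half-test correlation is usually stepped up to estimate full-test reliability using the Spearman-Brown formula (a standard companion to the split-half method, not stated on the card):

\[
r_{\text{full}} = \frac{2\, r_{\text{half}}}{1 + r_{\text{half}}}
\]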
125
Nonresponse Bias
- Lack of responses - Failure of providing data * May be due to attrition
126
Efficiency
How well program and/or intervention can produce positive results; fewer inputs + higher outputs = MORE EFFICIENT
127
How is value threshold used?
Determining & allocating resources to intervention rather than another program
128
Cost-Utility Analysis (CUA)
Type of CEA in which outcomes of program A & program B are weighted by their value/quality
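CUA results are conventionally summarized as a cost per quality-adjusted life year (QALY); a sketch of the ratio under that standard convention:

\[
\text{CUA ratio} = \frac{C_A - C_B}{\text{QALY}_A - \text{QALY}_B}
\]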
129
When are evaluation findings justified?
When they are linked to evidence gathered & judged against agreed upon values or standards set by stakeholders
130
Threats to Internal Validity
  1. Ambiguous temporal precedence
  2. History
  3. Maturation
  4. Testing
  5. Instrumentation
  6. Regression artifacts
  7. Selection
  8. Attrition
  9. Expectancy threat
  10. Hawthorne effect
  11. Social desirability
  12. Placebo effect
131
What are performance measures?
Indicators of process, output, or outcomes that have been developed for use as standardized indicators by health programs, initiatives, practitioners or organizations
132
What is inadequate explanation of constructs in regard to construct validity?
Failure to adequately explicate construct may lead to incorrect inferences
133
What is the goal of longitudinal designs?
Track changes in factors over time
134
What is goal of CMA?
Determine which program has lower cost
135
Implementation Documentation
Collecting data specified in process objectives carried out to demonstrate extent of program implementation to FUNDERS
136
What is a disadvantage to using existing data collection tools?
Potential for unreliable measures with different population demographics & situations
137
What is a value threshold?
Benchmark for designating whether service is cost effective
138
Ambiguous Temporal Precedence
Lack of clarity about whether the treatment occurred before the outcome (i.e., which variable is the cause & which is the effect)
139
What does implementation assessment provide?
- Managerial guidance & oversight - Informs decision making to which aspects of organizational or service utilization plan are ineffective in accomplishing process objectives
140
Content Validity
Assesses whether test is representative of all aspects of construct
141
Triangulation
Examines changes or lessons learned from different points of view or in different ways
142
What is vote counting?
Defines findings as significantly positive/negative OR nonsignificant
143
What is process use?
Engagement of designing & conducting evaluation that may lead to better understanding & new ways of thinking about the program
144
Intrarater Reliability
Correlation between observations made by same observer at different points in time
145
Cost-Benefit Analysis (CBA)
Method of economic evaluation in which all benefits & costs of program are measured
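Two summary measures commonly reported from a CBA, with benefits \(B\) and costs \(C\) both expressed in monetary units (standard formulations, not taken from the card):

\[
\text{Net benefit} = B - C, \qquad \text{Benefit-cost ratio} = \frac{B}{C}
\]

A net benefit above zero (or a ratio above 1) suggests the program returns more than it costs.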
146
Cross Sectional Design
Data about program, events, activities, behaviors, & other factors collected at ONE POINT IN TIME
147
Goal-Free Evaluation Model
Instruments provide all outcomes (including unintentional positive/negative outcomes)
148
What needs to be considered when proposing possible explanations of findings?
- Standards
- Analysis & synthesis
- Interpretation
- Judgements
- Recommendations
149
Nonrandom Error
Measure is systematically higher or lower than true score
150
What should HES consider when using existing data collection instruments?
- If item is appropriate for intended purpose
- If language is appropriate for population
- Whether test has been performed using sample from intended audience
151
Outcome Evaluation
- Short term, immediate, & observable effects of program leading to desired outcomes - What changed about public health problem?
152
Implementation Assessment
Ongoing, nearly real-time activity of collecting data for purpose of making timely corrections or modifications to implementation through changes to elements of process theory *AKA Program or Process Monitoring **
153
What is instrumentation in regard to threat to internal validity?
before-after changes due to changes in the instrument or those administering instrument rather than program
154
What is descriptive data used for?
To decrease large quantity of data into few elemental measurements that entirely describe data distribution
155
What are evaluation indicators of program?
Information or statistics that provide evidence of progress toward outcomes
156
What is mono-operation bias?
Inferences are complicated when the definition of a construct both underrepresents the construct of interest & measures irrelevant constructs
157
Attainment Evaluation Model
Uses evaluation standards & instruments focused upon elements that yield objectives & goals of program
158
What should HES/researcher consider when choosing method for data collection?
- Specifically target most important elements of study - Clearly prove or disprove hypothesis - Appropriate to scale of study - Do not cost too much or require too much time
159
Clinical Effectiveness
Improving health of individual patients through medical care services
160
What is evaluation used in needs assessment?
- Evaluating primary, secondary data, observations, & interviews - Evaluating literature
161
What are 2 approaches to meta-analysis?
Vote-Counting & Classic (or Glassian) meta-analysis
162
Quality Assurance
Using minimum acceptable requirements for processes & standards for outputs
163
What are tangible benefits?
Benefits that are quantifiable & measurable
164
Measurement Reliability
Absence of random error such that the same measure gives the same results on repeated applications
165
what is selection in regard to internal validity?
Difference between program & another group due to differences in people in the groups rather than the program
166
Multiple Method vs Mixed Method Designs
Multiple Method: combining qualitative & quantitative DATA to answer evaluation questions
Mixed Method: combining qualitative & quantitative METHODS to answer evaluation questions
167
How is evaluation used in program implementation?
Evaluating progress of program based on health indicators
168
What is regression artifacts in regard to internal validity?
If subjects are selected on basis of their extreme score, before-after changes may be affected partly by extreme scores naturally shifting toward mean
169
Statistical Significance
Likelihood one would get the result by chance (α = 0.05 usually used)
170
What does descriptive data describe?
Data that answers a question
171
Random Assignment
Process of determining on random basis who does & does not receive health program/intervention
172
What is testing in regard to threat to internal validity?
Before-after changes due to giving pretest rather than program
173
CDC evaluation standards
Utility, Feasibility, Propriety, Accuracy
174
Why is process evaluation important?
1. Understanding internal & external forces that can impact activities of program 2. Maintain and/or improve quality & standards of program performance and delivery 3. May serve as documentation of provisions & success of those provisions of program
175
What are things to monitor/evaluate to ensure efficiency & effectiveness?
1. Simplicity 2. Flexibility 3. Acceptability 4. Sensitivity (proportion of disease) 5. Predictive value positive 6. Representativeness 7. Timeliness 8. Stability
176
What can correlates (relationships b/w variables where one is affected/dependent on another) be derived from?
- Reach & effectiveness - Size of effect
177
Concurrent Validity
Assesses degree measure correlates with an already validated measure * Type of Criterion Validity - constructs may be same or different - Related constructs
178
what can Process Use provide?
Collaboration of different perspectives among interest groups
179
What is Classic or Glassian Meta-Analysis?
- Defines questions to be examined - Collects studies - Codes study features & outcomes - Analyzes relations b/w study features & outcomes
180
What is propriety (evaluation standard)?
Behave legally, ethically, & with regard for welfare of participants of program and those affected by program
181
Production Efficiency
Combining inputs to produce services at lowest cost
182
What is Persuasive Use?
Evaluation results used to support or criticize program
183
What should HES do when only using part of data collection instrument to maintain validity?
- Aspects of questions should be retained - Give credit for using item/collection tool
184
What does SMOG stand for?
Simple Measure of Gobbledygook
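For reference, McLaughlin's commonly cited SMOG grade formula (verify against the version used in your course materials):

\[
\text{SMOG grade} = 3.1291 + 1.0430 \sqrt{\text{polysyllabic words} \times \frac{30}{\text{sentences}}}
\]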
185
What are the field procedures for collecting data?
- Protocols for scheduling initial contacts with respondents - Introducing instrument to respondent - Keeping track of individuals contacted - Follow up with non-respondents (when appropriate)
186
What should performance measures be aligned with?
Objectives
187
What are indirect costs of programs?
Lost or impaired ability to work or engage in leisure activities as a direct result of intervention
188
Divergent Validity
Measure of a construct does not correlate with other measures that it should not be related to
189
HIPAA
Federal regulations for protection of privacy of participant data
190
Multivariable Analysis
Estimates size & direction of program's effect in randomized & non-randomized study designs with treatment and control group
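A generic form of the model this card describes, with illustrative notation (a treatment indicator plus one covariate):

\[
Y_i = \beta_0 + \beta_1 \text{Treatment}_i + \beta_2 X_i + \varepsilon_i
\]

where \(\beta_1\) estimates the size & direction of the program's effect after adjusting for the covariate \(X\).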
191
What are beneficial outliers?
Outliers that are representative of population
192
Effectiveness
Degree of how successful program is in producing desired result
193
What are multivariate outliers?
Unusual combinations of scores on different variables
194
Clinical Significance
Likelihood the intervention has a noticeable benefit to participants
195
Validity
Accuracy of measurement (do results represent what should be measured)
196
Data Analysis Plan
- How data will be scored & coded - How missing data will be managed - How outliers will be handled - Data screening
197
Convergent Validity
Measure of same concept correlated to each other * type of construct validity
198
What is mono-method bias?
When construct is measured using same method and method is part of the construct itself
199
How does effective data management help evaluator/researcher?
- Organization of data - Ability to access data - Analysis of data - Ensures quality of research - Supports published results
200
How can evaluators have less bias in their data collection?
use evaluation questions that allow for more than 1 answer
201
Evaluation Plan Framework
1. Organize evaluation process 2. Procedures for managing & monitoring evaluation 3. Identify what to evaluate 4. Formulate questions to be answered 5. Timeframe for evaluation 6. Plan for evaluating implementation objectives (process) 7. Plan for evaluating impact objectives 8. Targeted outcomes (outcome objectives)
202
What should be used to guide data analysis?
- Research/evaluation questions - Level of measurement of data - is it for research? evaluation?
203
Rigor
Confidence findings/results of evaluation are true representation of what occurred as result of program
204
What types of systematic errors can occur with findings from evaluation/research?
1. Sampling 2. Design 3. Implementation 4. Analysis
205
What specific readability tools are there to help assess reading level?
SMOG & Flesch-Kincaid
206
Why must evidence be interpreted in regard to health programs?
Allows for determination of significance & drawing relevant inferences to plan future programs/interventions
207
What is included in evidence-based practice approach?
- Combination of best available scientific evidence & data - Program planning frameworks - Community is engaged - Programmatic evaluation - Disseminated results
208
Nominal/Dichotomous measurement & give an example
Cannot be ordered hierarchically but are mutually exclusive - Male/Female - Yes/No
209
Predictive Validity
Assesses degree measure predicts criterion measure assessed at later time
210
Threats to Construct Validity
  1. Inadequate explanation of constructs
  2. Construct confounding
  3. Mono-operation bias
  4. Mono-method bias
  5. Confounding constructs with levels of constructs
211
Efficacy
Maximum potential effect under ideal circumstances
212
Expectancy Effect
Occurs when researcher's expectations influence results
213
Why is conducting meta-analysis important when synthesizing data?
Combination of results to answer research hypotheses
214
What is the goal of CUA?
Determine which program produces the most value at the lower cost
215
What are confounding variables?
Extraneous variables or factors outside scope of intervention that can impact results
216
Face Validity
Common acceptance or belief that measure actually measures what it is supposed to measure - Expert decides if scale "appears" to measure construct
217
Criterion Validity
Measure correlates with outcome