Chapter 16: Evaluation Research Flashcards

(50 cards)

1
Q

Needs Assessment

A
  1. social indicators
  2. perceptions of needed community services
  3. policy maker’s intuition
2
Q

Solution development

A
  1. identifying the client population
  2. the content of the program
3
Q

Designing an effective change program

A
  1. the CAUSES of the phenomenon to be changed
  2. the CONTEXT
    - characteristics of the population
    - prevalence and severity of the problem
    - change over time
  3. COMPLEXITY of the problem
  4. DESIGN the intervention
    - number of interventions
    - how actively the participants need to engage with the program materials or the program staff
4
Q

Evaluability Assessment

A
  1. goal
  2. consequence
  3. impact
  4. theory
  5. implementation guideline
  6. resource
5
Q

Goal Specification

A
  1. Consequences
    - knowledge
    - attitudes
    - behaviors
  2. Impact
    - timing of effects
    - magnitude of effects
    - durability of effects
6
Q

Proximal outcomes

A
  • direct effects
  • occur while the clients are taking part in the program
7
Q

Distal outcomes

A
  • indirect effects
  • after the client has completed the program
  • in environments not controlled by the program
  • at a higher level of analysis than that addressed by the program
8
Q

Social impact outcomes

A
  • take place in the client’s broader social environment outside the program
  • reflected in social indicators
9
Q

Common stakeholders

A
  • policy makers
  • sponsors
  • designers
  • administrators
  • staff
  • clients
  • opinion leaders
10
Q

Program Theory

A

specifies the kinds of treatments clients should receive
- expected outcome
- moderating variables
- mediating variables

11
Q

policy variables

A

aspects of the situation that program administrators can manipulate

12
Q

estimator variables

A

are outside administrators’ control but still affect program outcomes
- psychological states
- proximal outcomes

13
Q

advantages of including the program theory in the evaluation

A
  1. test of the VALIDITY of the theory
  2. determine which COMPONENTS are necessary to accomplish the goals
  3. specify the CONDITIONS necessary for the program
  4. suggest REASONS why a program is not effective
14
Q

Program monitoring

A

process evaluation
- continuing assessment of how well the program is being implemented while it is being carried out

15
Q

Formative evaluation

A

monitor the process or development of a program

16
Q

summative evaluation

A

assess the overall effectiveness of a program

17
Q

How can the program reach its target population?

A
  1. include an advertising component
  2. guard against bias in accepting people into the program
18
Q

Assessment of program implementation

A

compare what is actually happening in the program to what is supposed to happen

19
Q

Sources of Implementation Failure

A
  1. lack of specific CRITERIA and procedures for program implementation
  2. insufficiently TRAINED staff
  3. inadequate SUPERVISION of staff
  4. staff who do not BELIEVE in the program’s effectiveness
20
Q

Client resistance can be reduced by…

A
  1. including all stakeholder groups in the design of the program to ensure that all viewpoints are considered
  2. explaining any unchangeable aspects of the program that caused concern for members of the focus groups
21
Q

Criteria for Evaluating Impact

A
  1. DEGREE of Change
    - relative to each of the goals and desired consequences of the program
  2. IMPORTANCE of the Change
    - percentage of clients who meet the program’s goals
    - number of goals achieved
    - durability of the outcomes
  3. COSTS of the Program
  4. ACCEPTABILITY of the Program
22
Q

Operational definition of improvement

A
  1. the outcome score falls outside the range of scores of an untreated control population
  2. the score falls closer to the mean of the population not in need of treatment than to the mean of the population in need of treatment
  3. the score falls within the range of scores of a population not in need of treatment
    - no more than 2 SD below the mean of the criterion group
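The three criteria above can be sketched as simple score checks. A minimal Python sketch, assuming higher scores indicate better functioning; the "clinical"/"functional" labels and the direction of the criterion-1 cutoff are assumptions for illustration, not from the card:

```python
def improved(score, clinical_mean, clinical_sd, functional_mean, functional_sd):
    """Apply the three operational definitions of improvement.

    'clinical'   = the population in need of treatment (untreated controls)
    'functional' = the population not in need of treatment
    Assumes higher scores indicate better functioning.
    """
    # 1. Score falls outside the range of the untreated population,
    #    approximated here as more than 2 SD above its mean.
    outside_clinical = score > clinical_mean + 2 * clinical_sd

    # 2. Score falls closer to the functional population's mean than
    #    to the clinical population's mean.
    closer_to_functional = abs(score - functional_mean) < abs(score - clinical_mean)

    # 3. Score falls within the functional range: no more than 2 SD
    #    below the criterion group's mean.
    within_functional = score >= functional_mean - 2 * functional_sd

    return {"outside_clinical": outside_clinical,
            "closer_to_functional": closer_to_functional,
            "within_functional": within_functional}
```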
23
Q

Costs of the Program

A
  1. cost of administering the program
    - method of providing treatment
    - professional qualifications of staff
  2. costs to clients
    - psychological or social costs
  3. costs to staff
24
Q

Answering the Research Questions

A
  1. Is the program EFFECTIVE?
  2. Which ASPECTS of the program are important?
  3. How can the program be IMPROVED?
  4. Are the conclusions VALID?
25
Q

program package design

A

the program has several aspects or components that could be investigated separately

26
Q

comparative outcome design

A

compares the effectiveness of two different programs

27
Q

dismantling design

A

tests the necessity of including a component in a program

28
Q

client and program variation design

A
  • whether a program is equally effective for all client groups
  • whether it is equally effective if implemented in different ways

29
Q

constructive design

A

a component is added to a successful program to determine if the addition will improve goal attainment

30
Q

parametric design

A

varies the degree to which clients experience a component of the program

31
Q

True Experiments

A

determine whether the program caused any changes observed among the program’s clients
- require random assignment

32
Q

Quasi-Experiments

A
  • nonequivalent control group design

33
Q

patched-up quasi-experiments

A

conducted after a program has been instituted rather than being built into the program
- control groups must be formed after the fact

34
Q

Threats to Internal Validity in Evaluation Research

A
  1. treatment DIFFUSION
  2. staff may COMPENSATE the control group
  3. the control group might feel RIVALRY with the treatment group
  4. RESENTFUL demoralization
  5. local HISTORY

35
Q

treatment diffusion

A

members of the control group learn about the treatment
- prevented when the treatment and control groups are geographically separated and unlikely to communicate

36
Q

Pre-Experimental Designs

A

a pretest–posttest design that does not include a control group
- used when it is impossible to come up with a no-treatment control group
- useful as a pilot study to determine if a more costly full evaluation is necessary

37
Q

Meta-Analysis

A
  • the average of a set of scores is a more accurate indicator of a true score
  • can test possible moderators of a program’s effectiveness
  • identifies the conditions under which a program is more or less effective
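The averaging step the card describes is usually an inverse-variance weighted mean rather than a simple mean: each study's effect size is weighted by its precision. A minimal sketch of that pooling step, assuming a fixed-effect model (the model choice and function name are assumptions, not from the card):

```python
import math

def weighted_mean_effect(effects, variances):
    """Inverse-variance weighted average effect size (fixed-effect model).

    Each study's effect is weighted by 1/variance, so more precise
    studies count more toward the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    return pooled, se
```

A moderator test would then compare pooled effects computed separately for subgroups of studies (e.g., by client population or implementation method).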
38
Q

Sources of Null Results

A
  1. program failure, a true null result
  2. implementation failure
  3. evaluation failure
  4. low statistical power
  5. using measures of unknown psychometric properties

39
Q

When “Null” Results Are Not Null

A

a program no more effective than its alternative may still be preferred if it:
- costs less
- works more quickly
- is more acceptable to clients

40
Q

Cost–benefit analysis

A

compares the dollar cost of operating a program to the benefits that occur when objectives are achieved
- requires that all outcomes can be expressed in monetary terms

41
Q

nudge programs

A
  • simple
  • low or no cost
  • require minimal or no action from potential beneficiaries

42
Q

Cost–Effectiveness Analysis

A

compares the cost of a program to the size of its outcomes
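Because outcomes stay in their natural units rather than being converted to dollars, the comparison reduces to cost per unit of outcome. A minimal sketch with hypothetical program figures:

```python
def cost_effectiveness(total_cost, outcome_units):
    """Cost per unit of outcome (e.g., dollars per client who meets the goal).

    Unlike cost-benefit analysis, the outcome is not monetized.
    """
    return total_cost / outcome_units

# Two hypothetical programs pursuing the same goal:
program_a = cost_effectiveness(total_cost=50_000, outcome_units=200)  # 250.0 per unit
program_b = cost_effectiveness(total_cost=80_000, outcome_units=400)  # 200.0 per unit
# Program B achieves each unit of outcome more cheaply, even though
# its total cost is higher.
```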
43
Q

Information Utilization

A

stakeholders use evaluation information to make decisions about the continuation, termination, or modification of the program

44
Q

Instrumental utilization

A

using evaluation results directly in making decisions or solving problems

45
Q

Conceptual utilization

A

evaluation information influences a policy maker’s thinking about an issue
- even if it doesn’t have a direct influence on decisions about the issue

46
Q

persuasive utilization

A
  • convince others to support a political position
  • defend a political position from attack

47
Q

Criteria for Research Utilization

A
  1. RELEVANCE
    - to all stakeholders
  2. TRUTH
    - agrees with other information
    - meets the user’s expectations
    - perceived validity
  3. UTILITY
    - IVs are policy variables
    - DVs are high-utility outcomes
    - provides information that can aid in program development

48
Q

The Political Context

A
  1. Ritual Evaluations
    - no one has any intention of applying their results to the program
    - conducted for persuasive purposes
  2. Stakeholder Interests
    - interpret results in ways favorable to their interests
    - downplay unfavorable information
    - use selective citation of findings

49
Q

improve relationships between stakeholders and evaluators

A
  1. involve stakeholders early
  2. ensure stakeholders are clear about their role
  3. assess stakeholders’ comfort level with the evaluation process
  4. address the benefits of the evaluation
  5. anticipate sources of potential conflict
  6. help stakeholders anticipate implementation and outcome problems
50
Q

Measuring Change

A
  1. Difference Scores
  2. Reliable Change Index
    - RCI = difference score / standard error of the difference
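The card gives the index only as Diff/SE. A minimal sketch following the common Jacobson–Truax formulation, in which the standard error of the difference is derived from the pretest SD and the measure's reliability (that derivation is an assumption here; the card specifies only the ratio):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Reliable Change Index (Jacobson-Truax formulation).

    RCI = (post - pre) / SE_diff, where
      SE_m    = sd_pre * sqrt(1 - reliability)  # standard error of measurement
      SE_diff = sqrt(2) * SE_m                  # standard error of the difference
    |RCI| > 1.96 suggests change beyond measurement error (p < .05).
    """
    se_m = sd_pre * math.sqrt(1.0 - reliability)
    se_diff = math.sqrt(2.0) * se_m
    return (post - pre) / se_diff
```

For example, a symptom scale (where lower is better) dropping from 20 to 10, with a pretest SD of 5 and reliability of .84, yields an RCI of about -3.54, well past the 1.96 threshold.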