Evaluation Designs Flashcards

(54 cards)

1
Q

What are the 3 main stages of evaluation?

A

Formative

Process

Outcome

2
Q

Describe the formative stage of an evaluation

A

Happens before the intervention being evaluated is implemented.

Tests the acceptability + feasibility of the intervention.

Mainly qualitative, e.g. focus groups and in-depth interviews

3
Q

Describe the process stage of an evaluation

A

Happens whilst the intervention is underway.

Measures how the intervention was delivered + received.

Mixed quantitative and qualitative

4
Q

Describe the outcome stage of an evaluation

A

Measures whether the intervention has achieved its objectives.

Mainly quantitative

5
Q

What is the main purpose of an evaluation design?

A

To be as confident as possible that any observed changes were caused by the intervention, rather than by chance or other unknown factors.

6
Q

List the criteria for inferring causality

A

Cause must precede the effect

Plausibility

Strength of the association

Dose-response relationship

Reversibility

7
Q

Criteria for inferring causality

How is the strength of the association measured?

A

Effect size

or

Relative risk
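
As a worked illustration (the counts below are hypothetical, not from the source), relative risk is the risk of the outcome in the intervention group divided by the risk in the control group:

```python
# Hypothetical 2x2 counts for illustration only
a, b = 30, 70   # intervention group: 30 with the outcome, 70 without
c, d = 15, 85   # control group: 15 with the outcome, 85 without

risk_intervention = a / (a + b)            # 0.30
risk_control = c / (c + d)                 # 0.15
relative_risk = risk_intervention / risk_control
print(relative_risk)                       # 2.0 - the further from 1, the stronger the association
```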

8
Q

Criteria for inferring causality

What is meant by the dose-response relationship?

A

Occurs when changes in the level of a possible cause are associated with changes in the prevalence or incidence of the effect.

9
Q

Criteria for inferring causality

What is meant by reversibility?

A

When the removal of the possible cause results in a return to baseline for the outcome.

10
Q

What does high internal validity mean?

A

High internal validity means that the differences observed between the groups are attributable to the intervention tested in the trial.

11
Q

Define external validity

A

The extent to which the results of an evaluation of an intervention can be generalised to the target or general population.

12
Q

What are the types of evaluation design?

A

Experimental - Randomly assigned controls or comparison groups

Quasi-Experimental - Not randomly assigned controls or comparison groups

Non-experimental - No comparison or control group

13
Q

Strengths to experimental evaluation design

A

Can infer causality with highest degree of confidence

14
Q

Weaknesses to experimental evaluation design

A

Most resource intensive of the evaluation designs

Requires ensuring that extraneous factors are minimal

Can sometimes be challenging to generalise to the “real world”

15
Q

Strengths to the quasi-experimental evaluation design

A

Can be used when unable to randomise a control group but still allows comparison across groups +/or time

16
Q

Weaknesses to the quasi-experimental evaluation design

A

Differences between the comparison groups may confound the results

Group selection is critical

Moderate confidence in inferring causality

17
Q

Strengths to the non-experimental evaluation design

A

Simple

Used when baseline data +/or comparison groups are not available

Good for a descriptive study

May require fewer resources

18
Q

Weakness to the non-experimental evaluation design

A

Minimal ability to infer causality

19
Q

What are the types of RCT (Experimental design)?

A

Randomised cross-over trials

Parallel randomised trials

20
Q

What is the purpose of random assignment?

A

To best ensure the intervention is the only difference between the 2 groups.

To ensure any factors influencing the outcome are evenly distributed between the groups.
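
A minimal sketch of simple random assignment (participant IDs and group sizes are made up for illustration):

```python
import random

# 20 hypothetical participant IDs
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)               # ordering is now determined by chance alone
half = len(participants) // 2
intervention_group = participants[:half]   # first half -> intervention
control_group = participants[half:]        # second half -> control
# In expectation, known and unknown factors are evenly distributed between the two groups.
```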

21
Q

List the main threats to internal validity in RCT (Experimental designs)

A

Selection bias

Performance bias

Detection bias

Attrition bias

Random Error

22
Q

Define selection bias

A

When individuals in the two groups differ systematically on a factor that may affect the outcome, thereby leading to a systematic error in the results.

23
Q

What can decrease selection bias?

A

Randomisation and a matched control group.

24
Q

Define performance bias

A

Occurs if there’s insufficient adherence to the study protocol by researchers or participants.

e.g. researchers may not deliver the intervention consistently to all participants, and participants may differ in how they adhere to the intervention.

25
Q

Define detection bias

A

Researchers may administer outcome measures differently between the groups, and participants receiving an intervention they like may over-report changes in behaviour.

26
Q

How can performance and detection bias be avoided?

A

By blinding researchers +/or participants to group allocation

27
Q

Define attrition bias

A

Systematic differences in the number of drop-outs from the study between the intervention and control groups.

28
Q

What can attrition bias lead to?

A

Systematic differences between the groups at follow-up if drop-out is not balanced.

29
Q

How can attrition bias be managed?

A

By intention-to-treat analysis methods

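A rough sketch of the intention-to-treat idea, assuming a pandas DataFrame with an assigned arm, an adherence flag and an outcome column (all names and values are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "assigned_arm": ["intervention", "intervention", "control", "control"],
    "adhered":      [True, False, True, True],   # one intervention participant did not adhere
    "outcome":      [4.2, 3.1, 2.8, 3.0],
})

# Intention-to-treat: analyse everyone in the arm they were randomised to,
# regardless of adherence (drop-outs would need imputation or sensitivity analyses).
itt_means = df.groupby("assigned_arm")["outcome"].mean()

# A per-protocol analysis, by contrast, would keep only adherent participants
pp_means = df[df["adhered"]].groupby("assigned_arm")["outcome"].mean()
```
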
30
Q

Advantages to RCT (Experimental design)

A

Can be most confident that any observed changes can be attributed to the intervention and not any other factors.

Randomisation of participants to the groups + concealment of their allocation ensure that selection bias and confounding or unknown variables are minimised.

31
Q

Which experimental design is regarded as having high internal validity?

A

RCT

32
Q

Disadvantages to RCT (Experimental design)

A

Expensive

Time consuming

Can have high drop-out rates if the intervention has undesirable side-effects or there is little incentive to stay in the control arm

Ethical considerations may mean the research question can't be investigated using an RCT

Prior knowledge is required for the sample size calculation

Can have issues with generalisability (participants volunteering to participate may not be representative of the population being studied) - low external validity

33
Q

Cluster RCT (Experimental Evaluation design option)

A

The unit of randomisation is not individuals but clusters of individuals in naturally occurring groups.

34
Q

Is there randomisation in a cluster RCT?

A

Yes - clusters are randomly allocated to the intervention or control group

35
Q

When are cluster RCTs mainly used?

A

When the target of the intervention is the cluster

OR

When it's not feasible to prevent contamination in individual RCTs

36
Q

Advantages to Cluster RCT

A

Evaluates the real-world effectiveness of an intervention as opposed to efficacy.

Provides an alternative methodology for assessing the effectiveness of interventions in settings where randomisation at the individual level is inappropriate or impossible.

37
Q

Disadvantages to cluster RCT

A

Complex + expensive

Requires a larger number of people than individual RCT designs, because people within clusters may be more similar to each other than would be expected by chance.

Getting balanced groups is more difficult - this can decrease internal validity

Analysis is complex

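The inflation in required sample size is often expressed through the design effect, DEFF = 1 + (m - 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient; a rough sketch with hypothetical numbers:

```python
# All values are hypothetical, for illustration only
n_individual_rct = 400    # sample size a comparable individually randomised trial would need
avg_cluster_size = 20     # m: average number of participants per cluster
icc = 0.05                # intracluster correlation coefficient

design_effect = 1 + (avg_cluster_size - 1) * icc           # 1 + 19 * 0.05 = 1.95
n_cluster_rct = n_individual_rct * design_effect           # about 780 participants
clusters_per_arm = n_cluster_rct / (2 * avg_cluster_size)  # about 19.5 clusters per arm
print(design_effect, n_cluster_rct, clusters_per_arm)
```
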
38
Q

When can quasi-experimental designs be used?

A

When random assignment is NOT possible

39
Q

Quasi-experimental designs: Controlled before-and-after intervention design

A

Same layout as an RCT but NO random assignment to groups

40
Q

Quasi-experimental designs: What could the controlled before-and-after intervention design be at risk from?

A

Selection bias

41
Q

What statistical methods are used for quasi-experimental designs?

A

Difference-in-difference analysis

Regression analysis

42
Q

What is difference-in-difference analysis used for?

A

To compare changes before and after the program for individuals in the program + control groups

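A minimal difference-in-difference sketch, assuming long-format data with a group indicator, a pre/post indicator and an outcome, analysed with pandas and statsmodels (column names and values are hypothetical); the coefficient on the interaction term is the difference-in-difference estimate:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "treated": [0, 0, 0, 0, 1, 1, 1, 1],   # 1 = program group, 0 = comparison group
    "post":    [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = measured after the program started
    "outcome": [10, 11, 9, 10, 10, 15, 11, 16],
})

# outcome ~ treated + post + treated:post; the interaction is the change in the program
# group over and above the change seen in the comparison group
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])
```
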
43
Q

What is regression analysis used for?

A

To address the issue of confounding variables by controlling for differences at baseline.

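A sketch of regression adjustment for baseline differences, under the same hypothetical pandas/statsmodels setup as above: the follow-up outcome is regressed on group membership while controlling for the baseline value (and any other measured confounders):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":            [1, 1, 1, 0, 0, 0],          # 1 = program, 0 = comparison (hypothetical)
    "outcome_baseline": [5.0, 6.1, 5.5, 6.0, 5.2, 5.8],
    "outcome_followup": [7.2, 8.0, 7.5, 6.3, 5.6, 6.1],
    "age":              [34, 41, 29, 38, 45, 31],    # an example measured confounder
})

# Adjusting for the baseline outcome (and age) controls for pre-existing group differences
model = smf.ols("outcome_followup ~ group + outcome_baseline + age", data=df).fit()
print(model.params["group"])   # adjusted estimate of the program effect
```
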
44
Q

Advantages to Quasi-Experimental Designs

A

Provides some assurance that outcomes are actually the results of the program

Most practical option for conducting outcome evaluation in community interventions.

Using pre-existing or self-selected groups avoids the additional steps involved with randomisation.

Overcomes potential ethical concerns involved in withholding/delaying treatment.

Good for when resources for evaluation are limited

45
Q

Disadvantages to quasi-experimental designs

A

Could demand more time

Requires access to at least 2 similar groups

Without randomisation, study groups may differ in important ways that account for some of the group differences in the outcomes after the intervention.

Selection bias

Misclassification of outcome + confounding

46
Q

What type of experiment does the interrupted time series design come under?

A

Quasi-Experimental

47
Q

Which experimental design is best for overcoming the problems of secular trends?

A

Interrupted time series design

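Interrupted time series data are commonly analysed with segmented regression; a minimal sketch using hypothetical monthly data and statsmodels, where the model separates the pre-existing secular trend from the level and slope changes after the intervention:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# 24 hypothetical monthly observations; the intervention starts at month 13
rng = np.random.default_rng(0)
df = pd.DataFrame({"month": range(1, 25)})
df["post"] = (df["month"] >= 13).astype(int)          # level (step) change indicator
df["months_post"] = (df["month"] - 12).clip(lower=0)  # slope change after the intervention
df["rate"] = 50 - 0.5 * df["month"] - 3 * df["post"] + rng.normal(0, 1, len(df))  # made-up series

# "month" captures the secular trend, "post" the step change, "months_post" the trend change
model = smf.ols("rate ~ month + post + months_post", data=df).fit()
print(model.params)
```
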
48
Q

Advantages of an interrupted time series design

A

Can detect whether program effects are short-term or long-term

A series of tests before the intervention can eliminate the need for a control group + can be used to project expected results

Can be used if you only have 1 study site to conduct the evaluation

Can detect secular trends

49
Q

Disadvantages of an interrupted time series design

A

Problem of confounding

Changes in instruments during the series of measurements

Loss or change of cases can cause changes in group composition

50
Q

How can non-experimental designs be strengthened?

A

By constructing a plausibility argument + controlling for confounding variables

51
Q

When do you tend to use the before + after (pre-post) non-experimental design?

A

When you don't have a comparison or control group.

52
Q

Advantages to the before + after (pre-post) non-experimental design

A

Simple

Controls for participants' prior knowledge/skills

53
Q

Disadvantages to the before + after (pre-post) non-experimental design

A

Can't account for non-program influences on outcomes

Causal attribution not possible

Can't detect small but important changes

Can't rule out secular trends

Subject to selection bias

54
Q

What may happen to the control group in a quasi-experimental design?

A

May receive a different intervention altogether

May receive selected components of the intervention being tested

May use a wait-list control: those in the control group don't receive anything during the study period but will eventually receive the intervention, or a pared-down version, once the final analysis of the trial has been undertaken.