Evaluation: People and programs Flashcards

1
Q

Person evaluation (individual level)

A

Recognise people’s achievements

Encourage future performance

2
Q

Program evaluation (systems level)

A

Identify how a policy or intervention is working

Identify areas for improvement/refinement

Decisions to continue/abandon program

3
Q

Person evaluation cycle

A
  1. Career planning
  2. Planning Proposal: Concrete goals and expected outcomes
  3. Performance agreement: Agree on performance indicators, targets, timeframe
  4. Performance evaluation: Compare indicators with targets
  5. Feedback discussion: Revise and update career goals
4
Q

QUT’s performance and evaluation process: to be completed at the beginning of the cycle

A
  1. Section 1: Career development planning: where staff may indicate their long-term academic career goals
  2. Section 2: Planning proposal: where staff initially draft, then finalise, their goals for the next twelve months under each of the three academic areas of achievement, and identify the support and resources they may require to achieve these goals
  3. Section 3: Performance plan agreement: where the staff member and the supervisor agree on the outcomes of the planning discussion and the subsequent performance plan
5
Q

QUT’s performance and evaluation process: to be completed at the end of the 12-month cycle

A
  1. Review of performance: At the end of the 12-month PPR-AS cycle, the supervisor documents and provides performance feedback to the staff member, and the staff member makes a self-assessment.
  2. Confirmation of performance feedback discussion: Both the supervisor and the staff member sign off on the PPR discussion; the staff member’s signature denotes participation in the process and acknowledgement of the supervisor’s comments.
6
Q

Why do we evaluate people?

A
Assign rewards (e.g., bonuses)
Identify where help/training is needed
Identify when people can take on greater challenges and responsibilities (e.g., promotion)
Reinforce good behaviours
Extinguish bad behaviours
7
Q

What do we evaluate? Broad level

A

Performance
task-related behaviour

Effectiveness
evaluation of standard of performance

Productivity
cost of achieving level of effectiveness
time, money, burnout

8
Q

What do we evaluate? Detailed level

Bartram’s (2005) “Great Eight” competency model

A
  1. Leading/deciding
  2. Supporting/cooperating
  3. Interacting/presenting
  4. Analysing/interpreting
  5. Creating/conceptualising
  6. Organising/executing
  7. Adapting/coping
  8. Enterprising/performing
9
Q

Who evaluates?

A

360-degree feedback

  • Team members
  • Supervisor
  • Other peers
  • Clients
  • Subordinates
10
Q

How do we evaluate? (Landy & Conte, 2013)

Objective measures

A

Quantitative measure of production, e.g., sales, outputs
Academics:
Research (papers published; article views; number of citations, h-index, i-10 index…)
Teaching (unit ratings; number of units taught)
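
The research metrics above have simple definitions: the h-index is the largest h such that the author has h papers each cited at least h times, and the i-10 index counts papers with at least 10 citations. A minimal Python sketch (the citation counts are hypothetical):

```python
def h_index(citations):
    """Largest h such that h papers each have >= h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i          # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

papers = [25, 12, 10, 6, 3, 0]   # hypothetical citation counts
print(h_index(papers))           # 4 (four papers with >= 4 citations each)
print(i10_index(papers))         # 3
```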

11
Q

How do we evaluate? (Landy & Conte, 2013) Judgemental measures

A

Allows consideration of contextual factors not captured by objective measures
e.g., supervisor’s overall impression
ratings are made relative to other employees, and are influenced by the perceived difficulty of the job or other contextual information

12
Q

Why abandon annual performance reviews? (Rock & Jones, 2015)
Measurement factors

A
annual reviews are disconnected from timescales of work
multiple factors (and people) contribute to one’s performance
13
Q

Why abandon annual performance reviews? (Rock & Jones, 2015) Psychological factors

A

Competition vs. collaboration
performance rankings set up competitive mindsets

Doesn’t satisfy needs for learning and growth
these needs are fostered by more immediate feedback
more frequent communication = more informative feedback
shifts from debating performance ratings to discussing development opportunities

14
Q

Criteria: Evaluations should be:

A

Developmental
Tied to organisational objectives
Specific
Sufficiently frequent
according to a person’s experience (less experience, more frequent)
task timeline (shorter task timelines, more frequent)
timely for decision-making

15
Q

How will you be able to tell if your proposed program or intervention works?
PROGRAM EVALUATION

A

Circle

  1. Engage stakeholders: Understanding key issues
  2. Describe program: Goals and purpose, expected effects
  3. Design evaluation: Methods, users, agreements
  4. Gather evidence: Indicators, sources, quality, quantity, practicality
  5. Justify conclusions: Analyses, interpretation, recommendations
  6. Implement lessons: Tweak or revise program
16
Q

Program Evaluation: Standards: Utility

A

Information will serve users’ needs:

  • Measures are directly related to program goals
  • It is clear what the findings can show
17
Q

Program Evaluation: Standards: Feasibility

A

Information is practical to collect:

  • Use of resources in proportion to the program/issue
  • Methods require skills within organisation’s capacities
18
Q

Program Evaluation: Standards: Propriety

A

Information is obtained ethically:

  • Respect the rights and welfare of those involved
  • Adheres to relevant laws and workplace policies
19
Q

Program Evaluation: Standards: Accuracy

A

Information is accurate:

  • Reliability: indicators produce consistent responses
  • Validity: Indicators fairly represent the construct/idea
20
Q

Describe program

A
What is the main purpose of your specific program/intervention? For example…
change behaviours directly
change attitudes 
change social norms
change public awareness

Evaluation needs to be linked to your purpose

21
Q

Design evaluation: Types of evaluation (Owen & Rogers, 1999): Proactive

A
  • Assessing the need for a program
  • If a program is required, assessing how it has been done elsewhere

22
Q

Design evaluation: Types of evaluation (Owen & Rogers, 1999): Clarificative

A

Understand the rationale and plausibility of a proposed program prior to commitment

23
Q

Design evaluation: Types of evaluation (Owen & Rogers, 1999): Interactive

A
  • Inform stakeholders about issues and concerns with a planned program
  • Use knowledge and expertise to improve a planned program
24
Q

Design evaluation: Types of evaluation (Owen & Rogers, 1999): Monitoring

A

Assessing program processes

Is the implementation of the program on track?

25
Q

Design evaluation: Types of evaluation (Owen & Rogers, 1999): Impact

A

Identify the program/intervention outcomes

Inform decisions about future programs, e.g. whether to use approach in other settings

26
Q

Ideal features of evaluation measures (Waddell et al., 2011)

A

  1. Derived from the theory driving the intervention
  2. Includes measures of…
     the intervention itself (whether it was implemented correctly)
     its impact (consequences)
  3. Includes both immediate and longer-term consequences
  4. Valid
     measures are clearly related to the theoretical construct
  5. Reliable
     e.g., survey items are rated consistently (highly correlated with each other)
  6. Uses multiple methods
     to make up for weaknesses of any single type of measure

27
Q

Gather evidence

A

Who? Participants
What? Materials
How and when? Procedure

28
Q

Gather evidence types of measures

A

Types of measures

  • Behaviours
  • Behavioural intentions
  • Attitudes
  • Perceptions
  • Knowledge
  • Social norms
      - Descriptive: reports of observed behaviours (what people actually do)
      - Injunctive: shared expectations about what people should/should not do
  • Awareness

Procedures:

  • Field studies
  • Experiments
  • Surveys
  • Interviews
  • Case studies
29
Q

Program evaluation>Gather evidence> Behaviour (Field Study)

A

Keizer, Lindenberg & Steg (2008): Descriptive norms and littering behaviour (field experiment)
Manipulated descriptive norms for antisocial behaviour (graffiti on wall)
Placed annoying flyer on all bikes (with no bin in area)
Observed how often people littered the flyer when they collected their bike

30
Q

Behaviour (Survey): Bain et al (2016): pro-environmental financial behaviour

A

You will be entered into a prize draw for $150

We would like to know if you would allow us to donate some or all of this prize (if you win) to a pro-environmental organisation

Please nominate an amount ($0 to $150) for us to donate on your behalf (we will provide evidence we did so)

31
Q

Behavioural Intentions/Attitudes/Perceptions

A

A hierarchy for selecting intention, attitude and norm measures

USE existing scales when suitable

ADAPT existing scales when possible

DEVELOP new scales when there are no suitable/adaptable existing scales

32
Q

Behavioural Intentions/Attitudes/Perceptions: Issues in creating items

A

Issues in creating items…
leading or biased questions
e.g., “How much do you dislike getting up early in the morning?”

double-barrelled questions (unless the topics are very closely aligned)
e.g., “Please rate the extent to which you enjoy lectures and tutorials”

use scales likely to match the level of detail people are likely to use
e.g., 5-, 7-, 9-point scales; percentages

33
Q

Attitudes and ways of measuring them

A

Attitudes
Direction (positive, negative)
Intensity (weak/strong)

Different ways of measuring attitudes
Paired comparisons
Guttman scaling
Semantic differentials
Likert scale
Implicit association test (IAT)
34
Q

Semantic differentials

A

Developed by Osgood et al. (1957)
historically popular way of measuring attitudes

Uses predetermined dimensions

e.g., Rate the group below on the following scales, by placing an ‘X’

                 Martians
Beautiful   ___ : ___ : ___ : ___ : ___ : ___ : ___   Ugly
Pleasant    ___ : ___ : ___ : ___ : ___ : ___ : ___   Unpleasant
Bad         ___ : ___ : ___ : ___ : ___ : ___ : ___   Good

Limitation:
well suited to evaluating people/groups, but can be more difficult to apply to some objects and issues, e.g., gun control
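
Scoring a semantic differential is mechanical: each mark is a position on the 1–7 line, and items whose positive pole sits on the left anchor (e.g., Beautiful–Ugly) are reverse-coded so that all items score in the same direction as Bad–Good (positive pole on the right). A minimal sketch, assuming responses are recorded as positions counted from the left anchor (the item list and data are hypothetical):

```python
# Each item: (left anchor, right anchor, side holding the positive pole)
items = [("Beautiful", "Ugly", "left"),
         ("Pleasant", "Unpleasant", "left"),
         ("Bad", "Good", "right")]

def score_semantic_differential(responses, items, points=7):
    """Mean favourability on a 1..points scale.
    responses[i] is the marked position counted from the left anchor."""
    scores = []
    for r, (_left, _right, pole) in zip(responses, items):
        # reverse-code items whose positive pole is the left anchor
        scores.append((points + 1 - r) if pole == "left" else r)
    return sum(scores) / len(scores)

print(score_semantic_differential([2, 3, 6], items))
# reverse-coded item scores: 6, 5, 6 → mean ≈ 5.67
```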

35
Q

Likert scales

A

Construct items to reflect favourable or unfavourable views towards attitude object
could be beliefs, behavioural reports, emotional reactions

Rate all items on a scale
such as “Strongly agree - Strongly disagree”

Sum scores to obtain overall rating of attitude
favourable/unfavourable

Limitations:
Lack of depth: we don’t know the basis of the agreement ratings
why are people agreeing or disagreeing?
Reliability: need to establish that items are measuring the same things
if developing scales, try to have at least 5 items for each construct
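
The “sum scores” and reliability points above can be shown numerically: total each respondent’s item ratings, and check that items measure the same thing with an internal-consistency statistic such as Cronbach’s alpha (a common choice, not named in the source). A sketch with hypothetical 1–5 agree/disagree ratings:

```python
def likert_total(row):
    """Summed attitude score across items (higher = more favourable)."""
    return sum(row)

def cronbach_alpha(data):
    """data: one list of item ratings per respondent."""
    k = len(data[0])                       # number of items
    def var(xs):                           # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in data]) for j in range(k)]
    total_var = var([likert_total(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: 4 respondents x 5 items on a 1-5 scale
data = [[5, 4, 5, 4, 5],
        [2, 2, 1, 2, 2],
        [4, 4, 4, 3, 4],
        [3, 2, 3, 3, 2]]
totals = [likert_total(r) for r in data]   # [23, 9, 19, 13]
alpha = cronbach_alpha(data)               # ~0.96: items cohere well
```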

36
Q

Norms (survey)

A

Ajzen (2002): Measuring norms in the Theory of Planned Behaviour

Direct measures (2 aspects)

descriptive (what others do)
many people who are important to me do…
people in my life whose opinions I value do…

injunctive (what others expect of you)
many people who are important to me think that I should…
it is expected of me that I will…
people in my life whose opinions I value would approve of…

Belief-based measures (2 aspects)

strength (measurement similar to injunctive)
My family thinks that I should…
usually tied to distinct reference groups

motivation to comply
E.g., generally, how much do you want to do what your family thinks you should do?

37
Q

Knowledge (survey)

A

Knowledge
Use experts to identify content knowledge
“quiz” people about this knowledge
Where relevant ask their confidence in their knowledge

38
Q

Awareness (interview/survey)

A

Robb et al. (2009): cancer awareness
face to face interview
with computer assistance and survey questions

39
Q

Multi-method

A

Field studies are a common form of multi-method research
a general term for methods emphasising observing and interacting with people in natural contexts

These approaches include
observation – observe/record
could be observing behaviour directly, or records of behaviour (eg. transcripts of conversations)
inquiry – interview people in relevant situations
gain knowledge about context as well as people
breach – record reactions to unexpected actions
e.g., ethnomethodology – reactions to norm-breaking

40
Q

Program evaluation

A

Circle:

  1. Engage stakeholders
  2. Describe program: Goals and purpose, expected effects
  3. Design evaluation: Methods, users, agreements
  4. Gather evidence: Indicators, sources, quality, quantity, practicality
  5. Justify conclusions: Analyses, interpretation, recommendations
  6. Implement lessons: Tweak or revise program