Lesson 11: Training Evaluation Flashcards

1
Q

What is training evaluation?

A

Training evaluation is a process concerned with assessing the value of training programs to employees and organizations, using various techniques to gather objective and subjective information before, during, and/or after training.

2
Q

What is the training evaluation continuum?

A

The training evaluation continuum ranges from simple evaluations focusing on trainee reactions to more elaborate procedures that assess learning, motivation, confidence, and the work environment’s support for new skills.

3
Q

Why do organizations conduct training evaluations?

A

Training evaluations help fulfill managerial responsibility to improve training, identify useful training programs and trainees, determine cost benefits, ascertain program results, diagnose strengths and weaknesses, and justify the value and credibility of the training function.

As of the 2000s, about 50% of organizations conduct evaluations, with most focusing on easily measured reactions and learning. Organizations with stronger learning cultures conduct more evaluations and use more sophisticated techniques.

4
Q

What is the paradox in training evaluations?

A

The paradox is that while improving individual and organizational performance is the central objective of training for organizations, these aspects are the least frequently evaluated.

5
Q

What are the two categories of barriers to training evaluation?

A

The two categories of barriers to training evaluation are pragmatic and political barriers.

6
Q

What are the main pragmatic barriers to training evaluation?

A

Pragmatic barriers include the perceived complexity of evaluation models and techniques, the time and effort required for data gathering and analysis, and the costs associated with evaluation.

7
Q

How has modern information technology affected training evaluation?

A

Modern information technology, such as web-based questionnaires and computerized work-performance data, has made it easier and cheaper than ever before to conduct high-level evaluations.

8
Q

What are the main political barriers to training evaluation?

A

Political barriers include concerns about conflict of interest, the fear of revealing ineffective training programs or approaches, and the lack of accountability for training results among trainees, their managers, and training program administrators.

9
Q

How can the issue of accountability affect training evaluation?

A

When trainees, their managers, and those who develop and administer training programs are more accountable for results, training will serve organizational success more clearly.

However, the current lack of accountability may lead to good programs being dropped and poor ones perpetuated, which is a disservice to the training function and the organization.

10
Q

What are the different types of training evaluations based on the data gathered and analyzed?

A

The different types of training evaluations based on the data gathered and analyzed are:

Trainee perceptions evaluation
Behavioral data evaluation
Evaluation of psychological states
Evaluation of work environment

11
Q

What is the focus of most training evaluations?

A

The focus of most training evaluations is on trainee perceptions.

12
Q

What is the purpose of more complete evaluations?

A

The purpose of more complete evaluations is to assess the extent of trainee learning and the post-training behaviors of trainees.

13
Q

What are the psychological states that affect learning and behavior change?

A

The psychological states that affect learning and behavior change are:

Affective state
Cognitive state
Skills-based state

14
Q

How is the work environment evaluated in training evaluations?

A

The work environment is evaluated in training evaluations by assessing the transfer climate and learning cultures. Understanding the organization’s culture, climate, and policies can strongly affect training choices and effectiveness.

15
Q

What factors influence training success?

A

The factors that influence training success are:

Opportunities for on-the-job practice of new skills

Level of support provided by others to new learners

Alignment of training courses with the firm’s strategic vision

Improvement in the performance of participants whose remuneration depends on performance.

16
Q

What is the difference between formative and summative evaluations?

A

The difference between formative and summative evaluations is:

Formative evaluations are designed to assess the value of the training materials and processes with the goal of identifying improvements to the instructional experience.

Summative evaluations are designed to provide data about a training program’s worthiness or effectiveness.

17
Q

Who are formative evaluations of special interest to?

A

Formative evaluations are of special interest to training designers and instructors.

18
Q

Who are summative evaluations of greatest interest to?

A

Summative evaluations are of greatest interest to senior management.

19
Q

What is the difference between descriptive and causal evaluations?

A

The difference between descriptive and causal evaluations is:

Descriptive evaluations provide information describing trainees once they have completed the program.
Causal evaluations are used to determine whether the training caused the post-training learning and/or behaviors.

20
Q

What kind of data gathering and statistical procedures do causal evaluations require?

A

Causal evaluations require more complex data gathering and statistical procedures.

21
Q

Are causal evaluations frequently used?

A

Causal evaluations are infrequently used.

22
Q

What is the Kirkpatrick model of training evaluation?

A

The Kirkpatrick model of training evaluation is a hierarchical model that identifies four levels to assess training: reactions, learning, behavior, and results.

23
Q

What does the Kirkpatrick model suggest about the relationship between the four levels?

A

The Kirkpatrick model suggests that each level has a causal link to the next level. Success at a particular level causes success at the next one.

24
Q

What is the fifth level added to the Kirkpatrick model in a more recent articulation?

A

The fifth level added to the Kirkpatrick model in a more recent articulation is return on investment (ROI).
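As a worked illustration, ROI at this fifth level is usually expressed as net program benefits divided by program costs. A minimal sketch with hypothetical figures (the dollar amounts are assumptions, not from the lesson):

```python
def training_roi(benefits: float, costs: float) -> float:
    """Return training ROI as a percentage: net benefits over costs."""
    return (benefits - costs) / costs * 100

# Hypothetical program: $120,000 in measured benefits, $80,000 in costs.
print(training_roi(120_000, 80_000))  # → 50.0
```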

25
Q

What is the purpose of measuring trainee reactions in the Kirkpatrick model?

A

The purpose of measuring trainee reactions in the Kirkpatrick model is to assess the value of the training materials and processes with the key goal of identifying improvements to the instructional experience.

26
Q

What is the limitation of the Kirkpatrick model for formative evaluations?

A

The limitation of the Kirkpatrick model for formative evaluations is that the relationship between reactions, learning, and behavior is very small, so improving Level 1 (reactions) or Level 2 (learning) is unlikely to improve the impact of training at the behavior (transfer) level.

27
Q

What are some alternative evaluation models to the Kirkpatrick model?

A

Some alternative evaluation models to the Kirkpatrick model are the COMA model, the Decision-Based Evaluation model, and the Learning Transfer System Inventory.

28
Q

What is the COMA model of training evaluation?

A

The COMA model is a formative evaluation model that enhances the usefulness of training evaluation questionnaires by identifying and measuring variables that research has shown to be important for the transfer of training. These variables fall into four categories: cognitive, organizational environment, motivational, and attitudinal variables.

29
Q

What are the four categories of variables in the COMA model?

A

The four categories of variables in the COMA model are cognitive variables, organizational environment variables, motivational variables, and attitudinal variables.

30
Q

What are cognitive variables in the COMA model?

A

Cognitive variables in the COMA model refer to the level of learning that the trainee has gained from a training program. Both declarative and procedural learning might be measured, but the latter is more important because it is more strongly related to transfer than the former.

31
Q

What are organizational environment variables in the COMA model?

A

Organizational environment variables in the COMA model refer to a cluster of variables that are generated by the work environment and that impact transfer of training. These include the learning culture, the opportunity to practice, the degree of support that is expected, and the level of support actually provided to trainees once they return to the job.

32
Q

What are motivational variables in the COMA model?

A

Motivational variables in the COMA model refer to the desire to learn and to apply the learned skill on the job. COMA suggests that training motivation (measured at the onset of the program) and motivation to transfer (measured immediately after) both be measured.

33
Q

What is the purpose of using the COMA model for training evaluation?

A

The purpose of using the COMA model for training evaluation is to assess the degree to which trainees have mastered the skills, perceive the degree to which the organizational environment will support and help them apply the skills, are motivated to learn and to apply the skills on the job, and have developed attitudes and beliefs that allow them to feel capable of applying their newly acquired skills on the job.

34
Q

What are some limitations of the COMA model?

A

Some limitations of the COMA model are that it is relatively new, it is focused exclusively on an analysis of the factors that affect transfer, it is not well-suited for summative evaluation purposes, and different questionnaires must be constructed for different training programs.

35
Q

What is the Decision-Based Evaluation model?

A

The Decision-Based Evaluation (DBE) model is a training evaluation model developed by Kurt Kraiger that requires evaluators to select their evaluation techniques and variables based on the decisions needed.

It specifies three potential “targets” for the evaluation: trainee change, organizational payoff, and program improvement.

The model also suggests identifying the focus of the evaluation, which can include different variables depending on the target.

Finally, the appropriate data collection method is suggested based on the focus of the evaluation.

36
Q

How does DBE differ from Kirkpatrick’s and COMA models?

A

Unlike Kirkpatrick’s model and COMA, DBE allows for different variables to be measured depending on the goals of the evaluation. DBE is also more flexible and can be used for both formative and summative evaluations. DBE is the only training evaluation model that specifies key questions to guide evaluations, such as “What do we choose to evaluate?” and “How can we do so?”

37
Q

What are the potential “targets” for the evaluation in the DBE model?

A

The DBE model specifies three potential “targets” for the evaluation: trainee change, organizational payoff, and program improvement.

38
Q

What is the focus of the evaluation in the DBE model?

A

The focus of the evaluation in the DBE model can include different variables depending on the target of the evaluation. For example, the focus may be on assessing the level of trainee changes with respect to learning behaviors or psychological states such as motivation and self-efficacy.

39
Q

What is the Learning Transfer System Inventory (LTSI)?

A

The Learning Transfer System Inventory (LTSI) is a more generic approach to training evaluation proposed by Elwood Holton and colleagues. It aims to alleviate the constraint on training evaluation in organizations where specialized resources are not always available.

The LTSI is a questionnaire that assesses 16 variables important for the transfer of training, including all of the COMA dimensions plus additional ones such as learner readiness, resistance/openness to change, and opportunity to use learning.

The LTSI questionnaire contains 89 questions.

40
Q

How many questions are included in the LTSI questionnaire?

A

The LTSI questionnaire contains 89 questions.

41
Q

What scale do trainees use to answer the LTSI questions?

A

Trainees answer the LTSI questions using a five-point scale ranging from strongly agree to strongly disagree.

42
Q

What are some limitations of using the LTSI?

A

Limitations of the LTSI include that it is a proprietary instrument, requiring organizations to obtain permission from the authors to use it, administer all 89 questions, and provide data collected to the authors. Some organizations may also find it less applicable due to confidentiality or legal reasons, and its length may be an issue for those unable to administer a lengthy questionnaire.

43
Q

Main variables measured in training evaluation

A

Reactions
Learning
Behavior
Motivation
Self-Efficacy
Perceived and/or Anticipated Support
Organizational Perceptions
Organizational Results

44
Q

What are reaction measures in training evaluation?

A

Trainee opinions and attitudes about a training program are called reaction measures, which are the most common variables measured in evaluation studies.

45
Q

What are the main types of questions used to measure trainee reactions in training evaluation?

A

The main types of questions used to measure trainee reactions in training evaluation are Likert scales, open-ended questions, and personal/group interviews (focus groups).
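Scoring such Likert items typically reduces to averaging each item's ratings across trainees. A minimal sketch (the ratings and the three-item questionnaire are hypothetical):

```python
def score_reactions(responses):
    """Average per-item Likert ratings (1-5) across trainees.
    responses: one list of ratings per trainee, one rating per item."""
    return [round(sum(item) / len(item), 2) for item in zip(*responses)]

# Hypothetical ratings from four trainees on three reaction items.
print(score_reactions([[5, 4, 3], [4, 4, 2], [5, 5, 3], [4, 4, 4]]))
# → [4.5, 4.25, 3.0]
```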

46
Q

What are the two types of reaction measures, and which one is generally preferable?

A

The two types of reaction measures are affective and utility reaction measures. Utility reaction measures are generally preferable since they demonstrate some relationship to higher-level outcomes such as learning and behavior.

47
Q

What are the advantages of using reaction measures in training evaluation?

A

The advantages of using reaction measures in training evaluation are that they provide trainers with immediate feedback on their course, are easy to collect and analyze, and are easily understood by managers and employees.

48
Q

What are the limitations of using reaction measures in training evaluation?

A

The limitations of using reaction measures in training evaluation are that they cannot estimate transfer levels, and they may end up evaluating the performance of trainers rather than the value of the course itself.

49
Q

What is the importance of assessing learning in training evaluation?

A

It is important to assess learning in training evaluation as research has shown that participants anticipating a post-training test are more attentive and more motivated to learn the training material, and it also provides diagnostic information for trainers to improve their training program.

50
Q

What is the distinction between declarative and procedural learning in training evaluation?

A

Declarative learning refers to the acquisition of facts and information and is relatively easy to measure, while procedural learning involves the organization of facts and information into a smooth behavioral sequence, which is more difficult to measure but is significantly related to behaviors and transfer of training.

51
Q

What are some techniques for measuring procedural learning in training evaluation?

A

Procedural learning can be measured using simulations conducted in realistic situations, such as performance tests or work sample tests, or through interviews with task experts who demonstrate the proper actions and proper sequence of behaviors required.

52
Q

What are the limitations of measuring declarative learning in training evaluation?

A

While declarative learning is easy to measure, research has shown that it has a slight, if any, relationship to behaviors; therefore, it may not be a good indicator of a training program's worth.

53
Q

What is behavior in the context of training evaluation?

A

Behavior refers to the display of newly learned skills or competencies on the job, also known as transfer of training, and is considered the most important of all training effectiveness criteria.

54
Q

What are the three basic approaches for measuring behaviors?

A

The three basic approaches for measuring behaviors are self-reports, observations by others, and production indicators.

55
Q

Why are self-reports the most frequently used measures of behavior?

A

Self-reports are the most frequently used measures of behavior because they are the easiest and most practical measure to collect.

56
Q

What are the potential issues with the accuracy of self-report measures?

A

The accuracy of self-report measures can be problematic since people tend to be inaccurate in reporting their own behaviors, although self-reports might still be valid.

57
Q

Why is it important to zero in on specific behaviors when measuring behavior?

A

It is important to zero in on specific behaviors because measures of specific behaviors are more likely to be valid and accurate than general ones.

58
Q

What is a performance index?

A

A performance index, sometimes called an objective measure, is a type of behavior data that might be gathered in an evaluation of training effectiveness, such as sales performance drawn directly from company records.

59
Q

What is the recommended time lag for the assessment of behavior following a training program?

A

The time lag for the assessment of behavior can range from a few weeks to two years or more in the case of managerial skills. It is recommended that behavior be measured at several points following a training program in order to determine its long-term effects.

60
Q

What are the benefits of anticipating a post-training test for participants?

A

Participants are more attentive, motivated to learn, and attach more importance to the training.

61
Q

What are declarative learning and procedural learning?

A

Declarative learning refers to the acquisition of facts and information, while procedural learning involves organizing facts and information into smooth behavioral sequences.

62
Q

What are the differences between declarative learning tests and procedural learning tests?

A

Declarative learning tests measure the acquisition of facts and information, often through multiple choice or true/false questions. Procedural learning tests measure the understanding of a sequence of behaviors and are more complex to develop and use.

63
Q

What is the significance of declarative learning in training evaluation?

A

Declarative learning is easy to measure, but has a slight relationship to behaviors. It is the most frequently assessed learning measure in training evaluations.

64
Q

What are some examples of declarative learning tests that are objectively scored?

A

Examples of objectively scored declarative learning tests include multiple choice and true/false questions, where there is only one correct answer and no leeway for the test corrector.

65
Q

What is the significance of procedural learning in training evaluation?

A

Procedural learning is difficult to measure but is significantly related to behaviors and transfer of training, making it important in evaluating the effectiveness of training.

66
Q

What are some examples of procedural learning measures?

A

Examples of procedural learning measures include interviews with task experts, simulations in realistic situations, role plays, and practice sessions. These tests are usually called performance tests or work sample tests.

67
Q

What are the two types of motivation considered in the training context?

A

The two types of motivation are training motivation and motivation to transfer the skill on the job.

68
Q

What is the difference between training motivation and motivation to transfer?

A

Training motivation refers to the direction, intensity, and persistence of learning-directed behavior in training contexts, while motivation to transfer refers to the motivation to apply learned skills on the job after the training is completed.

69
Q

How can motivation to transfer be measured?

A

Motivation to transfer can be measured using expectancy theory, which involves measuring valence (attractiveness of transfer outcomes), instrumentalities (positive or negative consequences of transfer), and expectancies (probability that transfer will result in successful performance).
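Under expectancy theory, a motivational force score is often sketched as expectancy multiplied by the sum of instrumentality × valence products across outcomes. A hypothetical illustration (the 0-1 scales and ratings are assumptions, not from the lesson):

```python
def motivation_to_transfer(expectancy, outcomes):
    """Expectancy-theory force score: expectancy * sum(instrumentality * valence)
    over the outcomes a trainee associates with transferring the skill."""
    return expectancy * sum(i * v for i, v in outcomes)

# Hypothetical ratings: transfer is 80% likely to lead to successful performance;
# success is strongly tied to a valued raise, weakly to peer approval.
outcomes = [(0.9, 0.8), (0.3, 0.5)]  # (instrumentality, valence) pairs
print(round(motivation_to_transfer(0.8, outcomes), 3))  # → 0.696
```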

70
Q

What is self-efficacy in the context of training and development?

A

Self-efficacy refers to the beliefs that trainees hold about their ability to successfully perform the behaviors taught in a training program. It assesses a person’s confidence in engaging in specific behaviors or achieving specific goals.

71
Q

How can self-efficacy be measured in training evaluations?

A

Self-efficacy can be measured by asking trainees to rate the likelihood of obtaining a certain result and their confidence in obtaining that result, or by listing key behaviors and asking trainees to rate their confidence in displaying those behaviors on a confidence scale.

72
Q

How can perceived and/or anticipated support be measured in training evaluations?

A

These can be measured by designing specific questions that include the source of support (e.g., supervisor, co-workers, or the organization) and the support (perceived or anticipated) in applying the training content in general and/or in transferring specific aspects of the training program.

73
Q

How can questions be designed to measure perceived and/or anticipated support in training evaluations?

A

Questions can be designed to include the source of the support (e.g., supervisor, co-workers, or the organization) and the support (perceived or anticipated) in applying the training content in general and/or in transferring specific aspects of the training program.

74
Q

What is the significance of the relationship between expected support and transfer in Haccoun and Savard’s study?

A

The relationship between expected support and transfer was negative, meaning that trainees who expected more support than they actually received transferred significantly less. This emphasizes the importance of trainees having realistic expectations of support to facilitate better transfer of skills.

75
Q

What was the main finding of Haccoun and Savard’s study regarding trainees’ support expectations and transfer levels?

A

Trainees whose support expectations matched the support levels they actually received transferred more, while those who expected much support but received little transferred the least. This highlights the importance of aligning trainees’ expectations with actual support levels for better skill transfer.

76
Q

What is the transfer climate and how can it be measured?

A

Transfer climate refers to the training-specific characteristics of the work environment that encourage or discourage trainees from transferring their skills. It can be measured using a questionnaire developed by Janice Rouiller and Irwin Goldstein, which identifies eight sets of cues (goal cues, social cues, task and structural cues, positive feedback, negative feedback, punishment, no feedback, and self-control) that can trigger trainee reactions.

77
Q

What is a continuous learning culture and how does it affect training transfer?

A

A continuous learning culture refers to an organizational environment in which the acquisition and application of knowledge, skills, and behavior are supported through individual, task, and organizational factors. Research indicates that transfer levels are higher in organizations with a stronger learning culture.

78
Q

What is the difference between hard data and soft data in training evaluations?

A

Hard data are objective measures that fall into categories like quantity, quality, time, and costs, which are directly relevant to upper management and assess the bottom line.

Soft data are measures of beliefs, attitudes, and perceptions, usually involving judgments, observations, or perceptions of an outcome, and are important because they are linked to concrete results indirectly.

79
Q

What is return on expectations and how is it used in evaluating training programs?

A

Return on expectations is an alternative method for assessing the impact of training when it is difficult or impossible to adequately assess it directly with hard or soft data.

Stakeholders involved in training decide what they expect from the training, and sometime later, the course managers evaluate whether the performance results align with their expectations.

80
Q

What are the three comparisons involved in all training evaluations?

A

The three comparisons involved in all training evaluations are:

Trainee states relative to a predetermined criterion.
Trainee changes.
Trainees compared to non-trained people.

81
Q

What is the Post-Only Data Collection Design and when is it used?

A

The Post-Only Data Collection Design measures trainees on the relevant variable only after the course is completed. It is used when an organization needs to demonstrate that its employees have attained a predetermined level of proficiency, often for certification purposes.

82
Q

What is the Pre-Post Design and how does it assess training effectiveness?

A

The Pre-Post Design measures trainee attitudes, knowledge, skills, and/or job performance twice, once before (pre) and once after (post) the program is completed. Training effectiveness is inferred when the post-training scores are significantly higher than the pre-training ones, indicating trainee improvement in knowledge, skills, and/or performance.
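The pre-post comparison is typically tested with a paired t statistic on the trainees' gain scores. A minimal sketch using only the standard library (the test scores are hypothetical; a real evaluation would compare t against a table with n-1 degrees of freedom):

```python
from math import sqrt
from statistics import mean, stdev

def pre_post_effect(pre, post):
    """Mean gain and paired t statistic for a pre-post design:
    t = mean(diff) / (stdev(diff) / sqrt(n))."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return mean(diffs), t

# Hypothetical knowledge-test scores for six trainees.
pre  = [52, 60, 55, 48, 63, 57]
post = [68, 71, 63, 60, 70, 66]
gain, t_stat = pre_post_effect(pre, post)
print(gain, round(t_stat, 2))  # → 10.5 7.86
```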

83
Q

What is the time series design in training evaluation?

A

The time series design is a data collection approach that involves measuring trainee attitudes, knowledge, skills, and/or job performance at multiple time points before and after the training program. This design allows evaluators to assess the lasting effects of training and better understand the changes observed in trainees.

84
Q

What are the four alternative explanations to consider before concluding that a gain in knowledge was caused by a specific training program, using a pre-post design?

A

The four alternative explanations to consider are:

History or Time: Events unrelated to the training that may have influenced the results.
Maturation: Trainees’ natural growth and development over time, which may have contributed to their improved performance.
Testing: The experience of taking the pre-test itself may have influenced trainees’ performance on the post-test.
Mortality: Systematic differences between trainees who completed the program and those who dropped out, which may have affected the results.

85
Q

What is the Time Series Design in training evaluation?

A

The Time Series Design in training evaluation is an extension of the Pre-Post Design, where the same outcome measures are collected several times before and several times after training. This design helps assess the degree to which the improvement persists over time.

86
Q

How does the Time Series Design differ from the Pre-Post Design?

A

In contrast to the Time Series Design, the Pre-Post Design only measures the trainees twice, once before and once after the training program, assessing the change in knowledge, skills, or performance without considering the persistence of the improvement.

87
Q

What are experimental designs in training evaluation?

A

Experimental designs in training evaluation are causal evaluation designs that involve randomly dividing a group of employees into a trained group and a control group (untrained) and statistically comparing their outcomes on relevant post-training measures. Experimental designs aim to prove the effectiveness of the training by isolating its effects from other potential causes.

88
Q

What are quasi-experimental designs in training evaluation?

A

Quasi-experimental designs involve comparing a trained group to another group of employees doing similar jobs in similar circumstances without random assignment. These designs offer a lower level of certainty regarding the cause of differences between the two groups but can be more practical for organizations that find it difficult to fulfill the random assignment required in experimental designs.

89
Q

What is the Internal Referencing Strategy (IRS) in training evaluation?

A

The IRS is a compromise evaluation model that allows causal inference without the need for a control/comparison group. It collects data exclusively from the trained group using a pre-post design and measures both relevant and irrelevant but germane outcomes to assess the effectiveness of the training.

90
Q

What are relevant and irrelevant but germane outcomes in the IRS?

A

In the IRS, relevant outcomes are those for which training is provided, while irrelevant but germane outcomes are those related to the subject matter but not covered in the training. Causality is proved when there’s a statistically larger pre-post change for the relevant outcomes compared to the irrelevant but germane outcomes.
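The IRS contrast can be sketched as two pre-post gains, one for trained (relevant) items and one for untrained-but-germane items; in practice the difference between the gains would be tested statistically. A hypothetical illustration (item scores are invented):

```python
def irs_contrast(relevant_pre, relevant_post, germane_pre, germane_post):
    """Internal Referencing Strategy sketch: compare the mean pre-post gain
    on trained (relevant) items with the gain on untrained-but-related
    (germane) items. A clearly larger relevant gain supports a causal claim."""
    def gain(pre, post):
        return sum(b - a for a, b in zip(pre, post)) / len(pre)
    return gain(relevant_pre, relevant_post), gain(germane_pre, germane_post)

# Hypothetical item scores: large gain on trained content, little on germane.
rel_gain, ger_gain = irs_contrast([3, 4, 2, 3], [7, 8, 6, 7],
                                  [4, 3, 5, 4], [4, 4, 5, 5])
print(rel_gain, ger_gain)  # → 4.0 0.5
```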

91
Q

How does the IRS compare to experimental and quasi-experimental designs in terms of causal inference?

A

Research has shown that the IRS provides inferences equivalent to those drawn from the more complex pre-post experimental or quasi-experimental models, making it a useful and relatively simple tool to improve the common pre-post design in training evaluation.

92
Q

What are the trade-offs in choosing evaluation strategies and data collection approaches?

A

Trade-offs involve balancing the quality and scope of information gathered against costs and practicality.

93
Q

What are the two main objectives of training evaluation?

A

The two main objectives of training evaluation are to assess whether the trainee has attained a specified level on particular outcomes, and to assess changes in the trainee from before to after training.

94
Q

What determines whether a post-only or pre-post design is required?

A

The objective of the evaluation (attaining a specified level on outcomes versus assessing trainee changes) determines the design choice.

95
Q

What dictates the choice between a causal or non-causal data collection design?

A

The need to establish whether the training experience has caused the outcome determines the choice.

96
Q

When is Kirkpatrick’s model used in evaluation?

A

Kirkpatrick’s model is used for summative evaluations (assessing whether the program met its objectives).

97
Q

What models are more appropriate for formative evaluations?

A

COMA and LTSI models are more appropriate for formative evaluations (focusing on program improvement).

98
Q

What factors influence the type and source of data in evaluation?

A

Factors include the availability of performance indicators, whether they are objective or perceptual, and the method of data collection (individual or collective).

99
Q

What are the advantages and disadvantages of using questionnaires, interviews, and focus groups in data collection?

A

Questionnaires provide a standardized format but may lack depth; interviews offer richer information but are time-consuming; focus groups enable group interaction but may be influenced by group dynamics.

100
Q

What are the limitations of descriptive and causal data collection designs?

A

Descriptive designs are easier and more practical but cannot prove effectiveness, while causal designs offer stronger evidence but are more complex and costly.

101
Q

How do evaluation choices impact the legitimacy of conclusions reached?

A

The quality of information and the strength of conclusions are affected by the chosen evaluation model and data collection design, which may compromise the legitimacy of results if lower-quality designs are used.