Lesson 11: Training Evaluation Flashcards
What is training evaluation?
Training evaluation is a process concerned with assessing the value of training programs to employees and organizations, using various techniques to gather objective and subjective information before, during, and/or after training.
What is the training evaluation continuum?
The training evaluation continuum ranges from simple evaluations focusing on trainee reactions to more elaborate procedures that assess learning, motivation, confidence, and the work environment’s support for new skills.
Why do organizations conduct training evaluations?
Training evaluations help fulfill managerial responsibility to improve training, identify useful training programs and trainees, determine cost benefits, ascertain program results, diagnose strengths and weaknesses, and justify the value and credibility of the training function.
As of the 2000s, about 50% of organizations conduct evaluations, with most focusing on easily measured reactions and learning. Organizations with stronger learning cultures conduct more evaluations and use more sophisticated techniques.
What is the paradox in training evaluations?
The paradox is that while improving individual and organizational performance is the central objective of training for organizations, these aspects are the least frequently evaluated.
What are the two categories of barriers to training evaluation?
The two categories of barriers to training evaluation are pragmatic and political barriers.
What are the main pragmatic barriers to training evaluation?
Pragmatic barriers include the perceived complexity of evaluation models and techniques, the time and effort required for data gathering and analysis, and the costs associated with evaluation.
How has modern information technology affected training evaluation?
Modern information technology, such as web-based questionnaires and computerized work-performance data, has made it easier and cheaper than ever to conduct high-level evaluations.
What are the main political barriers to training evaluation?
Political barriers include concerns about conflict of interest, the fear of revealing ineffective training programs or approaches, and the lack of accountability for training results among trainees, their managers, and training program administrators.
How can the issue of accountability affect training evaluation?
When trainees, their managers, and those who develop and administer training programs are held more accountable for results, training will more clearly serve organizational success.
However, the current lack of accountability may lead to good programs being dropped and poor ones perpetuated, which is a disservice to the training function and the organization.
What are the different types of training evaluations based on the data gathered and analyzed?
The different types of training evaluations based on the data gathered and analyzed are:
Trainee perceptions evaluation
Behavioral data evaluation
Evaluation of psychological states
Evaluation of work environment
What is the focus of most training evaluations?
The focus of most training evaluations is on trainee perceptions.
What is the purpose of more complete evaluations?
The purpose of more complete evaluations is to assess the extent of trainee learning and the post-training behaviors of trainees.
What are the psychological states that affect learning and behavior change?
The psychological states that affect learning and behavior change are:
Affective state
Cognitive state
Skills-based state
How is the work environment evaluated in training evaluations?
The work environment is evaluated in training evaluations by assessing the transfer climate and learning cultures. Understanding the organization’s culture, climate, and policies can strongly affect training choices and effectiveness.
What factors influence training success?
The factors that influence training success are:
Opportunities for on-the-job practice of new skills
Level of support provided by others to new learners
Alignment of training courses with the firm’s strategic vision
Links between participants' remuneration and improvements in their performance
What is the difference between formative and summative evaluations?
The difference between formative and summative evaluations is:
Formative evaluations are designed to assess the value of the training materials and processes with the goal of identifying improvements to the instructional experience.
Summative evaluations are designed to provide data about a training program’s worthiness or effectiveness.
Who are formative evaluations of special interest to?
Formative evaluations are of special interest to training designers and instructors.
Who are summative evaluations of greatest interest to?
Summative evaluations are of greatest interest to senior management.
What is the difference between descriptive and causal evaluations?
The difference between descriptive and causal evaluations is:
Descriptive evaluations provide information describing trainees once they have completed the program.
Causal evaluations are used to determine whether the training caused the post-training learning and/or behaviors.
What kind of data gathering and statistical procedures do causal evaluations require?
Causal evaluations require more complex data gathering and statistical procedures.
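As an illustration of the kind of statistical procedure a causal evaluation involves, the sketch below compares post-training test scores of trainees against an untrained control group using Welch's t-statistic. All scores here are hypothetical; a real study would also need random assignment, adequate sample sizes, and a proper significance test.

```python
# Minimal sketch of a causal (control-group) comparison.
# Scores are hypothetical illustration data, not from the text.
from math import sqrt
from statistics import mean, stdev

def welch_t(group_a, group_b):
    """Welch's t-statistic for two independent samples."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
    return (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)

trained = [78, 82, 85, 80, 88, 84]   # post-training test scores
control = [70, 75, 72, 74, 73, 71]   # untrained control group

print(round(mean(trained) - mean(control), 1))  # mean difference: 10.3
print(round(welch_t(trained, control), 2))      # t-statistic
```

A descriptive evaluation would stop at the trainees' mean score; the control-group comparison is what licenses the causal claim that training produced the difference.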
Are causal evaluations frequently used?
Causal evaluations are infrequently used.
What is the Kirkpatrick model of training evaluation?
The Kirkpatrick model of training evaluation is a hierarchical model that identifies four levels to assess training: reactions, learning, behavior, and results.
What does the Kirkpatrick model suggest about the relationship between the four levels?
The Kirkpatrick model suggests that the four levels are causally linked: success at one level causes success at the next. For example, positive reactions foster learning, learning enables behavior change on the job, and changed behavior produces organizational results.
What is the fifth level added to the Kirkpatrick model in a more recent articulation?
The fifth level added to the Kirkpatrick model in a more recent articulation is return on investment (ROI).
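ROI at this fifth level is commonly expressed as net program benefits over program costs, as a percentage. A minimal sketch (the dollar figures are hypothetical):

```python
# Hedged sketch of the Level 5 (ROI) calculation:
# ROI % = ((benefits - costs) / costs) * 100
def training_roi(benefits: float, costs: float) -> float:
    """Return training ROI as a percentage of program costs."""
    if costs <= 0:
        raise ValueError("costs must be positive")
    return (benefits - costs) / costs * 100

# Hypothetical example: a program costing $100,000 that yields
# $150,000 in measured benefits returns 50% on the investment.
print(training_roi(150_000, 100_000))  # 50.0
```

The hard part in practice is not the arithmetic but converting training outcomes into credible dollar benefits.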
What is the purpose of measuring trainee reactions in the Kirkpatrick model?
The purpose of measuring trainee reactions in the Kirkpatrick model is to assess the value of the training materials and processes with the key goal of identifying improvements to the instructional experience.
What is the limitation of the Kirkpatrick model for formative evaluations?
The limitation of the Kirkpatrick model for formative evaluations is that the relationship between reactions, learning, and behavior is very small, so improving Level 1 (reactions) or Level 2 (learning) is unlikely to improve the impact of training at the behavior (transfer) level.
What are some alternative evaluation models to the Kirkpatrick model?
Some alternative evaluation models to the Kirkpatrick model are the COMA model, the Decision-Based Evaluation model, and the Learning Transfer System Inventory.
What is the COMA model of training evaluation?
The COMA model is a formative evaluation model that enhances the usefulness of training evaluation questionnaires by identifying and measuring variables that research has shown to be important for the transfer of training. These variables fall into four categories: cognitive, organizational environment, motivational, and attitudinal variables.
What are the four categories of variables in the COMA model?
The four categories of variables in the COMA model are cognitive variables, organizational environment variables, motivational variables, and attitudinal variables.
What are cognitive variables in the COMA model?
Cognitive variables in the COMA model refer to the level of learning that the trainee has gained from a training program. Both declarative and procedural learning might be measured, but procedural learning is more important because it is more strongly related to transfer.
What are organizational environment variables in the COMA model?
Organizational environment variables in the COMA model refer to a cluster of variables that are generated by the work environment and that impact transfer of training. These include the learning culture, the opportunity to practice, the degree of support that is expected, and the level of support actually provided to trainees once they return to the job.
What are motivational variables in the COMA model?
Motivational variables in the COMA model refer to the desire to learn and to apply the learned skill on the job. COMA suggests assessing both training motivation (measured at the onset of the program) and motivation to transfer (measured immediately after it).
What is the purpose of using the COMA model for training evaluation?
The purpose of using the COMA model for training evaluation is to assess the degree to which trainees:
Have mastered the skills (cognitive)
Perceive that the organizational environment will support and help them apply the skills (organizational environment)
Are motivated to learn and to apply the skills on the job (motivational)
Have developed attitudes and beliefs that allow them to feel capable of applying their newly acquired skills on the job (attitudinal)
What are some limitations of the COMA model?
Some limitations of the COMA model are that it is relatively new, it is focused exclusively on an analysis of the factors that affect transfer, it is not well-suited for summative evaluation purposes, and different questionnaires must be constructed for different training programs.
What is the Decision-Based Evaluation model?
The Decision-Based Evaluation (DBE) model is a training evaluation model developed by Kurt Kraiger that requires evaluators to select their evaluation techniques and variables based on the decisions needed.
It specifies three potential “targets” for the evaluation: trainee change, organizational payoff, and program improvement.
The model also suggests identifying the focus of the evaluation, which can include different variables depending on the target.
Finally, the appropriate data collection method is suggested based on the focus of the evaluation.
How does DBE differ from Kirkpatrick’s and COMA models?
Unlike Kirkpatrick’s model and COMA, DBE allows for different variables to be measured depending on the goals of the evaluation. DBE is also more flexible and can be used for both formative and summative evaluations. DBE is the only training evaluation model that specifies key questions to guide evaluations, such as “What do we choose to evaluate?” and “How can we do so?”
What are the potential “targets” for the evaluation in the DBE model?
The DBE model specifies three potential “targets” for the evaluation: trainee change, organizational payoff, and program improvement.
What is the focus of the evaluation in the DBE model?
The focus of the evaluation in the DBE model can include different variables depending on the target of the evaluation. For example, the focus may be on assessing the level of trainee changes with respect to learning behaviors or psychological states such as motivation and self-efficacy.
What is the Learning Transfer System Inventory (LTSI)?
The Learning Transfer System Inventory (LTSI) is a more generic approach to training evaluation proposed by Elwood Holton and colleagues. It aims to alleviate the constraint on training evaluation in organizations where specialized resources are not always available.
The LTSI is a questionnaire that assesses 16 variables important for the transfer of training, including all of the COMA dimensions plus additional ones such as learner readiness, resistance/openness to change, and opportunity to use learning.
How many questions are included in the LTSI questionnaire?
The LTSI questionnaire contains 89 questions.
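As an illustration of how a multi-factor questionnaire like the LTSI might be scored, the sketch below averages a respondent's Likert ratings within each factor. The factor names and item-to-factor mapping here are hypothetical stand-ins, not the actual LTSI factors or its 89 items.

```python
# Hedged sketch: per-factor averaging of 1-5 Likert responses.
# Factor names and item groupings are hypothetical examples.
from statistics import mean

ITEM_FACTORS = {
    "q1": "learner_readiness", "q2": "learner_readiness",
    "q3": "opportunity_to_use", "q4": "opportunity_to_use",
    "q5": "supervisor_support", "q6": "supervisor_support",
}

def factor_scores(responses):
    """Average one respondent's Likert ratings within each factor."""
    by_factor = {}
    for item, rating in responses.items():
        by_factor.setdefault(ITEM_FACTORS[item], []).append(rating)
    return {factor: mean(vals) for factor, vals in by_factor.items()}

scores = factor_scores({"q1": 4, "q2": 5, "q3": 2, "q4": 3, "q5": 4, "q6": 4})
print(scores)  # per-factor averages for this respondent
```

Low per-factor averages (here, the low opportunity-to-use score) point to specific weaknesses in the transfer system rather than in the training content itself.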