Chapter 8 Flashcards
evaluating effectiveness of training and development
a set of planned information-gathering and analytical activities undertaken by ETD practitioners to provide those responsible for managing the strategic HRD effort with an assessment of the quality and impact of ETD interventions
Evaluations of the effectiveness of learning programmes are usually done to provide information and influence a decision that has to be made about the HRD strategy, practices and procedures. Evaluation provides diagnostic information that shows where remedial actions should be undertaken and whether the L&D intervention should be continued.
validity vs reliability
Validity is the extent to which the measuring instrument reflects the concept it is intended to measure.
Reliability is the extent to which scores obtained on a measure are reproducible in repeated
administrations under similar measurement conditions
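The reliability definition above is often quantified as test-retest reliability: the correlation between scores from two administrations of the same measure. A minimal sketch of that calculation, where the learner scores and the two-week retest design are hypothetical illustrations, not data from the text:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical scores of five learners on the same assessment,
# administered twice under similar conditions
first_run = [60, 72, 55, 81, 67]
second_run = [62, 70, 58, 79, 69]

reliability = pearson_r(first_run, second_run)
print(round(reliability, 2))  # a value close to 1.0 indicates reproducible scores
```

A high coefficient shows the scores are reproducible (reliable); it says nothing about validity, which asks whether the instrument measures the intended concept at all.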
different types of evaluation
Similar to assessment, evaluation can take place at different times:
1. diagnostic evaluation - before a training (learning) intervention
2. formative evaluation - during a training intervention
3. summative evaluation - at the conclusion of a learning programme
4. longitudinal evaluation - after a learning programme
comparing assessment and evaluation
Assessment focuses on evaluating collected
evidence of learners’ achievements against a set standard.
Evaluation, on the other hand, makes judgements about the quality and added value of learning programmes and whether changes and/or improvements in learners’ performance in the workplace occurred as a result of the learning programme
advantages of L&D training evaluation
- measures the difference between what was required and what has been achieved
- justifies HRD budget
- improves design and delivery of learning programmes
- improves transfer of learning
- identifies unnecessary or ineffective programs
- improves the credibility of HRD
- meets the needs of management and gains their support
- shows the financial return on training
timing of evaluation and its related purpose
- diagnostic evaluation
  - design of the programme
  - existing skills level as part of needs analysis
- formative evaluation
  - quality of delivery process
  - adequacy of learning material
  - appropriateness of delivery methods
- summative evaluation
  - satisfaction of learners with programme
  - achievement of outcomes
  - overall effectiveness
- longitudinal evaluation
  - transfer and application of learning in workplace
  - support for new knowledge, skills and attitudes in workplace
two types of evaluation
- compliance
- value-added
compliance evaluation
- in terms of national standards = compulsory
- organisation’s compliance with international and national quality standards for outcomes-based L&D practices
steps of compliance evaluation
- L&D providers first conduct a self-evaluation of their L&D practices, procedures and processes against the standards of the QCTO or relevant professional body
- second step = arrange a peer evaluation by external quality reviewers
- prepare for and undergo a formal quality audit by QCTO or relevant standards body
activities included in compliance evaluation step 3
- clarifying and describing customer expectations and needs
- ensure required resources are available for design, delivery and management
- ensure providers, practitioners, assessors and moderators have necessary skills, knowledge and motivation
- ensure quality assurance evaluation systems are in place to monitor design, delivery, management and evaluation of qualifications
- ensure that independent auditing and monitoring of quality and feedback systems is carried out by providers of learning and learning programmes, as well as other stakeholders
value added evaluation
- concerned with organisation’s bottom line
- measure cost-effectiveness of L&D interventions
- informs decisions to continue, expand or eliminate a learning programme
evaluation criteria used in value added evaluation
- learning programme design
- training delivery
- competence
- transfer of learning
- impact on the performance of the organisation
- learning programme design
design is evaluated in terms of content, training methods and the physical design of the curriculum and learning materials
does not indicate whether the training method is effective
assumes that if the content is valid and the training programme is conducted by a qualified professional, the learning programme will be successful
learning intervention delivery
evaluation of the quality of delivery is based on learner satisfaction and a sound learning facilitation process
evaluation also covers the administrative and support processes related to the programme
competence
relates to the quality of the assessment process, methods and instruments
the extent to which the learning outcomes are achieved
main criterion is whether the learner has demonstrated mastery of the learning outcomes
transfer of learning
extent to which acquired competencies are transferred to the workplace is evaluated
does the workplace support the transfer of competence acquired during the learning process?
impact on the performance of the organisation
how the learning programme and the accompanying transfer of knowledge, skills, attitudes and behaviours (K, S, A, B) affect the performance of the organisation
measuring value added in L&D
- Measurement focuses staff on important issues: a measurement system is a management decision-making tool that helps to prioritise tasks, and it shows L&D professionals the return on investment in a learning programme.
- Measurement clarifies expectations: once the objectives for L&D interventions are set in terms of cost, time, quality, quantity and stakeholder satisfaction, HRD staff and L&D professionals understand what is expected of them.
- Measurement involves, encourages and fosters creativity: once a measurement system is in place, staff tend to compete to meet or exceed the objectives.
- Measurement brings the HRD function closer to departments: the L&D measurement system should include factors that relate to quality, productivity, services and profitability within the organisation.
- Measurement improves HRD management and control: if L&D professionals and managers measure the value added by L&D initiatives, they can manage it; if they can manage it, they can improve it.
criteria for value-added
(three measures of training)
- cost
- change
- impact
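The cost criterion is commonly reported as a return-on-investment percentage: net programme benefits divided by programme costs. A minimal sketch of that standard ROI formula, using hypothetical figures rather than numbers from the text:

```python
def training_roi(benefits, costs):
    """Return training ROI as a percentage of programme costs:
    (benefits - costs) / costs * 100."""
    return (benefits - costs) / costs * 100

# Hypothetical figures for one learning programme
costs = 200_000     # design, delivery, materials, learner time
benefits = 260_000  # monetised performance gains attributed to the programme

print(training_roi(benefits, costs))  # → 30.0 (a 30% return on the L&D spend)
```

A positive percentage supports continuing or expanding the programme; a persistently negative one flags a candidate for redesign or elimination.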
learning intervention evaluation
Process to determine whether the learning intervention has achieved its goals in the most effective & efficient manner possible
Information to influence decisions about learning interventions & performance solutions to improve the organisation as a whole
Continuous process
Systematic process of making judgements in terms of the quality of the programme with respect to (1) effectiveness and (2) efficiency of the T&D intervention
Measurement tools need to supply
1) Valid data (the accuracy of a measure)
2) Reliable data (the consistency of a measure)
advantages of learning evaluation
ROI
Feedback system to improve the design
Improves transfer of learning
Improves the credibility of learning programmes
Measures difference between training need & what has been achieved
predictive evaluation
an alternative and integrated approach to evaluation that provides management with data focused on predicting the success of a training/learning intervention in three areas
three areas of predictive evaluation
(1) intention – examining whether the participants’
goals and beliefs are aligned with anticipated goals upon completion of the programme;
(2) adoption of behaviours – examining the degree to which the training has been implemented on the job and successfully incorporated into the work behaviours of the participants; and
(3) impact on business results – measuring to see if success has been achieved.
steps followed in predictive evaluation
- Choose the course to be evaluated, either a new course design or an existing course.
- Review this comprehensively: understand what it is, what it is supposed to achieve, processes followed to deliver the content, the business issues it addresses, who attends, how this takes place, nature of pre-course preparations, type of post-course support mechanisms (if any), the course sponsors, and what they consider to be the purpose of the course.
- Constitute a committee to predict the value of the training, create an impact matrix (the key features the programme is expected to impact) and present these to the key decision-makers.
- Evaluate intention by monitoring the course during the pilot and following sessions, and make adjustments where necessary.
- Evaluate adoption, that is, the degree to which participants have adopted the changes in behaviours promulgated by the course.
- Evaluate impact by determining the degree to which the programme has impacted on the business results of the organisation.