Chapter 8 Flashcards

1
Q

evaluating effectiveness of training and development

A

is a set of planned information-gathering and analytical activities undertaken by ETD practitioners to provide those responsible for the management of the strategic HRD effort with an assessment of the quality and impact of ETD interventions

Evaluations of the effectiveness of learning programmes are usually done to provide information and influence a decision that has to be made about the HRD strategy, practices and procedures

Evaluation provides diagnostic information that shows where remedial actions should be undertaken and whether the L&D intervention should be continued

2
Q

validity vs reliability

A

Validity is the extent to which the measuring instrument reflects the concept it is intended to measure.

Reliability is the extent to which scores obtained on a measure are reproducible in repeated administrations under similar measurement conditions.
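
A minimal sketch (not from the textbook): one common way to check reliability is a test-retest correlation, comparing scores from two administrations of the same instrument. The learner scores and variable names below are hypothetical.

  # Test-retest reliability: correlate two administrations of the same test
  from statistics import correlation  # Python 3.10+

  # Hypothetical scores for eight learners, two weeks apart
  first_attempt = [62, 75, 58, 81, 69, 90, 55, 73]
  second_attempt = [60, 78, 61, 79, 72, 88, 57, 70]

  # A Pearson r close to 1.0 suggests scores are reproducible (reliable)
  # under similar measurement conditions
  r = correlation(first_attempt, second_attempt)
  print(f"test-retest reliability (Pearson r): {r:.2f}")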

3
Q

different types of evaluation

A

Similar to assessment, evaluation can take place at different times:
1. diagnostic evaluation - before a training (learning) intervention
2. formative evaluation - during a training intervention
3. summative evaluation - at the conclusion of a learning programme
4. longitudinal evaluation - after a learning programme

4
Q

comparing assessment and evaluation

A

Assessment focuses on evaluating collected evidence of learners’ achievements against a set standard.

Evaluation, on the other hand, makes judgements about the quality and added value of learning programmes and whether changes and/or improvements in learners’ performance in the workplace occurred as a result of the learning programme

5
Q

advantages of L&D training evaluation

A
  • measures the difference between what was required and what has been achieved
  • justifies the HRD budget
  • improves design and delivery of learning programmes
  • improves transfer of learning
  • identifies unnecessary or ineffective programmes
  • improves the credibility of HRD
  • meets the needs of management and gains their support
  • shows the financial return on training (see the sketch below)
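
A minimal sketch (not from the textbook) of the return-on-investment arithmetic often used to express the financial return on training; all figures and variable names are hypothetical assumptions.

  # ROI of a learning programme: net benefits over costs, as a percentage
  programme_costs = 250_000    # design, delivery, materials, learner time
  monetary_benefits = 400_000  # estimated value of performance improvement

  net_benefits = monetary_benefits - programme_costs
  roi_percent = net_benefits / programme_costs * 100

  print(f"ROI: {roi_percent:.0f}%")  # -> ROI: 60%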

6
Q

timing of evaluation and its related purpose

A
  1. diagnostic evaluation
    - design of the program
    - existing skills level as part of needs analysis
  2. formative evaluation
    - quality of delivery process
    - adequacy of learning material
    - appropriateness of delivery methods
  3. summative evaluation
    - satisfaction of learners with program
    - achievement of outcomes
    - overall effectiveness
  4. longitudinal evaluation
    - transfer and application of learning in workplace
    - support for new knowledge, skills and attitudes in workplace

7
Q

two types of evaluation

A
  1. compliance
  2. value-added

8
Q

compliance evaluation

A
  • evaluation in terms of national standards is compulsory
  • organisation’s compliance with international and national quality standards for outcomes-based L&D practices

9
Q

steps of compliance evaluation

A
  1. L&D providers first conduct a self-evaluation of their L&D practices, procedures and processes against the standards of the QCTO or a professional body
  2. arrange a peer evaluation by external quality reviewers
  3. prepare for and undergo a formal quality audit by the QCTO or relevant standards body

10
Q

activities included in compliance evaluation step 3

A
  • clarifying and describing customer expectations and needs
  • ensuring that the required resources are available for design, delivery and management
  • ensuring that providers, practitioners, assessors and moderators have the necessary skills, knowledge and motivation
  • ensuring that quality assurance evaluation systems are in place to monitor the design, delivery, management and evaluation of qualifications
  • ensuring independent auditing and monitoring of quality and feedback systems by providers of learning programmes as well as other stakeholders

11
Q

value-added evaluation

A
  • concerned with the organisation’s bottom line
  • measures the cost-effectiveness of L&D interventions
  • informs decisions to continue, expand or eliminate a learning programme

12
Q

evaluation criteria used in value-added evaluation

A
  • learning programme design
  • training delivery
  • competence
  • transfer of learning
  • impact on the performance of the organisation

13
Q
learning programme design
A

design is evaluated in terms of content, training methods and the physical design of the curriculum and learning materials

this criterion does not indicate whether the training method is effective

if the content is valid and the programme is conducted by a qualified professional, the learning programme is deemed successful

14
Q

learning intervention delivery

A

evaluation of the quality of delivery

based on learner satisfaction and a sound learning facilitation process

evaluation also covers the administrative and support processes related to the programme

15
Q

competence

A

relates to the quality of the assessment process, methods and instruments

to what extent the learning outcomes were achieved

the main criterion is whether the learner demonstrated mastery of the learning outcomes

16
Q

transfer of learning

A

extent to which acquired competencies are transferred to the workplace is evaluated

does the workplace support the transfer of competencies acquired during the learning process?

17
Q

impact on the performance of the organisation

A

how the learning programme and the accompanying transfer of knowledge, skills, attitudes and behaviours affect the performance of the organisation

18
Q

measuring value added in L&D

A

Measurement focuses staff on important issues. A measurement system is a management decision-making tool that helps to prioritise tasks. Measurement also shows L&D professionals the return on investment in a learning programme.

Measurement clarifies expectations. Once the objectives for L&D interventions are set in terms of cost, time, quality, quantity and stakeholder satisfaction, HRD staff and L&D professionals understand what is expected of them.

Measurement involves, encourages and fosters creativity. Once a measurement system is in place, staff tend to compete to meet or exceed the objectives.

Measurement brings the HRD function closer to departments. The L&D measurement system should include factors that relate to quality, productivity, services and profitability within the organisation.

Measurement improves HRD management and control. If L&D professionals and managers measure the value added by L&D initiatives, they can manage it. If they can manage it, they can improve it.

19
Q

criteria for value-added
(three measures of training)

A
  • cost
  • change
  • impact
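
A minimal sketch (not from the textbook) of how the three measures could be quantified; the figures, units and variable names are hypothetical assumptions.

  # Three value-added measures of training: cost, change and impact
  total_cost = 120_000
  learners_trained = 40
  cost_per_learner = total_cost / learners_trained        # cost

  score_before, score_after = 58.0, 74.0                  # change
  percent_change = (score_after - score_before) / score_before * 100

  output_before, output_after = 1_000, 1_150              # impact
  impact_units = output_after - output_before

  print(f"cost per learner: {cost_per_learner:,.0f}")
  print(f"change in assessment scores: {percent_change:.1f}%")
  print(f"impact on output: +{impact_units} units")
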
20
Q

learning intervention evaluation

A

Process to determine whether the learning intervention has achieved its goals in the most effective & efficient manner possible

Information to influence decisions about learning interventions & performance solutions to improve the organisation as a whole

Continuous process

Systematic process of making judgements in terms of the quality of the programme with regard to (1) the effectiveness and (2) the efficiency of the T&D intervention

Measurement tools need to supply
1) Valid data (the accuracy of a measure)
2) Reliable data (the consistency of a measure)

21
Q

advantages of learning evaluation

A

ROI

Feedback system to improve the design

Improves transfer of learning

Improves the credibility of learning programmes

Measures difference between training need & what has been achieved

22
Q

predictive evaluation

A

an alternative and integrated approach to evaluation that provides management with data focused on predicting the success of a training/learning intervention in three areas

23
Q

three areas of predictive evaluation

A

(1) intention – examining whether the participants’ goals and beliefs are aligned with anticipated goals upon completion of the programme;

(2) adoption of behaviours – examining the degree to which the training has been implemented on the job and successfully incorporated into the work behaviours of the participants; and

(3) impact on business results – measuring to see if success has been achieved.

24
Q

steps followed in predictive evaluation

A
  1. Choose the course to be evaluated, either a new course design or an existing course.
  2. Review this comprehensively: understand what it is, what it is supposed to achieve, processes followed to deliver the content, the business issues it addresses, who attends, how this takes place, the nature of pre-course preparations, the type of post-course support mechanisms (if any), the course sponsors, and what they consider to be the purpose of the course.
  3. Constitute a committee to predict the value of the training, create an impact matrix (the key features the programme is expected to impact) and present these to the key decision-makers.
  4. Evaluate intention by monitoring the course during the pilot and following sessions, and make adjustments where necessary.
  5. Evaluate adoption, that is, the degree to which participants have adopted the changes in behaviours promulgated by the course.
  6. Evaluate impact by determining the degree to which the programme has impacted on the business results of the organisation.
25
Q

training evaluation process

A

a formal, professional evaluation of a learning programme is systematic

relies on the following skills:
- planning the evaluation
- conducting the evaluation
- communicating outcomes

26
Q

Kirkpatrick’s model

A

level 1: reaction

level 2: learning

level 3: transfer

level 4: results
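
A minimal sketch (not from the textbook) pairing each Kirkpatrick level with the kind of question it typically answers; the wording of the questions is an illustrative assumption.

  # Kirkpatrick's four levels mapped to the questions they typically answer
  kirkpatrick = {
      1: ("reaction", "Were learners satisfied with the programme?"),
      2: ("learning", "Did learners acquire the intended knowledge and skills?"),
      3: ("transfer", "Is the learning being applied on the job?"),
      4: ("results", "Did the programme improve organisational results?"),
  }

  for level, (name, question) in kirkpatrick.items():
      print(f"level {level} ({name}): {question}")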

27
Q

why organisations avoid doing evaluations

A
Barriers to evaluation and problems experienced with the evaluation process include:
  • Top management may not emphasise the importance of evaluation and may simply accept that training is valuable and effective.
  • Managers of HRD departments may not have the necessary skills to conduct evaluations; they avoid this aspect of the HRD process.
  • There may be uncertainty about what exactly should be evaluated. Owing to the wide range of models, each with a different emphasis, it is often difficult to decide on what to evaluate.
  • Evaluation may be viewed as risky and expensive. It is often felt that the costs outweigh the benefits. There may also be a fear that an evaluation may bring negative attention to the HRD department.
28
Q

number of issues that cause organisations to do no evaluations or superficial evaluations

A
  • Too many models and theories. There are many theories and models for evaluation. This is confusing, as the different models and theories focus on a wide variety of issues.
  • The complexity of models. Models and theories tend to be complex and contain many variables. This makes it difficult for the average L&D professional to use them.
  • A general lack of understanding about evaluation.
  • The lack of research skills. Effective evaluation involves a research process. This implies that L&D professionals should have research skills; this is not always the case.
  • There is difficulty in identifying the impact of training on specific variables. It is often difficult to identify specific variables or the specific impact that a training intervention has on an organisation.
  • Evaluation is considered to be a post-programme activity. Most evaluation focuses on the end results of training programmes, rather than on the process.
  • Managers do not see the long-term advantages. Evaluation is often aimed at individual programmes and interventions, rather than at the overall training and development functions.
  • There is little support from the main stakeholders. Managers and other stakeholders often see evaluation as a ‘nice-to-have’ rather than a ‘must-have’.
  • Evaluation is not focused on management needs. Evaluation data often focus on the learners’ needs rather than on those of management.
  • Evaluation data are not used appropriately. Evaluation results are not used at all, do not reach the appropriate stakeholders, are not used to bring about improvements, or are used for political or disciplinary purposes.
  • Inconsistent use. Evaluation will not be taken seriously if it is used in an inconsistent way across different learning programmes.
  • No clear standards. No consistent standards exist for evaluation in terms of the process, methods and techniques.
  • Lack of sustainability. Evaluations tend to be short-term processes aimed at specific goals, rather than at strategic long-term processes.