2.8 - Evaluating Impact Flashcards
While selecting the measurement process, consider the:
- nature of the solution
- characteristics of the learners
- focus of the outcome
The Evaluation Process (steps)
- Use the assessment data to identify evaluation outcomes and goals.
- Develop an evaluation design and strategy.
- Select and construct measurement tools.
- Analyze data.
- Report data.
Purpose of Evaluating Talent Development Solutions
- Determine business impact, cost-benefit ratio, and ROI.
- Determine whether objectives were met and how well.
- Assess the effectiveness and appropriateness of content and instructional strategies.
- Reinforce learning by using a test or other performance assessment.
- Provide feedback to the facilitator.
- Provide feedback to participants about what they learned.
- Assess on-the-job environment to support learning retention.
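The cost-benefit ratio and ROI in the first bullet are simple arithmetic. A minimal sketch, using hypothetical program figures (the dollar amounts are illustrative, not from the source):

```python
def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Benefit-cost ratio: total program benefits divided by total program costs."""
    return benefits / costs

def roi_percent(benefits: float, costs: float) -> float:
    """ROI as a percentage: net benefits divided by costs, times 100."""
    return (benefits - costs) / costs * 100

# Hypothetical program: $240,000 in measured benefits, $150,000 in total costs.
print(benefit_cost_ratio(240_000, 150_000))  # 1.6
print(roi_percent(240_000, 150_000))         # 60.0
```

A ratio above 1 (or an ROI above 0%) indicates the program returned more than it cost.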
Benefits of evaluating talent development solutions
- Secures client support and confidence to build relationships.
- Measures whether program results are consistent with the opportunity analysis and needs assessment.
- Validates performance gaps and learner needs.
- Determines whether training is the solution to a performance gap.
- Helps management meet its organizational objectives.
Ralph Tyler’s Goal Attainment Method
Tyler’s design process incorporates evaluation based on objectives. Primarily used for curriculum design.
Tyler’s model poses four questions:
- What objectives should the learner achieve?
- What learning experiences will help learners achieve these objectives?
- How should the curriculum be organized?
- How should learner achievement be evaluated?
Formative Evaluation
Formative evaluation occurs throughout the design of any talent development solution.
- It aims at improving the draft learning program.
- It includes pilot tests, beta tests, technical reviews with subject matter experts (SMEs), production reviews, and stakeholder reviews.
Summative Evaluation
- Summative evaluation occurs after a talent development solution has been delivered.
- It focuses on the results or impact of the talent development solution and provides evidence about the value of a program.
- Includes standardized tests, participant reaction forms, stakeholder satisfaction surveys, and final return on investment (ROI).
Program Evaluation
- Program evaluation is the systematic assessment of program results and, if possible, the assessment of how the program caused them.
- Results may occur at several levels: reaction to the program, what was learned, what was transferred to the job, and the impact on the organization.
Learning Transfer Evaluation
Learning transfer evaluation measures the learner’s ability to use what they’ve learned on the job.
The Brinkerhoff Success Case Method (SCM)
- The SCM involves identifying the most and least successful cases in a program and examining them in detail.
- Key steps in this method are:
- focusing and planning a success case study
- creating an “impact model” that defines what success should look like
- designing and implementing a survey to search for best and worst cases
- interviewing and documenting success cases
- communicating findings, conclusions, and recommendations.
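The third SCM step — surveying participants to find the best and worst cases — amounts to ranking respondents and taking both extremes. A minimal sketch with made-up names and self-reported impact scores (all data here is hypothetical):

```python
# Hypothetical SCM screening survey: self-reported impact scores on a 1-10 scale.
scores = {"Ana": 9, "Ben": 3, "Chen": 7, "Dia": 10, "Eli": 2, "Fay": 6}

# Rank respondents from highest to lowest reported impact.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

k = 2  # number of cases to examine in depth at each extreme
success_cases = ranked[:k]       # candidates for success-case interviews
nonsuccess_cases = ranked[-k:]   # candidates for non-success interviews

print("interview as success cases:", [name for name, _ in success_cases])
print("interview as non-success cases:", [name for name, _ in nonsuccess_cases])
```

The survey only screens; the detailed interviews that follow are what document and verify each success case.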
Balanced Scorecard Approach
- The balanced scorecard approach is a way for organizations to evaluate effectiveness with more than financial measures.
- This model consists of measuring effectiveness from four perspectives:
- The customer perspective
- The innovation and learning perspective
- The internal business perspective
- The financial perspective
Steps to create data collection tools
To develop evaluation instruments, talent development professionals should determine:
- the purpose the tool will serve
- the format or media to be used to present and track results
- the ranking or rating scale to be used
- the demographics needed
- how to capture comments and suggestions
- the degree of flexibility the tool needs
- how the tool will be distributed
- the timeframe
- how the results will be tracked, monitored, and reported
- how the results will be communicated
- how to achieve a high response rate.
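Several checklist items above — the rating scale, tracking results, and response rate — can be handled with basic summary arithmetic once responses are in. A minimal sketch using a hypothetical 1-5 Likert reaction survey (item names and numbers are illustrative):

```python
# Hypothetical post-program reaction survey: 1-5 Likert ratings per item.
ratings = {
    "relevance": [5, 4, 4, 3, 5],
    "facilitator": [4, 4, 5, 5, 4],
}
invited = 10  # learners who were sent the survey

def mean(values):
    """Arithmetic mean of a list of ratings."""
    return sum(values) / len(values)

responses = len(ratings["relevance"])        # 5 completed surveys
response_rate = responses / invited * 100    # percentage of invitees who responded

for item, scores in ratings.items():
    print(f"{item}: mean {mean(scores):.2f} (n={len(scores)})")
print(f"response rate: {response_rate:.0f}%")
```

Reporting the response rate alongside the means signals how representative the results are.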
Construct validity
Construct validity evaluates whether a measurement tool really represents the thing we are interested in measuring. It’s central to establishing the overall validity of a method.
(A construct refers to a concept or characteristic that can’t be directly observed, but can be measured by observing other indicators that are associated with it.)
Content validity
Content validity assesses whether a test is representative of all aspects of the construct.
To produce valid results, the content of a test, survey or measurement method must cover all relevant parts of the subject it aims to measure. If some aspects are missing from the measurement (or if irrelevant aspects are included), the validity is threatened.
Criterion Validity
Criterion validity evaluates how well a test can predict a concrete outcome, or how well the results of your test approximate the results of another test.
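Criterion validity is commonly quantified as the correlation between test scores and the criterion measure (e.g., later job performance). A minimal sketch with made-up data, computing the Pearson correlation coefficient from its definition:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: end-of-course test scores vs. later job-performance ratings.
test_scores = [55, 60, 68, 74, 80, 91]
job_ratings = [2.1, 2.4, 3.0, 3.2, 3.8, 4.5]

r = pearson_r(test_scores, job_ratings)
print(f"criterion (predictive) validity, Pearson r: {r:.2f}")
```

A coefficient near 1 suggests the test strongly predicts the criterion; values near 0 suggest little predictive value.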