2.8 - Evaluating Impact Flashcards

1
Q

While selecting the measurement process, consider the:

A
  1. nature of the solution
  2. characteristics of the learners
  3. focus of the outcome.
2
Q

The Evaluation Process (steps)

A
  1. Use the assessment data to identify evaluation outcomes and goals.
  2. Develop an evaluation design and strategy.
  3. Select and construct measurement tools.
  4. Analyze data.
  5. Report data.
3
Q

Purpose of Evaluating Talent Development Solutions

A
  1. Determine business impact, cost-benefit ratio, and ROI.
  2. Determine whether objectives were met and how well.
  3. Assess the effectiveness and appropriateness of content and instructional strategies.
  4. Reinforce learning by using a test or other performance assessment.
  5. Provide feedback to the facilitator.
  6. Provide feedback to participants about what they learned.
  7. Assess on-the-job environment to support learning retention.
4
Q

Benefits of evaluating talent development solutions

A
  1. Secures client support and confidence, which builds relationships.
  2. Measures whether program results are consistent with the opportunity analysis and needs assessment.
  3. Validates performance gaps and learner needs.
  4. Determines whether training is the solution to a performance gap.
  5. Helps management meet its organizational objectives.
5
Q

Ralph Tyler’s Goal Attainment Method

A

Tyler’s design process incorporates evaluation based on objectives and is used primarily for curriculum design.

Tyler’s model poses four questions:

  1. What objectives should the learner achieve?
  2. What learning experiences will assist learners to achieve these objectives?
  3. How should the curriculum be organized?
  4. How should learner achievement be evaluated?
6
Q

Formative Evaluation

A

Formative evaluation occurs throughout the design of any talent development solution.

  • It aims to improve the draft learning program.
  • It includes pilot tests, beta tests, technical reviews with subject matter experts (SMEs), production reviews, and stakeholder reviews.
7
Q

Summative Evaluation

A

Summative evaluation occurs after a talent development solution has been delivered.

  • It focuses on the results or impact of the talent development solution and provides evidence about the value of a program.
  • It includes standardized tests, participant reaction forms, stakeholder satisfaction surveys, and final return on investment (ROI).
8
Q

Program Evaluation

A

− Program evaluation is the systematic assessment of program results and, if possible, the assessment of how the program caused them.
− Results may occur at several levels: reaction to the program, what was learned, what was transferred to the job, and the impact on the organization.

9
Q

Learning Transfer Evaluation

A

Learning transfer evaluation measures the learner’s ability to use what they’ve learned on the job.

10
Q

The Brinkerhoff Success Case Method (SCM)

A

− The SCM involves identifying the most and least successful cases in a program and examining them in detail.
− Key steps in this method are:
  1. focusing and planning a success case study
  2. creating an “impact model” that defines what success should look like
  3. designing and implementing a survey to search for best and worst cases
  4. interviewing and documenting success cases
  5. communicating findings, conclusions, and recommendations.

11
Q

Balanced Scorecard Approach

A

− The balanced scorecard approach is a way for organizations to evaluate effectiveness with more than financial measures.
− The model measures effectiveness from four perspectives:
  • The customer perspective
  • The innovation and learning perspective
  • The internal business perspective
  • The financial perspective

12
Q

Steps to create data collection tools

A

To develop evaluation instruments, talent development professionals should determine:

  • the purpose the tool will serve
  • the format or media to be used to present and track results
  • the ranking or rating scale to be used
  • the demographics needed
  • how to capture comments and suggestions
  • the degree of flexibility the tool needs
  • how the tool will be distributed
  • the timeframe
  • how the results will be tracked, monitored, and reported
  • how the results will be communicated
  • how to achieve a high rate of return (response rate).
13
Q

Construct validity

A

Construct validity evaluates whether a measurement tool really represents the thing we are interested in measuring. It’s central to establishing the overall validity of a method.

(A construct refers to a concept or characteristic that can’t be directly observed, but can be measured by observing other indicators that are associated with it.)

14
Q

Content validity

A

Content validity assesses whether a test is representative of all aspects of the construct.

To produce valid results, the content of a test, survey, or measurement method must cover all relevant parts of the subject it aims to measure. If some aspects are missing from the measurement (or if irrelevant aspects are included), validity is threatened.

15
Q

Criterion Validity

A

Criterion validity evaluates how well a test can predict a concrete outcome, or how well the results of your test approximate the results of another test.

16
Q

Concurrent Validity

A

Concurrent validity is the extent to which an instrument agrees with the results of other instruments administered at approximately the same time to measure the same characteristics.

17
Q

Predictive Validity

A

Predictive validity is the extent to which an instrument can predict future behaviors or results.

18
Q

Split-half reliability

A

Split-half reliability is a way to test reliability in which one test is split into two shorter ones.
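
A brief worked illustration (hypothetical figures, not from the source): the two halves are scored separately and correlated, and the Spearman-Brown correction is commonly applied to estimate reliability for the full-length test. If the half-test scores correlate at r = 0.70, the corrected estimate is (2 × 0.70) / (1 + 0.70) ≈ 0.82.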

19
Q

Test-retest check of reliability

A

Test–retest check of reliability is an approach in which the same test is administered twice to the same group of people. The scores are then compared.

(Timing is a critical issue in a test–retest check: if the period between tests is too short, a participant could simply remember the questions.)
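
A minimal sketch of the comparison step, assuming the two sets of scores are available as Python lists (all names and numbers below are illustrative, not from the source):

import numpy as np

# Hypothetical scores for the same participants on two administrations
# of the same test, a few weeks apart (illustrative numbers only).
first_administration = [72, 85, 90, 66, 78, 81, 95, 70, 88, 74]
second_administration = [75, 83, 92, 64, 80, 79, 93, 72, 90, 71]

# The test-retest reliability estimate is the correlation between the
# two score sets; values near 1.0 indicate stable measurement.
reliability = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability estimate: {reliability:.2f}")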

20
Q

Reliability

A

Reliability is the ability of a measurement instrument to produce consistent results over repeated administrations.

21
Q

Considerations when creating surveys, questionnaires, or interview evaluation instruments

A
  • Be certain that the questions are directly connected to the measurement plan
  • Determine whether any definitions or other standards exist that need to be clarified
  • Decide whether reading ability or a second language is a concern
  • Explore whether to use a pilot test on the instrument.
22
Q

Types of Data Collection Tools

A
− Surveys and questionnaires 
− Analytics from technology platforms 
− Examinations, assessments, and tests 
− Self-evaluations 
− Simulations and observations 
− Archival or extant data
23
Q

Steps to developing an evaluation strategy

A

− Know how to design research methods.
− Determine which results to measure and how to measure them.
− Identify the business drivers and performance needs.
− Choose the evaluation methods.

24
Q

Analysis Methods

A
− Return on investment (ROI) analysis 
− Cost-benefit analysis 
− Benefit-cost ratio 
− Utility analysis 
− Forecasting
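
A brief worked illustration of the first three methods, using hypothetical figures: suppose a program costs $50,000 and produces $150,000 in monetized benefits. The benefit-cost ratio is $150,000 / $50,000 = 3:1, and ROI = ($150,000 - $50,000) / $50,000 × 100 = 200%.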