Lecture 13 Flashcards
Social intervention
An action taken – within a
particular social context – with the goal of producing
an intended outcome.
Evaluation research
Research undertaken for the
purpose of determining the impact of some social
intervention.
CALL mission
To develop effective leaders for the Canadian agri-food industry.
CALL objectives
- Develop a pool of excellent leaders to serve national and regional organizations of importance to Canadian agriculture;
- Create a network of leaders able to link diverse sectors and regions within the Canadian agri-food industry;
- Create a network of leaders who will provide vision, advocacy, and leadership for the Canadian agri-food industry; and
- Create a network of leaders who will, through their impact as educators, mentors and role models, improve production and farm business management practices across Canada.
Who took part?
16 men and 14 women.
From all ten Canadian provinces.
From just under thirty to just over fifty years of age.
Two-thirds were farmers.
One-third employed in non-farm agri-businesses and
non-governmental organizations.
Wealth of experience with agricultural leadership in
farms, agri-businesses, non-governmental
organizations and rural communities.
The Kirkpatrick framework
- Reaction
- Learning
- Behavior
- Results
Level 1: Reaction
Frequently referred to as "happy face" evaluation, this level measures participant reaction to, and satisfaction with, the program and the learning environment.
Level 2: Learning
Changes in knowledge, skills,
and/or attitudes constitute learning in the Kirkpatrick
model.
Level 3: Behavior
This level determines whether changes in behavior have occurred as a result of the program.
Level 4: Results
Level 4 looks at the final results that
occurred because the participants attended the
program. Results can be thought of as "the bottom line," or the impact of the program.
Seminar evaluation instruments
- Participants were asked to identify the greatest strength(s) of the seminar, taking into consideration its overall structure and content.
- Participants were asked to rate the accomplishment of each of the learning objectives of the seminar on a scale from one (poor) to ten (outstanding).
- Participants were asked to rate, on a scale from one (poor) to ten (outstanding), four aspects of the seminar:
  a. preparation via computer conferencing;
  b. overall content;
  c. organization and logistics;
  d. accommodation, meals and meeting spaces.
- Participants were asked to provide any comments they may have had regarding the seminar, with a view to improving future seminars.
Seminar evaluation summary
Primarily aimed at gauging the satisfaction of
participants with leadership development seminars.
Provided a self-assessment of learning achieved.
Participants were not formally examined.
A detailed questionnaire, with both open and closed questions, was used.
Standardized leadership development instrument
Developed by James Kouzes and Barry Posner (1997)
as a companion to their 1995 textbook, The
Leadership Challenge.
This text was required reading for CALL participants.
Leadership Practices Inventory: purposes
Pedagogy: a means of encouraging participants to reflect on their leadership practices.
Peer assessment: one way to move beyond purely
self-reported assessments in our research.
Pre-test / post-test structure: an indicator of
behavior change among participants over the course
of the program (Level 3).
Leadership Practices Inventory: method
The LPI-Self asks respondents to rate (on a scale from
one to ten) the extent to which they typically engage
in thirty different behaviors. The LPI-Observer asks
those who have an opportunity to observe the
individual being rated to rate that person on the
same scale for the same thirty behaviors.
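As a rough illustration (hypothetical behaviors and scores, not actual CALL or LPI data), the self/observer structure amounts to rating the same behaviors from two perspectives and comparing the averages:

```python
# Illustrative sketch (hypothetical data): comparing a participant's
# LPI-Self ratings with the mean of several LPI-Observer ratings.
from statistics import mean

# Ratings on a 1-10 scale for three of the thirty behaviors.
self_ratings = {"shares vision": 8, "seeks feedback": 6, "recognizes others": 9}
observer_ratings = {
    "shares vision": [7, 8, 9],
    "seeks feedback": [5, 6, 6],
    "recognizes others": [9, 8, 10],
}

for behavior, self_score in self_ratings.items():
    obs_mean = mean(observer_ratings[behavior])
    gap = self_score - obs_mean  # positive: self-rating exceeds observers'
    print(f"{behavior}: self={self_score}, observers={obs_mean:.1f}, gap={gap:+.1f}")
```

A small gap between self and observer means lends modest external support to a self-report; a large positive gap suggests self-ratings alone may overstate the behavior.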
Leadership Practices Inventory: results
Pedagogically useful.
Did not measure behavior change (ratings very high
at pre-test – little room for improvement).
Either no change took place, or the instrument was not sensitive enough to measure it.
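The ceiling problem noted above can be made concrete with hypothetical numbers (these are illustrative, not CALL results): when pre-test ratings are already near the top of the scale, only a small improvement can ever be recorded.

```python
# Illustrative sketch (hypothetical data): why near-ceiling pre-test
# ratings on a 1-10 scale leave little room to detect behavior change.
from statistics import mean

SCALE_MAX = 10

# Hypothetical LPI-style ratings for one participant on five behaviors.
pre = [9, 9, 10, 8, 9]    # pre-test ratings already near the scale maximum
post = [10, 9, 10, 9, 9]  # post-test ratings

headroom = SCALE_MAX - mean(pre)          # room left for improvement
observed_change = mean(post) - mean(pre)  # measured pre/post difference

print(f"Headroom at pre-test: {headroom:.1f} points")
print(f"Observed change:      {observed_change:.1f} points")
# With a mean pre-test rating of 9.0, at most 1.0 point of improvement
# can be recorded, regardless of how much behavior actually changed.
```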
Observers’ survey: outcomes
Evidence gathered from the observers was used to
provide some modest external validation of
participants’ self-reported behavior change and
impact.
There was an overwhelmingly positive response to the program from both participants and observers.
Basic results: reaction
We assembled a tremendous amount of data to
document that a great majority of participants were
satisfied with their experience of the program.
With very few exceptions, participants expressed
high levels of satisfaction with the overall program.
Specific activities within the program were not always highly rated, and this led to the ongoing adaptation of the program to better suit its participants.
Basic results: learning
A large majority of participants believe they
developed their knowledge, skills and networks
through participating in the program.
Most were able to provide specific examples of their
learning and its application to their leadership
practices.
Evaluation of learning was largely restricted to self-
reports (no pre-tests / post-tests).
Basic results: behavior
Many CALL participants claimed to have changed
their leadership practices as a result of taking part.
Most were able to provide examples of specific
changes they have made, and of concrete ways in
which they have applied their learning from CALL to
agricultural leadership work.
These claims were supported by the modest survey
of peers that was conducted.
The LPI did not measure behavior change.
Basic results: impact
Many participants claimed that the CALL program
had an impact on their leadership practices, and as a
result on businesses and organizations of importance
to Canadian agriculture.
Many were able to articulate specific examples.
Given the tremendous difficulty of creating control-group conditions, attributing behavior change and impact to the specific interventions of the program is very difficult: it is hard to say whether the program made participants better leaders or whether they simply matured.
Key outcomes
Formative evaluation influenced the design of
learning activities during the program.
Summative evaluation gathered evidence for the
effectiveness of the program.
CALL was funded by the CFBMC for a second cohort.
Evaluation research
Expensive and time-consuming.
The Kirkpatrick framework made sense to stakeholders.
The impossibility of isolating the impact of the intervention from maturation and everything else that happened over the 18 months of the program (the impracticality of control groups) meant that Levels 3 and 4 were difficult to evaluate rigorously.
The design was, at best, a quasi-experiment.