THE GOAL-BASED TYPE OF PROGRAM EVALUATION
Nesterenko A.
In the sphere of program evaluation there are several methods: needs assessment, accreditation, cost-benefit analysis, effectiveness, efficiency, and others. One of these types is goal-based evaluation. The aim of goal-based evaluation is to investigate whether the project has achieved its goals. This question is posed at the end of the project process, often within the context of a summative evaluation.
There are as many goals as there are projects, so let me provide some general guidelines (a brief sketch follows the list):
1. Formulate clear goals for your project
Only if goals are defined clearly and without ambiguity can their achievement be verified.
2. Link to program goals
If your project is promoted within a higher-level program, consider the program goal to which your project contributes. The goals of your project should contribute as "interim goals" to achieving the "overall goals" defined at the program level.
3. How do you want to achieve your goal?
Consider which measures of your project contribute to achieving which goals.
4. How will you recognize whether you
have achieved the goal?
Consider which observable indicators would show that the goal has been achieved.
5. How will you provide the necessary
data?
Consider which method is suitable for
providing the necessary information.
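To make the five guidelines above more concrete, here is a minimal sketch in Python. The class name, example goal, indicators and methods are all hypothetical and only illustrate how a project's goals, indicators, and data-collection methods might be recorded so that achievement can later be verified:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One clearly formulated project goal (guideline 1)."""
    statement: str                 # unambiguous wording of the goal
    program_goal: str              # higher-level program goal it serves (guideline 2)
    measures: list = field(default_factory=list)      # how the goal will be achieved (guideline 3)
    indicators: list = field(default_factory=list)    # observable signs of achievement (guideline 4)
    data_methods: list = field(default_factory=list)  # how the evidence will be collected (guideline 5)

# Hypothetical entry for a language-training project.
course_goal = Goal(
    statement="80% of participants pass the final course test",
    program_goal="Raise staff qualification across the organization",
    measures=["weekly seminars", "self-study assignments"],
    indicators=["test pass rate", "assignment completion rate"],
    data_methods=["course records", "end-of-course questionnaire"],
)

print(course_goal.statement, "->", ", ".join(course_goal.indicators))
```

A record like this also makes the later evaluation easier, because every goal already names the evidence that will be collected for it.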
As for instrument selection, in most cases the instruments (e.g., questionnaires, checklists) should be adapted to your project: by adding the name of your course to a questionnaire, by removing questions that are not relevant to your context, or by merging different questionnaires into one.
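As a small illustration of such adaptation, the sketch below (Python, with hypothetical question texts and a hypothetical course name) inserts the course name into a questionnaire template, drops questions that do not fit the context, and merges two questionnaires into one:

```python
# Hypothetical questionnaire templates; "{course}" is a placeholder for the course name.
questionnaire_a = [
    "How satisfied were you with {course} overall?",
    "Were the {course} materials easy to follow?",
    "How did you travel to the training venue?",   # irrelevant for an online course
]
questionnaire_b = [
    "Would you recommend {course} to a colleague?",
]

def adapt(questions, course, drop_keywords=()):
    """Insert the course name and remove questions containing irrelevant keywords."""
    kept = [q for q in questions if not any(k in q.lower() for k in drop_keywords)]
    return [q.format(course=course) for q in kept]

# Merge the two adapted questionnaires into a single instrument.
merged = (adapt(questionnaire_a, "Business English B1", drop_keywords=("travel",))
          + adapt(questionnaire_b, "Business English B1"))

for i, q in enumerate(merged, 1):
    print(f"{i}. {q}")
```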
The “Goal-Based Evaluation” (GBE) is often contrasted with the “Goal-Free Evaluation” (GFE), as described in Michael Scriven's book “Evaluation Thesaurus”. Goal-based evaluation is just what it says: the evaluation seeks to determine whether the stated goals (and objectives) of the program or project have been achieved. This is the typical evaluation with which most of us are familiar. We have a list of goals and objectives, and we design an evaluation to see how well we did with each. I hate to think of all the times I just rewrote the objectives as questions for a survey!
Scriven notes that the GBE approach can be flawed by false assumptions underlying the goals, by changes in goals over time, and by inconsistencies among them. An example of a false assumption comes from a solar energy workshop evaluation which showed that less than a third of the people who attended incorporated solar energy into their homes. The evaluation had failed to ask whether, based on the workshop information, participants had a
home that could be retrofitted. Thus, the program seemed to be more of a failure than it was. Side effects and other consequences are seldom addressed.
Goal-free evaluation, according to Scriven, has the ‘purpose of finding out what the program is actually doing without being cued to what it is trying to do. That is, the evaluator doesn’t know the purpose of the program.’ If the program is doing what it is supposed to be doing, according to Scriven, ‘then these achievements should show up (in observation of process and interviews with consumers not staff).’
Scriven says ‘that evaluators who do
not know what the program is supposed to be
doing look more thoroughly for what it is do-
ing.’ Of course, this makes it a challenge for
program staff to conduct the evaluation in a
goal free manner.
However, I think that you, as program folks, can do GFE. It is a matter of how you ask the questions. For example, you can ask, “Since (date), what changes, if any, have you made to your farming practices?” Then, follow up with, “What prompted you to make that change(s)?” Hopefully, the response will be your program. And if the reply is “my neighbor, or a magazine, or another program”, well, you will have learned something useful. Contrast those questions with: “As a result of participating in ‘my program’, did you make the following changes: xxxxx, yyy, zzzz…?”
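To illustrate the difference in wording, here is a small sketch with hypothetical questions and responses. It contrasts the goal-free question pair with the usual goal-based formulation and shows how the open follow-up lets attribution come from the respondent rather than from the evaluator:

```python
# Goal-free phrasing: ask about change first, then about its cause, without naming the program.
goal_free_questions = [
    "Since (date), what changes, if any, have you made to your farming practices?",
    "What prompted you to make that change?",
]

# Goal-based phrasing: the program and the expected changes are named up front.
goal_based_question = ("As a result of participating in 'my program', "
                       "did you make the following changes: xxxxx, yyy, zzzz?")

def credited_to_program(follow_up_answer, program_name="my program"):
    """Crude illustration: the program gets credit only if the respondent names it unprompted."""
    return program_name.lower() in follow_up_answer.lower()

# Hypothetical follow-up responses to the goal-free questions.
print(credited_to_program("I changed after the field day run by my program"))  # True
print(credited_to_program("My neighbor showed me how it worked"))              # False
```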
While GBE will continue as the main
direction in most evaluations, see if you can
find ways to ask goal-free questions over the
course of your project or program.
An agency received funding to conduct
family-day care training for mothers receiving
public assistance and living in public housing.
No one had checked to see if family day care
as a business was allowed in public housing.
The evaluation showed that none of the par-
ticipants who completed the extensive training
started a family day care business (the stated
goal). Because the evaluation also asked what
happened as a result of the training, it was dis-
covered that two-thirds of the participants had
found work in child care settings, and all said
that their parenting skills were improving, nei-
ther of which were stated goals!
Turning to the preparation of your training plan: the purpose of the design phase is to identify the learning objectives that
together will achieve the overall goals identi-
fied during the needs assessment phase of sys-
tematic training design. You will also identify
the learning activities (or methods) you will
need to conduct to achieve your learning ob-
jectives and overall training goals.
Also, note that there is a document,
Complete Guidelines to design your training
plan, that condenses the guidelines from the
various topics about training plans to guide
you to develop a training plan. That document
also provides a Framework to design your
training plan that you can use to document the
various aspects of your plan.
Designing your learning objectives. Learning objectives specify the new knowledge, skills and abilities that a learner should gain from undertaking a learning experience, such as a course, webinar, self-study or group activity. Achievement of all of the learning objectives should result in accomplishing all of the overall training goals of the training and development experience(s).
The following depicts how learning objectives are associated with the training goals (identified during the needs assessment phase), learning methods/activities, and evidence of learning and evaluation activities; a brief sketch linking these elements follows the outline.
Training goal: the overall results or capabilities you hope to attain by implementing your training plan, e.g., pass the supervisor qualification test.
Learning objectives, e.g.:
1) exhibit required skills in problem solving and decision making;
2) exhibit required skills in delegation.
Learning methods/activities: what you will do in order to achieve the learning objectives, e.g.:
1. Complete a course in basic supervision.
2. Address a major problem that includes making major decisions.
3. Delegate to a certain employee for one month.
4. Etc.
Documentation/evidence of learning: evidence produced during your learning activities -- these are results that someone can see, hear, feel, read, smell, e.g.:
1. Course grade.
2. Your written evaluation of your problem solving and decision making approaches.
3. Etc.
Evaluation: assessment and judgment of the quality of the evidence in order to conclude whether you achieved the learning objectives or not.
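One way to keep these elements linked during design is to record them together. Below is a minimal sketch in Python, with hypothetical values adapted from the supervisor example above, connecting a training goal to its learning objectives, learning activities, evidence of learning, and the evaluator's judgment:

```python
# A single training-plan entry following the structure above (hypothetical values).
training_plan_entry = {
    "training_goal": "Pass the supervisor qualification test",
    "learning_objectives": [
        "Exhibit required skills in problem solving and decision making",
        "Exhibit required skills in delegation",
    ],
    "learning_activities": [
        "Complete a course in basic supervision",
        "Address a major problem that includes making major decisions",
        "Delegate to a certain employee for one month",
    ],
    "evidence": [
        "Course grade",
        "Written evaluation of problem solving and decision making approaches",
    ],
    # Evaluation: judgment on whether the evidence shows the objectives were achieved.
    "objectives_achieved": None,  # to be filled in by the evaluator
}

# A simple completeness check before the evaluation phase.
missing = [key for key, value in training_plan_entry.items() if value in (None, [], "")]
print("Items still to be completed:", missing or "none")
```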
To help learners understand how to design learning objectives, examples can be offered to convey the nature of learning objectives. Such examples are not meant to be adopted word-for-word as learning objectives: trainers and/or learners should design their own learning objectives to meet their overall training goals and to match their preferred strategies for learning. In such examples, the topic of the learning objective is typically shown in bold italics, with the learning objectives numbered directly below it.
Goals-based evaluation is a method
used to determine the actual outcome of a pro-
ject when compared to the goals of the origi-
nal plan. Performing a goals-based evaluation
helps a company to further develop successful
processes and either discard or reconfigure
unsuccessful ones. Certain observations made during a goals-based evaluation can be used to gauge a project and help improve the efficiency of a small business.
An understanding of how the goals were
established for a particular project is an impor-
tant part of a goals-based evaluation. Project
goals need to be grounded in research and use
historical data to be effective as performance
measuring tools. For example, a sales goal for
an upcoming marketing project may have used
two years of historical data. The goals-based
evaluation of the project may determine that
four years of historical data is a better way to
create sales projections.
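As a toy illustration of that point, the sketch below uses hypothetical sales figures. It builds a next-year projection from two years of history and from four years, and compares each against the actual outcome; in this constructed example the longer history gives the closer projection:

```python
# Hypothetical yearly sales figures, oldest first, and the actual result being evaluated.
history = [100, 120, 90, 110, 105, 115]
actual_result = 104

def projection(history, years):
    """Project next-year sales as the average of the last `years` of history."""
    recent = history[-years:]
    return sum(recent) / len(recent)

for years in (2, 4):
    proj = projection(history, years)
    error = abs(proj - actual_result)
    print(f"{years}-year projection: {proj:.1f} (error {error:.1f})")

# With these made-up numbers the 4-year projection lands closer to the actual result,
# the kind of finding a goals-based evaluation might feed into future planning.
```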
Part of planning a project is establishing
a timeline for achieving goals. The timeline
includes milestones that are used as points
where the actual data is compared to projec-
tions. One of the observations made by a
goals-based evaluation is whether the timeline
was appropriate for the project and if the mile-
stones were placed effectively. For example,
data may indicate that a marketing promotion
that ran for six months saw a decline in reve-
nue after only three months. This data is used
to determine the schedule for future projects.
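A similar check can be made along the project timeline. The sketch below uses hypothetical monthly revenue for a six-month promotion and flags the first milestone at which actual figures fall below projections, which is the kind of observation a goals-based evaluation would use when scheduling future projects:

```python
# Hypothetical monthly revenue: projected vs. actual for a six-month promotion.
projected = [50, 52, 54, 56, 58, 60]
actual    = [51, 55, 53, 48, 44, 40]

# Report the first milestone (month) where actual revenue falls short of the projection.
for month, (plan, fact) in enumerate(zip(projected, actual), start=1):
    if fact < plan:
        print(f"Revenue fell below projection in month {month}: {fact} vs {plan}")
        break
```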
Projects are developed based on the list
of priorities that will help to achieve the final
goal. A goals-based evaluation will indicate if
those priorities were correct, or if any changes
need to be made for future projects. For exam-
ple, a marketing plan indicates that advertising
should be designed before contacting media
outlets for pricing. But the goals-based evalua-
tion of the project indicates that advertisers
can give a variety of prices that can save the
company money. The advertising should,
therefore, be developed after discussing pric-
ing options with advertisers.
To make a long story short, let me say that programs are often established to meet one or more specific goals. These goals are often described in the original program plans.
Goal-based evaluations assess the extent to which programs are meeting predetermined goals or objectives. Questions to ask yourself when designing an evaluation to see whether you reached your goals include the following (a brief status-tracking sketch follows the list):
1. How were the program goals (and objectives, if applicable) established? Was the
process effective?
2. What is the status of the program's
progress toward achieving the goals?
3. Will the goals be achieved according
to the timelines specified in the program im-
plementation or operations plan? If not, then
why?
4. Do personnel have adequate re-
sources (money, equipment, facilities, train-
ing, etc.) to achieve the goals?
5. How should priorities be changed to
put more focus on achieving the goals? (De-
pending on the context, this question might be
viewed as a program management decision,
more than an evaluation question.)
6. How should timelines be changed (be
careful about making these changes - know
why efforts are behind schedule before time-
lines are changed)?
7. How should goals be changed (be
careful about making these changes - know
why efforts are not achieving the goals before
changing the goals)? Should any goals be
added or removed? Why?
8. How should goals be established in
the future?
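Questions 2 and 3 above amount to tracking each goal's progress against its timeline. Here is a minimal sketch with hypothetical goals, dates and progress figures that flags goals that are behind schedule at the evaluation date:

```python
from datetime import date

# Hypothetical goal records: stated goal, planned completion date, and current progress (0-1).
goals = [
    {"goal": "Train 40 staff members",         "due": date(2015, 6, 30), "progress": 0.9},
    {"goal": "Publish the revised curriculum", "due": date(2015, 3, 31), "progress": 0.4},
]

today = date(2015, 5, 1)  # hypothetical evaluation date

for g in goals:
    overdue = today > g["due"] and g["progress"] < 1.0
    status = "BEHIND SCHEDULE" if overdue else "on track"
    print(f'{g["goal"]}: {g["progress"]:.0%} complete, due {g["due"]}, {status}')
```

Knowing which goals are behind schedule is only the starting point; as the questions above stress, you should understand why efforts are behind before changing timelines or goals.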
The overall goal in selecting evaluation
method(s) is to get the most useful information
to key decision makers in the most cost-
effective and realistic fashion. Consider the
following questions:
1. What information is needed to make
current decisions about a product or program?
2. Of this information, how much can
be collected and analyzed in a low-cost and
practical manner, e.g., using questionnaires,
surveys and checklists?
3. How accurate will the information be (consider the known disadvantages of each method)?
4. Will the methods get all of the
needed information?
5. What additional methods should and
could be used if additional information is
needed?
6. Will the information appear as credi-
ble to decision makers, e.g., to funders or top
management?
7. Will the nature of the audience con-
form to the methods, e.g., will they fill out
questionnaires carefully, engage in interviews
or focus groups, let you examine their documentation, etc.?
8. Who can administer the methods now
or is training required?
9. How can the information be ana-
lyzed?
Note that, ideally, the evaluator uses a
combination of methods, for example, a ques-
tionnaire to quickly collect a great deal of in-
formation from a lot of people, and then inter-
views to get more in-depth information from
certain respondents to the questionnaires. Per-
haps case studies could then be used for more
in-depth analysis of unique and notable cases,
e.g., those who benefited or not from the pro-
gram, those who quit the program.
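A rough sketch of how such a mixed-method sequence might be organized, using hypothetical respondent data: questionnaire scores are collected from everyone, respondents with notably low or high outcomes are short-listed for interviews, and dropouts are set aside as candidate case studies:

```python
# Hypothetical questionnaire results: (respondent, benefit score 0-10, completed program?)
responses = [
    ("R01", 9, True), ("R02", 2, True), ("R03", 6, True),
    ("R04", 1, False), ("R05", 8, True), ("R06", 3, False),
]

# Interview the extremes to get more in-depth information.
interviewees = [name for name, score, done in responses if score <= 2 or score >= 8]

# Case studies of unique and notable cases, e.g., those who quit the program.
case_studies = [name for name, score, done in responses if not done]

print("Interview candidates:", interviewees)
print("Case-study candidates:", case_studies)
```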
There are four levels of evaluation in-
formation that can be gathered from clients,
including getting their:
1. Reactions and feelings (feelings are
often poor indicators that your service made
lasting impact);
2. Learning (enhanced attitudes, percep-
tions or knowledge);
3. Changes in skills (applied the learn-
ing to enhance behaviors);
4. Effectiveness (improved performance
because of enhanced behaviors).
Usually, the farther your evaluation information gets down the list, the more useful your evaluation is. Unfortunately, it is quite
difficult to reliably get information about ef-
fectiveness. Still, information about learning
and skills is quite useful.
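The four levels can be thought of as an ordered scale. The sketch below (hypothetical helper and labels) simply ranks a piece of evaluation evidence by the deepest level it reaches, reflecting the point that information farther down the list is usually more useful but harder to obtain reliably:

```python
# The four levels of client evaluation information, in order of increasing usefulness.
LEVELS = ["reactions", "learning", "skills", "effectiveness"]

def deepest_level(evidence_levels):
    """Return the most useful level for which evidence was gathered (hypothetical helper)."""
    gathered = [LEVELS.index(level) for level in evidence_levels if level in LEVELS]
    return LEVELS[max(gathered)] if gathered else None

# Hypothetical evaluation that collected feedback forms and a skills observation.
print(deepest_level(["reactions", "skills"]))   # -> "skills"
```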
Ideally, management decides what the
evaluation goals should be. Then an evaluation
expert helps the organization to determine
what the evaluation methods should be, and
how the resulting data will be analyzed and
reported back to the organization. Most or-
ganizations do not have the resources to carry
out the ideal evaluation.
Still, they can do the 20% of effort
needed to generate 80% of what they need to
know to make a decision about a program. If
they can afford any outside help at all, it
should be for identifying the appropriate
evaluation methods and how the data can be
collected. The organization might find a less
expensive resource to apply the methods, e.g.,
conduct interviews, send out and analyze re-
sults of questionnaires, etc.
If no outside help can be obtained, the
organization can still learn a great deal by ap-
plying the methods and analyzing results
themselves. However, there is a strong chance
that data about the strengths and weaknesses
of a program will not be interpreted fairly if
the data are analyzed by the people responsi-
ble for ensuring the program is a good one.
Program managers will be "policing" them-
selves. This caution is not to fault program
managers, but to recognize the strong biases
inherent in trying to objectively look at and
publicly (at least within the organization) re-
port about their programs. Therefore, if at all
possible, have someone other than the pro-
gram managers look at and determine evalua-
tion results.
Develop an evaluation plan to ensure
your program evaluations are carried out effi-
ciently in the future. Note that bankers or fun-
ders may want or benefit from a copy of this
plan. Ensure your evaluation plan is docu-
mented so you can regularly and efficiently
carry out your evaluation activities. Record
enough information in the plan so that some-
one outside of the organization can understand
what you're evaluating and how. Consider the following format for your report (a brief sketch that turns this outline into a fill-in skeleton follows it):
1. Title Page (name of the organization
that is being, or has a product/service/program
that is being, evaluated; date)
2. Table of Contents
3. Executive Summary (one-page, con-
cise overview of findings and recommenda-
tions)
4. Purpose of the Report (what type of
evaluation(s) was conducted, what decisions
are being aided by the findings of the evalua-
tion, who is making the decision, etc.)
5. Background About Organization and
Product/ Service/ Program that is being evalu-
ated
a) Organization Description/History
b) Product/Service/Program Description
(that is being evaluated)
I) Problem Statement (in the case of nonprofits, a description of the community need that is being met by the product/service/program)
II) Overall Goal(s) of Product/Service/Program
III) Outcomes (or client/customer im-
pacts) and Performance Measures (that can be
measured as indicators toward the outcomes)
IV) Activities/Technologies of the
Product/Service/Program (general description
of how the product/service/program is devel-
oped and delivered)
V) Staffing (description of the number
of personnel and roles in the organization that
are relevant to developing and delivering the
product/service/program)
6) Overall Evaluation Goals (e.g., what
questions are being answered by the evalua-
tion)
7) Methodology
a) Types of data/information that were
collected
b) How data/information were collected
(what instruments were used, etc.)
c) How data/information were analyzed
d) Limitations of the evaluation (e.g.,
cautions about findings/conclusions and how
to use the findings/conclusions, etc.)
8) Interpretations and Conclusions
(from analysis of the data/information)
9) Recommendations (regarding the de-
cisions that must be made about the prod-
uct/service/program)
Appendices: content of the appendices
depends on the goals of the evaluation report,
e.g.:
a) Instruments used to collect data/information
b) Data, e.g., in tabular format, etc.
c) Testimonials, comments made by us-
ers of the product/service/program
d) Case studies of users of the prod-
uct/service/program
e) Any related literature
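To make the report format easier to reuse, here is a small sketch (Python, with a hypothetical output file name) that turns the top-level sections of the outline above into a plain-text skeleton an evaluator can fill in:

```python
# Top-level sections of the evaluation report, following the format above.
SECTIONS = [
    "Title Page",
    "Table of Contents",
    "Executive Summary",
    "Purpose of the Report",
    "Background About Organization and Product/Service/Program",
    "Overall Evaluation Goals",
    "Methodology",
    "Interpretations and Conclusions",
    "Recommendations",
    "Appendices",
]

def write_skeleton(path="evaluation_report.txt"):
    """Write a numbered, fill-in-the-blanks skeleton of the evaluation report."""
    with open(path, "w", encoding="utf-8") as f:
        for number, title in enumerate(SECTIONS, 1):
            f.write(f"{number}. {title}\n\n(TODO: add content)\n\n")

write_skeleton()
```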
Here are some tips for avoiding common pitfalls when evaluating a program:
1. Don't balk at evaluation because it
seems far too "scientific." It's not. Usually the
first 20% of effort will generate the first 80%
of the plan, and this is far better than nothing.
2. There is no "perfect" evaluation de-
sign. Don't worry about the plan being perfect.
It's far more important to do something, than
to wait until every last detail has been tested.
3. Work hard to include some inter-
views in your evaluation methods. Question-
naires don't capture "the story," and the story
is usually the most powerful depiction of the
benefits of your services.
4. Don't interview just the successes.
You'll learn a great deal about the program by
understanding its failures, dropouts, etc.
5. Don't throw away evaluation results
once a report has been generated. Results don't
take up much room, and they can provide pre-
cious information later when trying to under-
stand changes in the program.
As a professional, you may choose any type of program evaluation, but the main point of any program is to achieve its goals. This article and its information about the goal-based type of program evaluation can therefore help practitioners improve the benefit of every program.