How can designing and carrying out an evaluation benefit my organization and me?
Many people are skeptical about investing time, money, and energy into designing and carrying out an evaluation of their program, and perhaps with good reason. After all, their purpose is to get things done, not to do studies. They may also have had some experience with evaluations in the past that were not very useful. Perhaps they carried out the evaluation only because someone (e.g., the funder) told them it had to be done.
Keep in mind that, while you may have had negative prior experiences with evaluation, an evaluation that is well done can be very helpful.
How an Effective Evaluation Can Help You and Your Organization
It can reaffirm what you and your organization are doing well and, in so doing, give you further confidence to move ahead.
It can help you to learn from past experience in such a way that you can make mid-course adjustments to further improve your program.
It can help you and your organization make some strategic decisions about the future of your program (e.g., Should it be continued or not? Should you try some new strategies?).
It can reveal benefits of your program that you may not have detected and which, in turn, may open up new opportunities or help you in designing future programs.
It can provide you and your organization with something to offer others: potential funders, to justify future funding; potential clients, to promote demand for your services; others working in similar areas, to share your experiences.
However, there are also pitfalls to a poorly designed and executed evaluation:
It can be a waste of time and money and just another bureaucratic hoop to jump through.
It can hurt the image of your program.
It can be personally disempowering and ultimately a threat to you and your staff.
When and under what circumstances should I carry out an evaluation of my program?
People do not often ask this very important question when they plan an evaluation. Too frequently an evaluation is carried out because someone says "it's time" (e.g., "We've been operating for two years and we need to do an evaluation"; "It's the end of our program and we should evaluate it"; "Our funder says we must evaluate our program").
In fact, an evaluation should be carried out only when the following conditions are met:
One or more key decisions must be made, and the evaluation will yield important input to help make those decisions.
You can do something with the results you get (i.e., you are in a position to make changes based on the recommendations coming out of the evaluation).
Principal audiences, stakeholders, and decision-makers support the evaluation and want to receive and use the results.
Under these circumstances by all means take the time and effort at the beginning to design a good evaluation that will meet your needs.
This is when you do not want to invest time and effort into doing an evaluation:
You and your staff have little interest in evaluating your program. You know it is going well and don't believe an evaluation can tell you anything you don't already know.
There is neither pressure nor a burning need to do an evaluation (e.g., to obtain further funding).
Even if you get some valuable evaluation results, the circumstances don't permit you to act on them (e.g., your board has already made up its mind to take a course of action and evaluation results will not influence their decision).
Principal audiences, stakeholders, and decision-makers are not interested in the idea of an evaluation or committed to using its results.
Under these circumstances, undertaking an evaluation runs the risk of being an expensive, time-consuming exercise in futility.
How do I get started?
If you have satisfactorily answered the first two questions (i.e., you have identified a real need for evaluation and you are certain that the results will be used for decision-making purposes in your organization), then you are ready to get started.
The next step is to invest the necessary time and effort to answer the following questions thoughtfully and thoroughly:
A. Who is the audience for the evaluation?
In other words, who will be interested in reading the evaluation when it is done and who will implement its recommendations? Depending on who you are, what your organization is doing, and what decisions need to be made, you can have one or multiple audiences, among them:
People designing and implementing programs;
The Executive Director and/or Board of your organization;
The agency funding your program;
Clients and other groups you are trying to reach.
Once you have identified these different audiences, it is critical to take the time at the very beginning of the process to get everyone involved to identify what they want from the results of the evaluation. This step is often difficult, as these are busy people involved in many activities.
For this first step it is incumbent upon whoever is responsible for your evaluation to spend time with your audience(s), either individually or in small groups, to help them think through (1) what questions they might have that the evaluation can help to answer and (2) how, once they obtain the answers to these questions, they will use this information in their decision making. If you fail to take this time at the beginning to ensure their involvement and commitment, you run the risk of producing an evaluation that your key audience(s) will ignore when they come to make the very decisions this evaluation was intended to facilitate.
B. When will your audience(s) need the information from the evaluation?
It is very important to determine if the evaluation is to be used to make a specific decision. For example, if your audience is going to make a major decision in four months, then they need the evaluation report in time to use it in making this decision. Designing and carrying out an evaluation that reaches decision-makers one month after their decision has been made is pointless. The decision will already be made and the evaluation (including all your time, effort, and the resources that went into carrying it out) will be "history."
C. In what format should the evaluation be delivered?
You want to present the evaluation results in a way that is useful to your audience(s). Too often evaluations are prepared with very nice binders and lots of impressive charts and graphs, but readers can't easily find the crucial information they are looking for. And even when they do, they often still cannot find the answers to their questions. Such evaluations often end up unread, gathering dust on a bookshelf or simply thrown away, along with all the time and money spent to create them.
To avoid this waste and frustration, make sure you package the results in a way that is easy to read, attractive, and responsive to the needs of the audience. Often an introductory three-to-five-page summary, primarily written with bullets, is enough. Sometimes charts and graphs are helpful.
If you want your audience to use the outcomes of the evaluation, you must take the time to determine how best to deliver these outcomes to them in a form they will read and use. Use the personal interactions described in point A above to get ideas from your audience about a "user-friendly" report.
D. How much money is available to carry out the evaluation?
Before going any further in planning your evaluation, be realistic and determine what your budget is for the evaluation. Sometimes, especially if you have a grant from an outside funder that requires one or more evaluations as part of the grant, you have some funds set aside for the evaluation. If not, then you either need to design an evaluation that can be done in-house by staff with possible volunteer support, or you need to look for funding for the evaluation.
The important thing is to be realistic: don't design an ideal evaluation that you can't afford. If you do, you are stopped before you start. If you don't have the necessary money but you REALLY need that evaluation, then strategize with your key audiences (especially if they are the funders and/or your board) on ways to get the funding you need. If possible, ask them to help you.
E. Who will carry out the evaluation?
You have several choices.
You could do the evaluation in-house. If what you need is relatively straightforward (e.g., a review and update of your materials and/or your training course) and you have someone on your staff with the capabilities, you may be able to do this internally. You might also consult some evaluation resources and/or talk to people outside your organization who have extensive experience in evaluation.
You could contract it out to an evaluation specialist. If what you need goes beyond your in-house expertise, then you will probably need to hire an outside evaluation expert who can do the evaluation for you.
You could make the evaluation a joint effort, bringing in an outside specialist to help you and your staff design and carry out the evaluation. See Question 6 on page 144 for some guidelines in choosing an external evaluator.
For some examples of different scenarios in terms of audiences, timing for the evaluation report, substance of report, funding, and who does the evaluation, see Part VI, "Three Evaluation Scenarios," p. 150.
F. How elaborate must the evaluation be?
Some people think that, in order to be credible, evaluations must be "scientifically" carried out: one needs to do a pre-test and a post-test, have an experimental group that participated in the human rights education program and a control group that didn't, and be able to come up with statistical comparisons that show a significant difference between the experimental group and the control group.
While this has long been the standard method of evaluation, an increasing volume of literature argues that this approach, more characteristic of academic research, doesn't always work in the "real world," especially for programs that deal with on-going social problems. More important is sizing up your audiences, deciding at the beginning what kind of information they are going to need to make a decision, and giving them the information in time to affect decisions. More often than not, the decision they need to make will not require an elaborate evaluation. They need only answers to a few rather straightforward questions.
Michael Quinn Patton, an experienced program evaluator and author of Utilization-Focused Evaluation,1 a widely used book on evaluation, explains this alternative method succinctly:
What are some of the more commonly asked evaluation questions?
Some people are under the mistaken impression that there is a one-size-fits-all blueprint for carrying out evaluations. Unfortunately, this is not the case. The questions and methodologies will vary tremendously, based on the initial concerns that lead you to conduct an evaluation and your plans for using the results.
To guide you in this process, some of the more commonly asked evaluation questions are listed below. Which you use and what other questions you add will depend on your audiences and the information they need from the evaluation to make decisions.
Commonly Asked Evaluation Questions
Has the project achieved its objectives (e.g., successfully developed a new human rights education curriculum and materials, successfully trained staff in the curriculum and materials, successfully delivered courses)? If not, why?
Were the required resources for the program clearly defined (e.g., technical assistance, purchase of materials) and appropriate? If not, why? What actions were taken to address problems that might have arisen?
How well was the project managed? If management problems arose, what actions were taken to address them?
Did project activities take place on schedule (e.g., development of materials, design of curriculum, design and/or delivery of courses, radio/TV spots)? If there were delays, what caused them? What actions were taken to correct them?
Did the project have the desired impact (e.g., did it result in changes in knowledge, attitudes, and practices of teachers and/or students in the human rights arena)? If not, why? Did the project have any unintended impacts?
Is the project replicable and/or sustainable? Was it cost effective?
What were the lessons learned, both for others who might want to reproduce or adapt your project and for you if you want to expand it to other sites?
What are some tools for answering your evaluation questions?
An evaluator, like any expert, should have a "Tool Kit" containing a wide variety of evaluation tools ranging from highly quantitative to highly qualitative. The trick is to decide which tools are most appropriate, given the questions asked and the audience's information needs.
Below is a list of commonly used evaluation tools. Following each, for illustrative purposes, are circumstances in which you might want to make use of that tool.
Some of the More Commonly Used Evaluation Tools and When to Use Them
A. Structured questionnaires and interviews
When to use:
When you have specific information you want to obtain and know what your questions are (e.g., you want to get feedback on what trainees thought of a training course; you want to find out how participants used what they learned).
B. Interviews (semi-structured, open-ended)
When to use:
When you want to get at how the program has impacted an individual in terms of changes in attitudes and self-perception (e.g., participant in a human rights training program; someone the trainee has in turn trained). Semi-structured and open-ended interviews are especially useful in this context.
When you want to identify unintended results that you may not have anticipated and thus looked for in a structured interview or questionnaire (e.g., personal impacts, what participants have done with the training).
C. Pre- and post-tests
When to use:
At the beginning and end of a training course to assess what participants have learned or measure changes in attitudes.
D. Observations
When to use:
In a classroom, to see whether a trained teacher is appropriately integrating human rights into teaching and classroom management practices.
When a human rights training program is being piloted to assess how participants are reacting and interacting or to what extent participants understand and use the methodology and materials.
With a pre-established observation checklist to make sure you are observing aspects of specific interest.
E. Case studies
When to use:
You want to see what people do with the training within the context they are working and you want the flexibility to follow trails and/or examine the individual or program you are assessing within a broader cultural context.
F. Group interviews or focus groups
When to use:
When you lack the time and/or resources to conduct individual interviews.
When you want to obtain information that would be enriched by having interviewees interact with and listen to one another (e.g., What did participants think of the training program they attended? How did they use what they learned? How did it impact them and their communities?).
When you want to enrich the data obtained through individual interviews or to test out results from individual interviews with a larger group of individuals to see if you obtain similar responses.
G. Project records
When to use:
When you want to collect basic information that is already available in project records (e.g., how many people were trained and what their characteristics were, when the training took place, how many and what types of materials were distributed, how much they cost).
When, how, and in what combination these tools are to be used depends on the questions you are asking, as well as the time and resources available to carry out the evaluation.
For some examples of different evaluation scenarios and combinations of tools to be used, see Part VI, "Three Evaluation Scenarios," p. 150. For further information on evaluation tools, please see Evaluation in the Human Rights Education Field: Getting Started by Felisa Tibbitts.2
How do we go about selecting an evaluator?
You have decided you need an external evaluator. You have identified your audiences, you have received their endorsement of the evaluation, and with them you have begun to identify the key questions and the uses for the evaluation. How do you find an evaluator who suits your needs?
The first thing you need to do is develop a profile of the kind of skills your evaluator should have:
Is this evaluation going to require someone with strong quantitative skills (e.g., are you going to have to select a random sample and when you have the data, do statistical tabulations)? Or given the nature of your questions, are you looking for someone with strong qualitative skills? Perhaps you need someone with both.
How important is it that the person has extensive knowledge and/or background in human rights generally and specifically in the program being evaluated?
How closely will you want the evaluator to work with you and your staff (e.g., do you want to have him or her work independently or as a member of a team that includes some in-house staff)?
Do you have a clear idea of the evaluation design and just want someone to implement it? Or are you looking for an evaluator who will help you and your audiences fine-tune the evaluation questions and suggest the most appropriate methodology for answering those questions?
Do you want to hand the job to the evaluator and let him or her "run with it," or do you want to be directly involved throughout the process? If the latter, what does this mean for the kind of person you want to bring in as an evaluator?
Once you have the profile of your evaluator in mind and understand the type of involvement you want in the design and conduct of the evaluation, the next step is to reach out to other organizations in your community that have recently carried out evaluations. They do not necessarily need to be involved in human rights education. Ask them about their experiences with their evaluators. These inquiries may generate names of people you would like to interview, along with references you can check.
Another way to identify evaluators is to contact organizations in your area that specialize in educational evaluation. If you can describe the kind of evaluator you are seeking, they may be able to come up with some names.
Most important is selecting someone with whom you are comfortable, and not just because he or she has the requisite evaluation skills. Ultimately you need to select someone that you feel will listen to you, someone who will attempt to accommodate your needs, and if you decide to combine the efforts of an outside evaluator with people working in-house, someone who is a real team player.
What are some special challenges/opportunities for designing, implementing, and using evaluations in human rights?
Unlike more objective subjects such as mathematics and science, human rights education necessarily seeks to impart more than knowledge and the tools to apply that knowledge. It also involves addressing the core human rights values of respect, dignity, and tolerance, as well as recognizing that while we are all different, we are equal. Human rights education further requires that these lessons be learned not only intellectually, but also personally, taking action to live them in our classrooms, homes, and communities. These are the values that underlie the Universal Declaration of Human Rights.
Furthermore, education for human rights means working with individuals, both neophytes and seasoned activists, who come from different backgrounds and life experiences and who may be applying what they learn in human rights in different ways, depending on the community needs and their particular interests.
Measuring whether these concepts are well understood and applied, especially when evaluating impacts, raises a number of difficult questions: How can you ascertain that people are treating each other with respect as a result of a course in human rights? How can you determine whether participants in a course have increased their feelings of self-worth? Or their understanding of what to do when their rights are violated?
While "standard" evaluation methodologies such as surveys, questionnaires, and tests of knowledge are particularly good for seeing what people have learned, you may find that you need to exercise more creativity if a key objective of your evaluation is to see whether, as a result of a human rights education program, there have been changes in people's lives. In such cases, more qualitative instruments, like case studies, open-ended interviews, and observations, might be appropriate.
How do I use evaluation results in making DECISIONS about my program?
Finally, you should keep in mind that data from evaluations is typically only one of several sources of information used by an organization in making decisions. There will inevitably be some considerations of a "political" nature (e.g., how will influential people in the community receive the decisions to be taken with the evaluation data? Will taking this decision unnecessarily alienate some members of the community?). And inevitably certain individuals in the audience will have their pre-conceived notions challenged and be made to feel insecure.
However, if the evaluation touches on the right questions, if the information has been packaged in a way that decision makers can readily use it to make their decisions, and if the audience has "bought into" the evaluation from the start, the chances are good that the evaluation will be put to good use.
Once you have completed the evaluation, the challenge is to bring the information to the attention of the decision-makers when they make the decisions. There is no substitute for being in the right place, at the right time, with the right information (e.g., figuring out when the decisions that the evaluation is to inform will be made and getting the evaluation report into the hands of the right people so that they can actually use it). An ideal way of accomplishing this is to get a briefing on the evaluation results onto the agenda of the meeting where the decision is to be made and to make sure that, before the meeting, the decision makers have a packet summarizing the results and their implications for the decisions to be taken.
In other words, having completed the evaluation, you are now the advocate bringing to your decision-makers the information that you want them to use in making their decision. But keep in mind that in reality evaluation results are often only one important source of information among several that will be used in making a given decision.
1 Michael Quinn Patton, Utilization-Focused Evaluation (Thousand Oaks, California: Sage Publications, 1997), 234-5.