Learn how to decide exactly what to evaluate and the steps you'll need to take to design, implement, and use the evaluation.
Chapters 36-39 of the Community Tool Box concern the evaluation of community programs. We've chosen to devote so much space to evaluation because it's one of the most important parts of any effort to improve community life and bring about lasting social change. It can help you to better understand how well the planning and preparations for your program went, whether you implemented it as you meant to, and what the consequences were. It can tell you whether you met your original objectives and goals or not, and give you information about what you need to change to be more effective.
Chapter 36 explains how an evaluation works, and gives some guidance as to how to develop it. Chapter 38 will assist you in gathering and analyzing the information you want, and Chapter 39 deals with how to use that information to improve your program and garner community and funding support. Here, in Chapter 37, we look at evaluation from a research point of view - how to plan and structure an evaluation that can help you better understand and improve what you do.
In this section, we'll discuss the first, and perhaps most important, step in evaluation research: deciding exactly what to evaluate. Each of the rest of the sections in the chapter will deal in detail with one of the steps you'll need to take to design, implement, and use the evaluation. The goal of the chapter is to provide guidelines that are useful to grassroots or community-based organizations as well as students or academic researchers.
Every evaluation, like any other research, starts with one or more questions. Sometimes, the questions are simple and easy to answer. (Will we serve something close to the 50 people we expect to?) Often, however, the questions can be complex, and the answers less easy to find. (Which, or which combination, of the three parts of our intervention will affect which of the two behavior changes we seek within participants?) The questions you ask will guide not only your evaluation, but your program as well. By your choice of questions, you're defining what it is you're trying to change.
You choose your evaluation questions by analyzing the community problem or issue you're addressing, and deciding how you want to affect it. Why do you want to ask this particular question in relation to your evaluation? What is it about the issue that is the most pressing to change? What indicators will tell you whether that change is taking place? Is that all you're concerned with? The answer to each of these and other questions helps to define what it is you're trying to do, and, by extension, how you'll try to do it.
For example, what's the real goal of a program to introduce healthier foods in school lunches? It could be simply to convince children to eat more fruits, vegetables, and whole grains. It could be to get them to eat less junk food. It could be to encourage weight loss in kids who are overweight or obese. It could be to educate them about healthy eating, and to persuade them to be more adventurous eaters.
The evaluation questions you ask both reflect and determine your goals for the program. If you don't measure weight loss, for instance, then clearly that's not what you're aiming for. If you only look at an increase in children's consumption of healthy foods, you're ignoring the fact that if they don't cut down on something else (junk food, for instance), they'll simply gain weight. Is that still better than not eating the healthy foods? You answer that question by what you choose to examine - if it is better, you may not care what else the children are eating; if it's not, then you will care.
Academics and other researchers may approach choosing research questions differently from those involved in community programs. In addition to their practical and social applications, they may choose problems to research simply because they are interesting, or because they tie into other work that they or their colleagues are doing. Community service workers and others directly involved in programs, on the other hand, are concerned specifically with improving what they're doing so they can help to enhance the quality of life for the participants in their programs, and often for the community as a whole. Since we assume that most people using this chapter of the Tool Box are likely to be practitioners in the community, let's look at some of the reasons they might pick a particular area to evaluate.
If you're running, or about to run, a program to affect a community issue or problem, you might want to know one or more of the following:
Is there a cause-and-effect relationship (i.e., does one action or condition directly cause another) between a particular action and a particular change? Usually, you'll be concerned with this in terms of your program. (Does our smoking-cessation support group help members to quit smoking?) Sometimes, however, it might be important to look at it in terms of the community. (Does a smoking ban in public buildings, bars, and restaurants lead to a decrease in the number of community residents who smoke?)
If we try this new method, what will happen?
Will the program that worked in the next town, or the one that we read about in a professional journal, work with our population, or with our issue?
Some of the same differences between the concerns of researchers and the concerns of practitioners may hold here. Those interested primarily in research may simply be moved by curiosity or by the urge to solve a difficult problem. As a practitioner, on the other hand, you'll want to know the effects of what you're doing on the lives of participants or the community.
Your interest, therefore, might grow from:
Your interest as a community worker has to be considered in relation to your evaluation and the purpose of your program. Your basic intent is probably to improve things for the population or the community, but in what ways and by what means? Are you trying out some new things in the hope of making an already-successful program more successful? Are you importing a promising practice to see if it works with your population? Are you trying to solve a particularly difficult professional problem?
A community mediation program found that it was having little success in cases involving adolescents. After conferring with other similar programs - all of which were struggling with the same issue - mediators in the program devised a number of strategies to try to reach youth. The overall question they were concerned with - "Will these strategies make it possible to mediate successfully where teens are involved?" - was one with real consequences.
Media reports about or community attempts to address the issue are clear indicators that it is socially important. If it affects a particular group - violence in a given neighborhood, a high rate of heart disease among middle-aged Black males - it has an obvious impact on the community and society. If your program or intervention has the potential to help resolve the issue in other places, to be used by community workers in other fields, or to be applied in a number of ways, the importance of your analysis increases even further. If addressing the issue can lead to long-term positive social change, then the analysis is vitally important.
All of this affects your evaluation and the questions you ask. If the issue is one of social importance, then your evaluation of your work is socially important as well. Are you addressing the aspects of your program or intervention that are of the greatest value to participants, the community, and society? If not, how might you begin to do so?
The real question here is not whether the issue is important to the field - if it's important to the community, that's what matters. However, you should explore whether there's evidence from the field to apply to the issue. Is what you're doing likely to be more effective than other approaches that have been tried? If your approach isn't effective, are there other approaches out there that hold more promise? Can the published material about the issue help you understand it better, and give you better ideas about how to address it?
Consider whether there is evidence that the issue occurs with a variety of populations and under a range of conditions. Also consider whether the observations or methods used to determine the issue's existence are accurate and whether they can be used in different situations and with different groups. Your evaluation may give you valuable information to pass on to practitioners in different fields or different circumstances.
If evaluation shows that your program or intervention is successful, that's obviously valuable information, especially if what you're evaluating is innovative and hasn't been tried before. Even if the evaluation turns up major problems with the intervention, that's still important information for others - it tells them what won't work, or what barriers have to be overcome in order to make it work.
Some of those who might use your results include individuals and groups affected by the issue; service providers and others who have to deal with the problem (in the case of youth violence, for instance, this last group might include police, school officials, small business owners, parents, and medical personnel, among others); advocates and community activists; and public officials and other policy makers.
Who has to change in order to address the issue? The focus of the intervention will tell you whom the evaluation should concentrate on.
Some possibilities:
You know why you're running your program. Evaluating it should just be a matter of deciding whether things are better when you evaluate than they were before you started, right? Well, actually, wrong. It's not that simple. First, you need to determine what "things" you're actually looking at (remember the school lunch example?). Second, you need to consider how you'll determine what you're doing right, and what you need to change. Here's a partial list of reasons why choosing questions beforehand is important.
Evaluation questions, since they help shape your work, should be chosen - and the evaluation planned - at the same time as the overall program or effort. That gives you time and room for a participatory process, and gives you the chance to use the evaluation as an integral part of the program. As the program unfolds, you might find yourself adjusting or adding questions to reflect the reality of what is happening, but unless your original questions were misguided (you were wrong about what behavior had to change in order to produce certain results, for instance), they should serve you well.
Now let's discuss the reality for many community-based and grassroots programs. They're often understaffed and underfunded. Staff members may be underpaid, and may often work many more hours a week than they're paid for, because of their dedication to social justice and social change. Most or all program staff may even be volunteers, with full-time jobs and family responsibilities aside from their work in the program. Initial evaluation in these circumstances is often anecdotal - i.e., based on participants' comments and stories about their progress and on staff members' personal, informal observations. A formal evaluation will probably wait until there's funding for it, or until someone has the time to coordinate or take charge of it.
In that case, the "when" becomes "as soon as you can." You may be dealing with a program that has just started, or with one that's been operating for a long time. You may know that changes need to be made, or it may seem that the program is in fact meeting its goals. Whatever the situation, evaluation questions need to be chosen, and an evaluation planned that will give you the information you need to improve your work. Even with a program that's been going on for a while, the questions can still help you define or redefine your work, and will certainly help you improve it over the long term.
If you've consulted other sections of the Tool Box concerned with evaluation, you probably know that we advocate that all stakeholders be involved in planning the evaluation. We believe that the best evaluation is participatory. That means that there is representation of the views and knowledge of people affected by the issue to be addressed. The list of potential participants is essentially the same as that under "Whose problem is it?" in the first part of this section: those directly affected and their close contacts; those who work with those directly affected, or who deal directly or indirectly with them and the issue; and public officials. To these groups, we might add other concerned citizens, and those indirectly affected by the issue. (A shop owner may not be a victim of neighborhood violence, but fear of that violence might nonetheless keep customers away from his shop, for instance.)
Evaluations that involve all stakeholders have a number of advantages over those conducted in a vacuum by outside evaluators or agency or program staff. They're more likely to reflect the real needs of the community, and they bring to bear the community's knowledge of its own context - history, relationships, culture, etc. - without which a program and its evaluation can go astray.
Participation can range from simple consultation before the fact to complete involvement in every aspect of an evaluation - assessment, planning, data gathering, analysis, and passing on the information. In general, the greater the involvement of stakeholders, the better, but in-depth involvement of the stakeholders may not always be possible. There are time disadvantages to participatory evaluation - it takes longer - and there are logistical concerns, as well. Participants may have nothing in their backgrounds to prepare them for research, so training in a number of areas may be necessary, requiring skill, careful planning, and yet more time. The level of participation your evaluation can sustain, therefore, relies to some extent on your time constraints and your capacity to train and support participants.
Choosing questions
When you choose evaluation questions, you're really choosing a research problem - what you want to examine with your research. (Evaluation, whether formal or informal, is in fact research.) You have to analyze the issue and your program, consider the various ways they can be looked at, and choose the one(s) that come closest to telling you what you want to know about what you're doing. Are you just trying to determine whether you're reaching the right people in sufficient numbers with your program? Do you want to know how well an intervention is working with specific populations? What kinds of behavior changes, if any, are taking place as a result? What are the actual outcomes for the community? Each of these - as well as each of the many other things you might want to know - implies a different set of evaluation questions. To find the questions that best suit your evaluation, there is a series of steps you can follow.
Describe the issue or problem you're addressing
A problem is a difference between some ideal condition (all people 10 years of age or older should be able to read; people should be able to find a decent job) and some actual condition in the community or society (a 25% illiteracy rate among those attending a particular high school; 50% unemployment among minority youths in a particular city). This may mean the absence of some positive factor (qualified teachers and adequate educational facilities; entry-level jobs that are reachable from minority neighborhoods) or the presence of some negative factor (students' difficulty with English; discrimination against minority job applicants), or some combination of these.
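To make the idea concrete, here's a minimal sketch in Python of a problem expressed as the gap between an ideal condition and a measured condition for a single indicator. The indicator and the figures are hypothetical, chosen only to mirror the literacy example above.

```python
# A problem framed as the gap between an ideal condition and the
# actual, measured condition for one indicator. All figures here are
# hypothetical, echoing the literacy example in the text.

ideal_literacy_rate = 1.00    # ideal: everyone 10 or older can read
actual_literacy_rate = 0.75   # measured: 25% illiteracy at the school

gap = ideal_literacy_rate - actual_literacy_rate
print(f"Literacy gap: {gap:.0%}")   # -> Literacy gap: 25%
```

Tracking the same indicator over the life of the program then shows whether the gap is closing.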
To describe the issue or problem:
Describe the importance of the problem
To be sure that this is a problem you really should be addressing, consider its importance to those affected and to the community.
You might also ask whether the effects of the problem matter to society, but in fact, that shouldn't make a difference. If they matter to the people who experience them, they're important. Society doesn't always consider a problem important if it's only a problem for a minority, or for a group that's generally ignored (the poor, the homeless).
In light of these factors, decide whether the problem is important to the evaluation.
Describe those who contribute to the problem
Whose behavior, by its presence or absence, contributes to the problem? Are they in the program participants' personal environment (participants themselves, family, friends), service environment (teachers, police), or broader environment (policymakers, media, general public)? For each of them, consider the types of behavior that, by their presence or absence, contribute to the discrepancy that constitutes the problem.
Assess the importance and feasibility of changing those behaviors
Describe the change objective
Based on the above analysis, choose behavior changes to target in specific people. Where you can, specify the desired levels of change in targeted behaviors and outcomes (those changes in conditions that should occur if the problem were to be solved).
For example, a behavior change goal might be an increase in pre-employment capacity - self-presentation, job-seeking, interview skills, interpersonal competence, resume writing, basic skills, etc. - for minority job seekers aged 18-24. Or you might instead or in addition target policy makers, with the goal of having them offer tax incentives to businesses that locate in or close to minority communities.
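One way to keep such objectives explicit - and evaluable - is to record each targeted group and behavior together with a baseline level and a desired level. The sketch below is a hypothetical illustration in Python; the groups, behaviors, and percentages are invented, loosely following the example above.

```python
# Recording change objectives: each entry pairs a targeted group and
# behavior with a baseline level and a desired (target) level.
# All names and numbers are hypothetical.

change_objectives = [
    {"who": "minority job seekers aged 18-24",
     "behavior": "demonstrates interview and job-seeking skills",
     "baseline": 0.20, "target": 0.60},
    {"who": "policy makers",
     "behavior": "tax incentives adopted for businesses near minority communities",
     "baseline": 0.00, "target": 1.00},
]

for obj in change_objectives:
    print(f"{obj['who']}: {obj['behavior']} "
          f"({obj['baseline']:.0%} -> {obj['target']:.0%})")
```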
This is a way of defining your work. If you're planning the evaluation as you plan the program - as you would in the ideal situation - then the questions you're asking the evaluation to examine reflect the problems you're trying to solve, and this kind of analysis is important. If you're starting an evaluation of a program that has been in place for some time, then you're going to have to do some figuring after the fact about what consequences you think (hope) the program is having, and what they will lead to. You may be talking about changes in specific participant behaviors, about behaviors that act as indicators of other changes, or about results of another sort (participants gaining employment, for instance, which may have a direct relationship to participant behavior or may have more to do with local economic conditions).
Make sure that the expected changes would constitute a solution or substantial contribution to the problem
If you conclude that they would not result in a substantial contribution, revise your choice of problem and/or your selection of targeted people and actions as necessary. If you think that what you're looking at in an evaluation doesn't address the problem, then you should be looking at something else. If the objectives you've chosen do constitute all or a substantial part of a solution, you've found your questions.
Now that you've chosen your questions, there may be other factors to consider, such as the settings in which the evaluation will be conducted. If your program is relatively small and/or has only one site, this wouldn't be an issue. However, if you don't have the resources - whether finances, time, or personnel - to evaluate the whole program, there are some situations in which the choice of setting may be important:
Multiple sites
Multiple sites can present a challenge for an evaluation because, although every effort may be made to make the program exactly the same at all sites, it will seldom be so. If the program relies on human interaction - teacher/learner, counselor/counselee, trainer/trainee, doctor/patient, etc. - there will be differences from site to site depending on the people staffing each. (The exception is when the same people staff all sites, providing the same services at each site at different times or on different days.) Even if all are equally competent, no two staff members or teams will do things in exactly the same way or relate to participants in exactly the same way, and the differences can be reflected in differences in outcomes. If methods or other factors vary from site to site, that will further complicate the situation.
Furthermore, the physical character of a site can influence not only program effectiveness, but also the recruitment of participants and whether or not they remain in the program long enough for it to have some effect (often called "retention"). The site's layout, comfort, apparent safety and security, and - often most important - how easy it is to get to all affect whether participants enroll and stay in the program.
Where you do have the capacity to evaluate all sites, it will be helpful to build into the evaluation a method of comparing them. That way you can identify the methods, conditions, or activities that seem to make one site particularly successful and adopt them everywhere, and identify those that seem to create barriers to success and change them.
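As a sketch of what a cross-site comparison might track, consider the following Python example. The site names, counts, and measures are all invented; "retention" and "success" here stand in for whatever indicators your evaluation actually uses.

```python
# A minimal cross-site comparison, assuming you record, per site, how
# many participants enrolled, how many stayed long enough to count as
# retained, and how many met the outcome measure. All data are invented.

sites = {
    "Downtown":  {"enrolled": 80, "retained": 60, "met_outcome": 36},
    "Northside": {"enrolled": 50, "retained": 45, "met_outcome": 18},
    "Westside":  {"enrolled": 65, "retained": 30, "met_outcome": 21},
}

for name, s in sites.items():
    retention = s["retained"] / s["enrolled"]
    success = s["met_outcome"] / s["retained"]  # among those retained
    print(f"{name}: retention {retention:.0%}, success rate {success:.0%}")
```

A pattern like Westside's low retention in this invented data, for example, would point you toward the site conditions - location, comfort, safety - discussed above.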
If you can't evaluate each site separately, you'll have to decide which one(s) will give you the information that will most help in adjusting and improving your program. If you're most concerned with assessing your overall effectiveness, this may mean evaluating the site(s) closest to the program norm, in terms of methods, conditions, activities, goals, participant/staff interaction, etc. If, on the other hand, your chief consideration is learning whether a particular new or unusual method or situation is working, you may find yourself evaluating the site(s) least like the others.
If sites appear only minimally different, some other considerations that may come into play are:
Sites with different methods, conditions, activities, or services
Programs sometimes are organized so that different methods are used or different services provided at different sites. In other cases, conditions may vary from site to site because of the sites' geographical locations or the available space. The ideal situation is to evaluate all sites and compare the effects of the different methods, conditions, or services. When that's not possible, you'll have to decide what's most important to find out.
If the methods, services, or conditions at a particular site are new or innovative, you may want to evaluate them, rather than those that have a track record. There may be a particular method or service that you want to evaluate, in which case the decision about which site to choose is obvious. The decision should be based on what makes the most sense for your program, and what will give you the best information to improve its effectiveness.
When you have the capacity to choose more than one site to evaluate, it often makes sense to choose two or three sites that are different - especially if each is representative of other sites in the program or of program initiatives - so that you can compare their effectiveness. Even where sites are essentially similar, you'll get more information by evaluating as many as you can.
Another factor to consider is the participants whose behavior, activity, or circumstances will be evaluated. If your program is relatively small this might not be an issue - the participants will simply be all those in the program. However, if you don't have the resources - whether finances, time, or personnel - to evaluate the whole program, there are some situations in which the choice of participants may be important:
Multiple groups
There are a number of reasons why there might be multiple groups of participants in a program. You might start different groups at different times, either because the program has a rolling start schedule (when there are enough people for a class/training group, one will begin), or because the program is aimed at different groups (for example, 5-year-olds, 8-year-olds, and 14-year-olds). You might also be trying different strategies with different groups.
The Brookline Early Education Project (BEEP), a program aimed at school readiness for children from before birth through age 5, recruited expectant families in three cohorts over the course of three years. In addition, families in each cohort were assigned to one of three levels of service. Thus, there were actually nine different groups among BEEP participants, even though, by the third year, all were receiving services at the same time.
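The arithmetic of such a design is easy to see in code. This sketch simply crosses three cohorts with three service levels; the labels are ours for illustration, not BEEP's own terms.

```python
# Three cohorts crossed with three service levels yield nine distinct
# participant groups. Labels are illustrative, not BEEP's own terms.
from itertools import product

cohorts = ["cohort 1", "cohort 2", "cohort 3"]
service_levels = ["level A", "level B", "level C"]

groups = list(product(cohorts, service_levels))
print(len(groups))   # -> 9
for cohort, level in groups:
    print(f"{cohort} / {level}")
```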
Once again, if there's no problem in evaluating the whole program, participants will simply include everyone. If that's not possible, there are a number of potential choices:
Evaluate your work with only one group, with the expectation that work with the others will be evaluated in the future. In this case, you'd probably want to choose the one for whom you consider the program most crucial. They might be at greater risk (of heart attack, of school failure, of homelessness, etc.) or might be experiencing the issue at a high level of intensity (daily shooting incidents in the neighborhood, high rates of teen pregnancy, massive unemployment).
Include a small number (2-4) of groups in your evaluation. You might want to choose groups with contrasting characteristics (different ages, for example, or addressed by different strategies). On the other hand, depending on the focus of your evaluation, you might want groups that are essentially similar, to see whether your work is consistent in its effects.
Choose a few participants from each group to focus your evaluation on. While this won't give you a complete picture, it should give you enough information to tell where your program is accomplishing its goals and where it needs improvement. The differences in the ways participants in different groups respond to the program (assuming there are differences) can also give you ideas for ways to change what you're doing.
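If you take this last approach, one reasonable safeguard is to draw the focus participants at random from each group, so the evaluation doesn't unconsciously favor the most engaged people. Here is a minimal sketch, with invented rosters and a hypothetical sample size of five per group:

```python
# Drawing a small, random evaluation sample from each participant group.
# The rosters and the sample size of 5 are invented for illustration.
import random

rosters = {
    "5-year-olds":  [f"child-{i}" for i in range(30)],
    "8-year-olds":  [f"child-{i}" for i in range(30, 55)],
    "14-year-olds": [f"teen-{i}" for i in range(55, 75)],
}

random.seed(42)   # fixed seed so the selection can be reproduced
sample = {group: random.sample(people, 5) for group, people in rosters.items()}

for group, chosen in sample.items():
    print(group, chosen)
```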
Participants from different populations and cultures
Cultural factors can have an enormous effect on participants' responses to a program. They can govern conceptions of social roles, family responsibilities, acceptable and unacceptable behavior, attitudes toward authority (and who constitutes authority), allowable topics of conversation, morality, the role of religion - the list goes on and on. In planning a program that involves members of different populations and cultures, you essentially have three choices:
In any of these instances, it would probably be important to understand how well your approach is working with members of the various populations. If you can evaluate the whole program, make sure that you include enough members of each group so that you can compare results (and their opinions of the program) among them. If your evaluation possibilities are limited, then your choices are similar to those for multiple groups of other kinds, and will depend on what exactly is most useful for you.
There are interactions between the choice of sites and the choice of participants here. You may be concerned about the effects of your program on a particular population, which may be largely concentrated at one site. In that case, if you have limited resources, you may want to evaluate only that site, or that site and one other.
Regardless of other considerations, you may want to set some guidelines about whom you include in the evaluation. How long do people have to be in the program, for instance, before they're included? In other words, what constitutes participation? (This also sets a criterion for who should be counted as a drop-out: anyone who starts, but leaves before meeting the standard for participation.) What about those whose attendance is spotty - a few days here, a few days there, sometimes with weeks in between? Do they have to have attended a certain number of hours to be considered participants?
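Whatever standard you settle on, it helps to write it down explicitly so it can be applied consistently. The sketch below classifies people as participants or drop-outs against a made-up 20-hour attendance threshold; both the threshold and the names are hypothetical.

```python
# Classifying people as participants or drop-outs against an explicit
# participation standard. The 20-hour threshold and names are made up.

MIN_HOURS = 20   # hypothetical standard for counting as a participant

attendance_hours = {"Ana": 45, "Ben": 12, "Carla": 20, "Dev": 3}

participants = [p for p, h in attendance_hours.items() if h >= MIN_HOURS]
drop_outs = [p for p, h in attendance_hours.items() if h < MIN_HOURS]

print("Participants:", participants)   # -> ['Ana', 'Carla']
print("Drop-outs:", drop_outs)         # -> ['Ben', 'Dev']
```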
These issues can be more complex than they seem. People may start and drop out of a program numerous times, and then finally come back and complete it. Many others start programs numerous times, and never complete them. It's usually impossible to tell the difference until someone actually gets to the point of completion, whatever that means for the particular program.
In a reversal of the start-many-times-before-completing scenario, there can be a few people who stay in a program right up till the end and then drop out. This may have to do with the fear of having to cope with success and a change in self-image, or it may simply be a pattern the person has learned to follow, and will have to unlearn before being able to complete the program.
Should any or all of these people be included in or excluded from an evaluation, either before (because of their history in the program) or after the fact? That's a decision you'll have to make, based on what their inclusion or exclusion will tell you. Just be sure that your evaluation clearly describes the criteria that you decide to use for your participants.
Up to this point, we've largely ignored the evaluation difficulties faced by evaluators not directly connected with the organization or institution running the program they're evaluating. If you've been hired or designated by the organization or a funder to evaluate the program, you have to establish trust, both with the organization and its staff and with participants, if you hope to get accurate information to work with. You also have to learn enough in a short period about the community, the organization, the program, and the participants to devise a good evaluation plan, and to analyze the data you and others gather.
If you're an independent researcher - a graduate student, an academic, a journalist - you face even greater obstacles. First, you have to find a place to conduct your research - a program to evaluate - that fits in with your research interests. Then, you have to convince the organization running that program to allow you to do the research. Once you've jumped that hurdle, you're still faced with all the same tasks as an outside evaluator: establishing trust, understanding the context, etc.
Let's look first at the process you as an independent researcher might follow in order to choose and gain access to a setting appropriate to your interests. Once you've gained that access, you've become an outside evaluator, so from that point on, the course of preparing for the evaluation will be the same for both.
If you're an academic or student, you can probably find an appropriate program by asking colleagues, professors, and other researchers at your institution. If none of them knows of one offhand, someone can almost undoubtedly put you in touch with human service agencies and others who will. Other possible sources of information include the Internet, funders, professional associations, health and human service coalitions, and community organizations. Public funding information is often available on the web, in libraries, or in newspaper archives. The wider you spread your net, the more likely you are to find the program you're looking for.
The right program will obviously vary depending on your research interests, but some questions that will inform your choice include:
Once you've found an appropriate setting, you'll have to convince the organization to collaborate with you on an evaluation. The next three steps are directed toward that goal.
Just as you wouldn't go to a job interview without doing some research about the employer, you shouldn't try to gain the cooperation of an organization without knowing something about it - its mission, its goals, whom it serves, who the director and board members are, etc. If someone told you about the organization, she may have, or may know someone who has, much of the information you need. If the organization maintains a website, much of that information will be available there. If it's incorporated, the Secretary of State's office in the state of incorporation and/or other state offices will have information about the officers (i.e., the Board of Directors) and other aspects of the organization. Funding agencies may also have information that's a matter of public record, including proposals.
Find out whom (by name as well as position) you should talk to about conducting a research project in the organization you've chosen.
Depending on the organization, this could be the board president, the executive director, or the program director (if the program you're interested in is only part of a larger organization). In any case, it might be wise to involve the program director even if he's not the final decision-maker, since his cooperation will be crucial for the completion of your research.
There are several purposes for this meeting, besides the ultimate one of getting permission and support for your project (or at least an agreement to continue to discuss the possibility). They include:
Assuming that your presentation has been convincing, and you're now the program evaluator, the rest of the steps here apply to both independent researchers and outside evaluators.
This may play out differently for outside evaluators than it does for independent researchers, but it's equally important for both. It means finding out all you can about the community, the organization, the program, and the participants beforehand - the social structure of the community and where participants fit in it, the history of the issue in question, how the organization is viewed, relationships among groups and individuals, community politics, etc.
If you're an outside evaluator, you can pick the brains of program administrators, staff, and participants about the community, the organization, and the issue. Ask them to steer you to others - community leaders, officials, longtime residents, clergy, trusted members of particular groups - who can give you their perspectives as well. If possible, get to know the community physically: walk and/or drive around it, visit businesses, parks, restaurants, the library. Understanding how the issue plays out in the community, the nature of relationships among groups and individuals, and what life is like in the neighborhoods where participants live will help a great deal in analyzing the evaluation of the program.
If you're an independent researcher, learn as much about the context as you can before you contact the program. Websites (for the organization and/or the community) and libraries are two possible sources of information, as are community and organization literature and people who know the community. Learning about the community, the organization, and the participants beforehand will both help you determine whether this program fits with your research and help you advocate for its cooperation with your project. Once you have that cooperation, you can follow the same path as an outside evaluator (since that's what you are) to learn as much about the context of the program as you can.
This can be the most difficult part of an evaluation for someone from outside the organization. There's no magic bullet or predictable timeline, but there are several things you can do:
These steps apply to everyone, internal evaluators as well as external.
Aim for a participatory evaluation
We've discussed above the involvement of all stakeholders to the extent possible. Involving participants, program staff, and other stakeholders in participatory planning and research can often get you the most accurate data, and may give you entry to people and places you might not otherwise have access to. On the other hand, participatory planning and research, as we've explained, takes time and energy. If you have limited time, you may not be able to set up a fully participatory project. You can, however, still consult with stakeholders, and involve them in ways that don't necessarily involve training or large amounts of your time. They can help you line up interviews with participants or other important informants, for instance, and/or act as informants themselves about community conditions and relationships.
At least the people in charge of the program, and probably those implementing it as well, will expect to be part of the planning of the evaluation. They are, after all, the ones who need to know whether their work is effective, and how to improve it. Involving participants as well, in roles ranging from informants about context to actual researchers, is likely to enrich the quantity and quality of the information you can obtain.
Plan the evaluation, in collaboration with stakeholders
That collaboration should be at the highest level of participation possible, given the nature of the program, the time available, and the capacity of those involved. (If program participants are five-year-olds, they probably have relatively little to contribute to evaluation planning, but their parents might want to be involved.)
The actual planning involves ten different areas, each of which will be the subject of one of the remaining sections in this chapter:
Once the planning is done, it's time to get started on conducting the evaluation. And when you're finished - having analyzed the information and planned and made the changes that were needed - it's time to start the process again, so that you can determine whether those changes had the effects you intended. Evaluation, like so much of community work, is a process that goes on as long as the work itself does. It's absolutely essential to the continued improvement of your program.
Choosing evaluation questions - the areas of your work you'll examine as part of evaluating your program - is key to defining exactly what it is you're trying to accomplish. For that reason, those questions should be chosen carefully as part of the planning process for the program itself, so that they can guide your work as well as your evaluation of it. The more that stakeholders can be involved in that choice and planning, the more likely you are to create a program that successfully meets its goals and serves the community.
Choosing those questions well entails understanding the context of the program - the community, participants, the culture of any groups involved, the history of the issue and of the social structure of the community and the organization - and (if you're an outside evaluator without ties to the program) establishing trust with administrators, staff members, and participants. That trust will enable you to conduct a participatory evaluation that draws on the knowledge and talents of all stakeholders, and to plan an evaluation that fits the goals of the program and accurately analyzes its strengths and weaknesses. With that analysis in hand, you'll be able to make changes to improve the program. Then you're ready to start the whole process again, so you can evaluate the effects of the changes you've made.