Think about the following questions as you read Chapter Nine:
When you finish reading, you should be able to answer the questions as they relate to the scenario as well as to the chapter material.
All the efforts you have expended up to this point in designing and conducting the evaluation are of little consequence if you do not communicate the findings to the people who need them. Your reporting can be oral or written, informal or formal, rough or polished. If you have completed only a formative evaluation, your reporting could take the form of oral comments provided to program staff during a staff meeting. However, the findings could also be written up in a formal interim evaluation report that the program director could submit to the funding source. Although staff may accept interim reports in either written or oral form, federal grants frequently demand them in writing. A summative or final evaluation report is usually written and comes at the end of a program cycle.
Keep in mind that as the evaluator, you were appointed or hired by someone (for example, a program director, a funding agency, or others). You direct and submit your evaluation report to that person or entity. As a rule, if the sponsor contracted you, write the report to address the sponsor's needs; if the program director or staff contracted you, write the report to address their needs.
If you are writing an evaluation report for the program director, make it clear that the report is primarily for his or her use. Many program directors will take your final evaluation report and submit it, unchanged, as their final report on the program to the sponsor. Program directors who know better will instead incorporate your report into their own final report to the sponsor. Why? Because the evaluator's report is supposed to be objective, whether or not the evaluator is a part of the program. The findings may be supportive or critical of the program's operations, and the program director needs to address the findings and attempt to explain them in the final report to the sponsor.
One more point about the evaluator's report: the data or information gathered can far exceed the scope of the evaluation, as you will see when you read the sample evaluation report at the end of the book. However, for the evaluation to be meaningful, the report must stick to the questions on which it was based.
Most evaluations will call for a report that summarizes the history of the program as well as the goals, the methodology of the evaluation, findings, interpretations, conclusions, and recommendations. Summative evaluation reports need supporting tables, graphs, charts, or case studies that address targeted results. The important question that dictates the orientation of your reporting is, Who is your audience for the report? The answer provides you with a focus for your interpretations and recommendations. (See the Sample Evaluation Report for the Zoo in the Community program at the end of the book for an example.)
Depending on the evaluation audience (or focus), the style and length of the report will vary. An in-house evaluation to improve the way a program is implemented will require a different report from the summative evaluation report for a funding source.
Throughout the report preparation, the evaluator uses communication strategies that are appropriate to the decision-making audience. For example, before an in-house evaluation report reaches its final draft, your strategies might include circulating a draft among staff, colleagues, or participants; that draft can contain important findings that may or may not appear in the final report. Such findings might be included in the draft in order to raise questions that go beyond the scope of the evaluation and to provoke future discussion (Wholey, Hatry, and Newcomer, 1994, 2004). Others' perspectives, gleaned from responses to earlier drafts, can even be incorporated into the final draft.
How do you determine the focus of the report? The easiest way is to return to your evaluator's program description and the evaluation questions found there. If, for example, the funding agency wants to know the efficacy of having increased the funding by 10 percent, your report immediately has a focus. As illustrated in Table 9.1, the questions asked at the formative and summative stages of the program planning cycle, along with the kinds of questions addressed in a comprehensive report, may help you find one or more focal points for your audience. Sometimes you will need to write several different reports to address the needs of, for example, a funding agency, a school board, teachers, and parents.
Table 9.1 Focus Points for Evaluation Reports.
| Program Planning Stage | Philosophy and Goals | Needs Analysis (Participants) | Program Planning | Program Implementation | Evaluation (for Yourself and Others) |
|---|---|---|---|---|---|
| Focus Point | Return on Investment (ROI) | Client Needs | Activities | Results | Evaluation Design |
| Formative (Process) | An agency may be looking at outcomes in terms of how they addressed its goals and objectives. Were the results in line with the mission statement? | How well did the program address the needs? To what extent has the program helped clients? What effect has the program had on the client system? (Satisfaction measures can be used in the report.) Did the procedures (activities) address the specified needs of clients? Did client needs change as clients went through the procedures? | Were the procedures appropriate and planned to address needs? Did the procedures include contingency plans? Did they allow for altering an activity or substituting a different one if necessary, or for starting a second class in the midst of the program? | To what extent was the program conducted effectively and efficiently? Did the procedures work efficiently and effectively? Did the program go smoothly, leading to total implementation as scheduled? | Did the evaluation design look at the kinds of things the program needed to look at? Did it address the questions that needed to be addressed? Did the procedures collect appropriate data, and collect them appropriately? |
| Summative (Product) | Were the procedures in line with acceptable program practices? | Did the results of the program significantly affect the client population? For example, did the program teach new managers to write? | Did the results occur as a direct result of the activities, or is it impossible to say? | | Did the evaluation address the hard questions that needed to be addressed? Did any biases enter into the procedure? Did the evaluator help to change activities as the program went on? |
| Report to | Powers That Be | Director, staff, clients, community | Director, staff | Sponsor, director, staff, community | Director |
| Report Content | Report on outcomes of the program and whether outcomes were in line with program practices. Can recommend alteration of processes, of the entire program, of specific objectives, or of the entire goals of the agency. | Report on the extent to which outcomes and procedures fulfilled the needs of clients, regardless of the organization's goals. | Report on the appropriateness of the planned program and activities as well as on meeting the needs of clients and the mission of the agency or organization. Answer this question: Did the program plan have an impact on both clients and agency? Recommend alterations of activities or of the entire program, as well as adoption or adaptation of activities. | Report on the effectiveness of planned activities, the efficiency of staff and activities, and the extent of impact on clients. Can recommend alterations to activities and/or staff, staff development, and client preparation. (At what levels should clients be included in or excluded from treatment?) Sponsors want these answers. | Report on the evaluator's effectiveness, the evaluation's appropriateness, and the design's rigor. Could recommend alteration of any part of the design, or a change of evaluator. |
To stay organized, you can use an outline that will direct your report writing. Although it represents just one of several possible formats, the following outline is a reliable one. A discussion of each section follows. Note that the Zoo in the Community evaluation (the sample report at the end of the book) uses a slightly different and abbreviated format because it was prepared for the project director, who already had many of the facts regarding the history and background of the program.
Outline of an Evaluation Report

| Suggested Organization | Section Title | Suggested Order for Writing the Sections |
|---|---|---|
| Section 1 | Summary | Last |
| Section 2 | Purposes of the evaluation | First |
| Section 3 | Background information concerning the program | Second |
| Section 4 | Description of the evaluation study and design | Third |
| Section 5 | Results | Fourth |
| Section 6 | Discussion of the program and its results | Fifth |
| Section 7 | Conclusions and recommendations | Sixth |
Section One provides a brief overview of the evaluation report: it summarizes the purpose of the evaluation, gives a history of findings from previous evaluations (if any), and lists major conclusions and recommendations. Designed for the person too busy to read the full report, the summary should be no more than one or two pages long. Although the summary appears at the beginning of the report, it is written last so that the evaluator has the benefit of all the interpretations, conclusions, and recommendations he or she will make.
Section Two, the purposes of the evaluation, could run a few paragraphs or a chapter, depending on your needs. It describes what the evaluation did and did not intend to accomplish. In effect, this section describes the assignment that the evaluator accepted, and as such it can be prepared immediately after the evaluator accepts the assignment. A draft of this statement can then be agreed upon by all interested parties and kept on file.
Section Two stems from the evaluator's program description and addresses the following questions:
This section sets the program in context, describing how the program was initiated and what it was supposed to do. If the evaluation audience consists of individuals who have no knowledge of the program, this section needs to be detailed. But if people who are familiar with the program will read the evaluation report, then Section Three can be a brief setting down of facts “for the record.”
A draft of Section Three developed in the planning stages of the evaluation will ensure that the evaluator has a clear grasp of the program, including what is and is not supposed to be accomplished. The draft could then be circulated to program personnel for their comments. Typical content might address the following questions:
Section Four describes the methodology of the evaluation, including the evaluation design for each evaluation question. To engender faith in the conclusions of the evaluation, you need to detail exactly how the information was obtained. Include a discussion of the evaluation model used (discrepancy, goal-free, or other), the sample, and the instruments, plus how you collected and analyzed the data.
Depending on the model you followed, the way you report the design may change. For example, if you followed a goal-based model, you would describe your collection and analysis procedures in terms of how they addressed the objectives presented in the program description or proposal. If, however, you followed a goal-free model, you would describe the data collection instances and opportunities that arose during the evaluation. Interpretations and conclusions would then rest on objective attainment for the goal-based model, or on key accomplishments and shortcomings for the goal-free model.
A draft of Section Four should be written as the evaluation is being planned so that it can be circulated to the program personnel for their comments. Obviously, obtaining agreement beforehand about what will constitute a fair measure of the program will increase the credibility of the results.
Section Five presents the data that were collected from the various data sources. If the sources were reliable and valid, these data become the "hard data" that people talk about. In addition, Section Five may include some "soft data," such as anecdotal evidence or testimonials about the program. Also include unexpected effects: outcomes that staff did not anticipate but that nevertheless occurred.
The results section should be written after all the data have been analyzed for content (in the case of interviews), recorded in tables, graphed or plotted, and tested for significance where appropriate. Test scores are usually presented in tables showing means and standard deviations for each group. (See Appendix A for details.) Results of questionnaires are frequently summarized on a facsimile of the questionnaire itself.
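If it helps to see the mechanics, here is a minimal sketch, in Python, of computing and laying out the group means and standard deviations that such a table reports. The group names and scores are entirely hypothetical, invented for illustration; they do not come from any program discussed in this book.

```python
# A minimal sketch: tabulating hypothetical posttest scores by group.
from statistics import mean, stdev

# Hypothetical scores, invented for illustration only.
scores = {
    "Treatment group": [78, 85, 92, 74, 88, 81],
    "Comparison group": [70, 72, 80, 65, 77, 69],
}

# Print a simple results table: group, n, mean, standard deviation.
print(f"{'Group':<20}{'n':>4}{'Mean':>8}{'SD':>8}")
for group, values in scores.items():
    print(f"{group:<20}{len(values):>4}{mean(values):>8.1f}{stdev(values):>8.2f}")
```

Note that stdev computes the sample (not population) standard deviation, which is typically what evaluation reports present.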
In the discussion section of the report, you want to interpret the evaluation findings. What do the findings say? What are the implications for clients, sponsors, staff? How did the results affect the students, trainees, or others?
Conceivably, Section Six could be included in Section Five along with information about the results. However, if the program or evaluation is quite complicated, it may be preferable to have a separate section for interpreting and discussing the results. The results should be discussed with particular reference to Section Two, the purposes of the evaluation. Also, here is where you, as the evaluator, have an opportunity to attempt an explanation of the findings—both positive and negative. Typical content includes answers to the following questions:
Either at the beginning or end of Section Six, if you can, provide a cost-benefit summary table in which both dollar and nondollar costs and benefits are listed. Distinguish operating costs from start-up costs, because the latter will not be incurred if the program is repeated. And be sure to distinguish costs and benefits for which the evaluation has produced sound evidence from suspected costs or benefits that have not been substantiated by objective procedures.
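To make the idea concrete, a cost-benefit summary for a hypothetical one-year training program might look like the following; every item and figure here is invented for illustration.

| Item | Category | Amount | Evidence |
|---|---|---|---|
| Curriculum development | Start-up cost (dollar) | $12,000 | Invoices (sound) |
| Instructor salaries | Operating cost (dollar) | $30,000 per cycle | Payroll records (sound) |
| Staff time diverted from other duties | Operating cost (nondollar) | n/a | Staff interviews (suspected) |
| Improved writing skills among new managers | Benefit (nondollar) | n/a | Pre/post writing samples (sound) |
| Reduced editing time for supervisors | Benefit (dollar) | $8,000 per year | Supervisor estimates (suspected) |

Flagging each entry as sound or suspected keeps the distinction the text calls for in plain view.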
You may want to present this last section in the form of a list rather than in narrative form. Whatever the format, as evaluator you will advance recommendations for future steps: the short-term and long-term actions that will improve the program even further. Because this section is the most influential part of the report, you need to emphasize what is important and make clear which conclusions must be held tentatively rather than firmly. Take care, too, that this section attends to all the concerns raised in Section Two, which described in detail the purpose of the evaluation.
Some evaluators feel that they should not make recommendations at all but merely report evaluation data to decision makers. Others feel that the evaluator should make recommendations. In actual practice, the evaluator is frequently asked to make recommendations regarding the program or subsequent evaluations of the program or both. For example, should some instruments be modified or discarded for subsequent evaluations? Should a different design be used in a future evaluation?
The evaluator has an ethical responsibility to make explicit the value base upon which recommendations are made. For example, an evaluator who believes in the merits of the developmental view of learning might make different recommendations than one who advocates objectives-based instruction. More than likely, you were selected as the evaluator because of your expertise both in evaluation methodology and in the professional arena of the program, and the recommendations section is where you really earn your money: the contractor expects your recommendations for program improvement to be grounded in the best current practices in the field. The evaluator therefore needs to state clearly the perspective from which the recommendations are made.
The program planning team may also wish to make recommendations. Before writing this final section of the report, therefore, the evaluator might distribute copies of the results of the evaluation to the program staff. As a group, they can discuss the implications of the results with the evaluator, identifying those recommendations that are supported by the data. Both the staff's and the evaluator's recommendations can then be incorporated into this final section.
As a final caution, remember that objectivity is crucial. Most evaluation reports will contain both positive and negative findings, and sometimes findings that are not absolutely certain. Events outside the evaluation may obstruct or hinder the data collection; these events need to be reported and explained fully.
A hard reality of the evaluation game is that as an evaluator, you are a "hired hand." As an external evaluator, you will downplay this point so that you have access to authentic and credible data; objectivity is the goal, and presenting yourself as a professional performing a task that will benefit the program is key to a successful evaluation. Alternatively, as an employee who works on the program, you may take on an additional, "outside" role as the evaluator. For example, as you read in the Chapter Nine scenario, Judy Hallowell will not write the final report on the Grandview program; a Grandview staff member (or subcommittee) will do that, and that person or group will need to stress this outside role so that people will accept the author's objectivity. Regardless of the model you follow, objectivity is what makes the evaluation results useful and credible to the audience.
Regardless of how credible and objective you are, however, if no one can read or understand your final report, you are ineffective as an evaluator. That report must address the questions that were posed early on by the staff, sponsors, or clients. You will be deemed a good evaluator according to your ability to answer those questions, keyed to the findings of the evaluation, in a format appropriate to the audience.
Interim evaluation report: A report, delivered either orally or in writing during the development or improvement of a program or project; prepared for the in-house staff
Summative evaluation report: A report written and delivered after the completion of a program or project for a funding agency, management, or other decision maker
Focus of the report: The chief concern of the audience for the report
Now that you have read Chapter Nine, return to the questions that you were asked to keep in mind at the beginning of the chapter:
Answer those questions in two ways:
Were Ruth's concerns about writing a report for the administration legitimate ones?
How do Ruth's concerns play right into the chapter's discussion of focus?
Who are the other stakeholders interested in seeing the evaluation report?