We live in an age when elected officials, the media, and the public all insist on accountability. They
demand quality services in return for their financial support. With so many possible uses for our funds, how do we decide which programs are truly worthwhile?

Why Do An Evaluation?

Which of our services are producing adequate results? Which are not? Who is being helped by these services? Who is not? Where are improvements needed? Program evaluations can give valid, useful answers to these questions. The key question is: What does your program intend to accomplish? The answer should be in your mission statement. A program evaluation will tell you what is actually being
accomplished, so you can see how well your intentions and performance match up.

Evaluation Methods

Here are three ways to evaluate your organization's program:
1. Outcome Monitoring is the regular reporting of program results in ways that can be understood and
judged. Outcome monitoring keeps those responsible apprised of performance, allows problems to be
detected (and corrected) early, provides proof about program effectiveness, and boosts confidence in
the organization's ability to perform.

Since too much data can hide pertinent information, it is recommended that you monitor only a few key measures that will focus evaluators' attention on data relevant to program management. These measures should be easy to interpret and tied to performance expectations.

For example, let's say your organization is concerned with elementary education, and one of your goals is to improve the ability of children to learn a particular type of information. To measure the outcome of your work, you could give the children a very simple test before they start your program, then administer the same test at the end of the program. Comparing the results of the two tests should help you determine if your program is functioning as it should.
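As a rough illustration, here is a minimal sketch in Python (the children and scores are made up for the example) that compares each child's pre-test and post-test results, then reports the average gain and the share of children who improved:

# Hypothetical pre-test and post-test scores (0-100) for children in the program.
pre_scores  = {"child_01": 45, "child_02": 60, "child_03": 52, "child_04": 70}
post_scores = {"child_01": 58, "child_02": 67, "child_03": 50, "child_04": 85}

gains = [post_scores[child] - pre_scores[child] for child in pre_scores]
average_gain = sum(gains) / len(gains)
share_improved = sum(1 for gain in gains if gain > 0) / len(gains)

print(f"Average gain: {average_gain:.1f} points")
print(f"Children who improved: {share_improved:.0%}")

A positive average gain does not by itself prove the program caused the improvement, but tracked over time it gives you a simple, consistent outcome measure to report.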

2. Surveys are another good way to collect data for program evaluations. Surveys can help you collect statistically reliable data by asking your clients to rate the services they have received. To obtain quality survey results, you must choose your questions carefully, making sure that each one solicits
exactly the type of response that will help you evaluate your program.
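For instance, if clients rate your service on a 1-to-5 scale, a simple tally such as the Python sketch below (the ratings are invented for illustration) turns raw answers into figures you can compare from one survey period to the next:

from collections import Counter

# Hypothetical 1-5 satisfaction ratings collected from clients.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

average = sum(ratings) / len(ratings)
distribution = Counter(ratings)

print(f"Responses: {len(ratings)}")
print(f"Average rating: {average:.2f} out of 5")
for score in sorted(distribution, reverse=True):
    print(f"  Rated {score}: {distribution[score]} client(s)")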

3. Benefit-Cost Analysis attempts to assess service programs by determining whether total welfare has increased because of the program. To perform such an analysis, you need to:
-- Determine the benefits of the program,
-- Place a dollar value on each benefit,
-- Calculate the total costs of the program,
-- Compare the benefits and the costs.

Usually, the most difficult aspect of this analysis is placing a dollar value on the benefits. For example, what is the dollar value of saving a human life?
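Once dollar values have been assigned, the arithmetic itself is simple. The Python sketch below (the benefit and cost figures are entirely hypothetical) follows the four steps above and reports the net benefit and the benefit-cost ratio:

# Hypothetical annual benefits and costs for a service program, in dollars.
benefits = {
    "increased participant earnings": 120_000,
    "reduced public assistance payments": 30_000,
}
costs = {
    "staff salaries": 80_000,
    "facilities and materials": 25_000,
}

total_benefits = sum(benefits.values())
total_costs = sum(costs.values())

print(f"Total benefits: ${total_benefits:,}")
print(f"Total costs: ${total_costs:,}")
print(f"Net benefit: ${total_benefits - total_costs:,}")
print(f"Benefit-cost ratio: {total_benefits / total_costs:.2f}")

A ratio greater than 1 means the estimated benefits exceed the costs; the result is only as credible as the dollar values assigned in the first two steps.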

Data Collection Methods

Each organization needs to determine what data collection method serves its needs best. After determining what performance you want to measure, select the easiest, most practical data collection method that will provide the information for your evaluation. One or more of the following may be appropriate for your organization. If you're unsure about which ones will work best for you, don't hesitate to ask a SCORE counselor for help.

1. Use of Technical Equipment: Data collected directly from a physical device or technical equipment. (Example: computer recordings)

2. Indirect Unobtrusive Measures: Indicators obtained from records kept for other purposes, or from physical traces left by normal activities. (Example: sales records of heart healthy foods sold in the cafeteria)

3. Direct Observation: Use by a trained observer of prespecified formats and codes. (Example: street-corner observations of number of drivers wearing seat belts)

4. Activity or Participation Log: Brief record completed on site at frequent intervals by participant or deliverer, using format designed by evaluator. (Examples: participant's sign-in log, daily record of food eaten)

5. Organizational Records: Data collection forms routinely kept by an organization for purposes other than for the evaluation. (Examples: patient medical records, time sheets of staff members who record amount of time spent on different activities)

6. Written Questionnaires: Written survey, usually with prestructured questions, to obtain data by mail or in person from providers or recipients. (Examples: number of different activities each participant engaged in during an intervention, providers' assessments of the amount of time spent on each activity)

7. Telephone or In-Person Interviews: Procedure in which interviewer asks questions directly to providers or recipients, using either prestructured or open-ended questions. (Example: interviews with participants in a work-training program concerning training activities and their relevance to job aspirations)

8. Case Studies: Collection of multiple types of data about a site or example entity, usually by an observer who is on site and uses informal observations and interviews, combined with available data and document review. (Example: case studies of states in their process of implementing a program of systemic change in mathematics education)

Much of the information above was taken from the 633-page Handbook of Practical Program Evaluation, by Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer. This text, as well as several other books on program evaluation, is available at the Lawson McGhee Library.

The material in this publication is based on work supported by the U.S. Small Business Administration under cooperative agreement SBAHG-04-S-0001. Any opinions, findings and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the U.S. Small Business Administration.
Updated May 2006
George Hannye