Education Project Evaluation - Design an Evaluation


Tool: Evaluation Design Checklist

An evaluation plan documents the details of your evaluation design: what information you need to make informed decisions and how you will go about gathering that information. Evaluation design is often an iterative process that prioritizes the evaluation questions based on the resources and time available. No one has all the funding or time they need to answer all the evaluation questions they have. To help you detail your evaluation plan (which you outlined in your California B-WET grant proposal), download this checklist.

Evaluation Design Details

Statement of Purpose

Develop a written statement of purpose for your evaluation by completing this sentence: "This evaluation will provide [which decision makers] with [what information] in order to [make which decisions] about [which project or issue]." This statement will keep you focused on decision-making, which is the hallmark of evaluation as opposed to research.

Evaluation Goals

Once you have a statement of purpose, develop a list of evaluation goals, usually in the form of issues or questions that you want the evaluation to answer. Do you want to know whether participants liked the program and found it useful and interesting? Do you have questions about audience learning, or about changes in attitudes or abilities/skills? Do you want to know your project's impact on participants, on the community, on the environment? Do you want to know how your project compares to other similar projects?

Based on the literature review for student-focused meaningful watershed experiences, your evaluation might provide information about changes in:

  • learning, not only in science but other subjects as well
  • critical thinking skills
  • environmental concerns and attitudes, especially about local environments
  • environmental identity/connectedness
  • issue-investigation and environmental-action skills
  • conservation actions/behaviors
  • desire to spend more time outdoors/in the environment
  • concerns and feelings of responsibility for the welfare of others or their community
  • self-efficacy and feelings of competence.

Based on the literature review for teacher-focused meaningful watershed experiences, your evaluation might provide information about changes in:

  • beliefs regarding teaching practice and their classroom teaching
  • competence and interests in science/environmental topics
  • understanding of environmental science concepts and issues, as well as abilities to conduct research-based field studies
  • environmental concerns and attitudes, especially about local environments
  • environmental identity/connectedness
  • conservation actions/behaviors.

Audience Definition/Description

Who can provide the answers to the questions you want answered and what do you know about them?

  • Who are they and where are they? How do you contact them?
  • How would you like them to participate?
  • How will the audience react to being evaluated?
  • Are there ethical issues or human subject research issues?
  • Are there other audience issues (age, education level, technology access, language, culture/ethnicity, etc.) which may influence data collection?
  • How will your audiences' characteristics influence your evaluation plan and methods, choice of evaluation tools (instruments) and/or the design of instruments?

Outcomes

To measure changes, it's important to state clearly what changes you want your audience to make. Many project managers and program developers struggle with defining outcomes and distinguishing between objectives, outputs and outcomes. Simply stated, outcomes are what your audience achieves as a result of your outputs.

Objectives are measurable statements of what learners should be able to achieve (know, feel, do) by the end of the learning experience. They're usually stated in the format: The [learner] will be able to do [what] by [when] and [how well]. The "what" can be cognitive, affective or psychomotor/skills (and for environmental or conservation education projects you should have objectives for all three learning domains). Objectives are written at the beginning of your project to help you plan your activities and outputs.

Activities/outputs are the instruction, products and/or services you deliver during the project. Your outputs are what you do to facilitate the achievement of objectives by your audience.

Outcomes are the results in terms of what your audience achieves after participating in your project. Outcomes can be short-term or long-term, intentional or unintentional.

Below are examples of objectives, outputs and outcomes.

Objectives: Examples (measurable)

  • Participants will express a sense of responsibility and appreciation [during interviews, in their journals, etc.].
  • Participants will show their personal investment in the local marine environment by...[complete with measurable action].
  • Teachers will show they value intertidal ecosystems by...[complete with measurable attitude or action].
  • Students will be able to conduct water quality measurements and explain the significance of their results.
  • Students will show they're more knowledgeable about their local natural environment and will describe several actions they can take as environmental stewards.

Objectives: Non-Examples (not measurable)

  • Foster a sense of responsibility and appreciation
  • Instill a sense of personal investment
  • Provide professional development about the value of intertidal systems
  • Teach students to conduct water quality measurements and the significance of their readings
  • Increase students' awareness of and stewardship for their local natural environment
  • Provide students & teachers with knowledge about indicators of stream and watershed health.

Outputs: Examples

  • We'll introduce students to life in the bay.
  • We'll provide an opportunity for participants to spend a day each month collecting and analyzing field samples.
  • We'll engage students in restoration projects.
  • We'll involve students in "action projects" designed to teach watershed stewardship.

Outcomes: Examples

Participants will

  • describe their watershed
  • show they can collect and analyze field samples
  • explain the results of the field samples
  • describe the importance of collecting field samples
  • explain what restoration is and why they're doing it
  • volunteer to participate in restoration activities during the year after the project.

If you write your objectives clearly at the beginning of your project, they will help guide you when defining project outcomes.

Methods

The next stage of your evaluation design is to decide:

  • how you'll collect data (evaluation methods and instruments)
  • from whom (some or all of your audience)
  • when (before, during and/or after your project).

The best evaluation methods balance your audience's abilities with your own needs and abilities. To determine the best method(s) for your project, start by talking to colleagues and reviewing the research literature. Find out who has been interested in the same outcomes and what methods they have found most useful. Use the Getting Help website links to find studies that are available to you via the Internet.

Common methods include:

  • Observations by people
  • Observations via media (audio, video, electronic/magnetic devices)
  • Telephone surveys (usually formal and highly structured)
  • In-person interviews (can be informal or structured)
  • Focus groups (usually informal, but structured)
  • Panel series (can be informal or structured)
  • Self-administered surveys or tests (via mail, newspaper/magazine, online)
  • Rubrics (a scoring guide used to quantify subjective assessments)
  • Concept maps (a cognitive map; a graphic representation of how someone views the relationship between concepts)

There are many other evaluation methods, which you can explore by visiting Wikipedia: Evaluation Methods.

No method is perfect; each has advantages and disadvantages. The rule of thumb is to choose the method or methods that answer your questions about your audience in the easiest, least expensive way. And, as per the definition of evaluation, you need to be systematic in the way you gather your data so that you're reporting results, not impressions or anecdotes.

Systematic Data Collection: Examples

  • using a rubric, observe a sample of students in the field
  • grade students' lab work and homework
  • have audience members complete a feedback form after seeing students' posters/presentations
  • provide criteria for students' results and post only those on the website that meet the criteria
  • interview teachers on the use of the teaching materials
  • using a rubric, observe teachers as they use the teaching materials

Systematic Data Collection: Non-Examples

  • observe students in the field
  • review students' lab work and homework
  • have students present a poster or give a presentation on what they learned
  • post students' results on a website
  • provide teachers with quality teaching materials


Developing A Survey/Interview Instrument

Tool: Tips Sheet for Developing an Instrument

Note: In evaluation lingo, a survey form or set of interview questions is called an instrument or measure.

Individual questions are called items.

Click here for tips for drafting and finalizing the instrument you'll be using to collect data.


To Sample or Not

Unless your target population is very small (say, 30 students who participated in a program), or you have a healthy budget and plenty of time, you will be studying a sample of your population instead of the entire population. To use a sample to represent the population, you must be systematic in how you choose that sample.

Sample size

Make your sample size as large as you can afford in terms of time and money. The larger the sample, the more accurately you can expect it to reflect what you would obtain by testing everyone.

Rules of Thumb

Population Size         Sample Size
50 or less              50 or less
500 or less             approx. 200
1,000 or less           approx. 275
10,000+                 approx. 350
U.S. population         2,000 to 4,000

From Fitz-Gibbon, C.T. & Morris, L.L. (1987). How To Design A Program Evaluation. Newbury Park, CA: Sage Publications.
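If it helps to encode this rule of thumb, here is a minimal sketch in Python. The function name is our own; the thresholds come from the table above, and since the table is silent on populations between 1,000 and 10,000, the sketch rounds that gap up to 350.

    # A minimal sketch of the rule-of-thumb table above (Fitz-Gibbon & Morris, 1987).
    # The 1,000-10,000 gap in the table is rounded up to 350 here; the function
    # name is a hypothetical choice, not from the source.
    def recommended_sample_size(population_size):
        if population_size <= 50:
            size = population_size          # very small populations: study everyone
        elif population_size <= 500:
            size = 200
        elif population_size <= 1000:
            size = 275
        else:
            size = 350                      # 10,000+ per the table
        return min(size, population_size)   # never sample more people than exist

    print(recommended_sample_size(800))     # -> 275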

Sampling

Whatever the sample size, you must take care to ensure that it adequately represents the range of actual opinions or abilities in the larger population. A sample is said to represent the population if members of the sample are selected randomly from the population. That is, every person in the population has an equal chance of being selected to complete your instrument. To select randomly you can:

  • number everyone in the population and use a random numbers table to select individuals
  • draw names from a hat, like a lottery.
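If your participant list lives in a file or spreadsheet export, a few lines of code can stand in for the random numbers table. Here is a minimal sketch in Python; the file name participants.txt and the sample size of 200 are placeholders for your own list and the rule-of-thumb table above.

    # A minimal sketch of simple random sampling from a participant list.
    # Assumes a hypothetical participants.txt with one name per line.
    import random

    with open("participants.txt") as f:
        population = [line.strip() for line in f if line.strip()]

    sample_size = min(200, len(population))          # see the rule-of-thumb table above
    sample = random.sample(population, sample_size)  # every person has an equal chance

    for name in sample:
        print(name)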

If the population is too large or the list too unwieldy for the random sampling methods above, you can use a systematic random sampling method. To select randomly this way:

  • determine how many interviews/surveys you need, then take your list of program participants (a telephone list, for example) and count off every nth person (every fifth or twenty-fifth, depending on the total number and the number you need) to select individuals; see the sketch below.
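The counting-off procedure translates directly into code. Below is one possible sketch in Python; the count-off interval is computed from the list length and the number you need, and the example telephone list is hypothetical.

    # A minimal sketch of systematic random sampling: random start, then
    # every kth participant from the list.
    import random

    def systematic_sample(population, needed):
        k = max(1, len(population) // needed)   # count off every kth person
        start = random.randrange(k)             # random starting point
        return population[start::k][:needed]

    # Hypothetical usage: select 25 participants from a 500-name telephone list.
    telephone_list = ["participant_%d" % i for i in range(1, 501)]
    print(systematic_sample(telephone_list, 25))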

When the population consists of a number of subgroups (or strata) that may differ in the characteristics that interest you (such as grade level, or number of years teaching, or place of residence), use stratified sampling:

  • identify your strata of interest, then draw a random sample from each stratum (see the sketch below).
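In code, stratified sampling is simply grouping followed by a simple random sample within each group. Here is a minimal sketch in Python, with hypothetical participants tagged by grade level; the names and strata are placeholders.

    # A minimal sketch of stratified sampling: group participants by stratum,
    # then draw a simple random sample from each group.
    import random
    from collections import defaultdict

    def stratified_sample(participants, per_stratum):
        strata = defaultdict(list)
        for name, stratum in participants:
            strata[stratum].append(name)
        sample = []
        for members in strata.values():
            sample.extend(random.sample(members, min(per_stratum, len(members))))
        return sample

    # Hypothetical usage: two participants from each grade level.
    participants = [("Ana", "5th"), ("Ben", "5th"), ("Cai", "5th"),
                    ("Dee", "6th"), ("Eli", "6th"), ("Fay", "6th")]
    print(stratified_sample(participants, 2))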

Whatever you do, be consistent. Consistent errors are easier to find and mitigate than inconsistent ones.

For more about samples and sampling, visit Columbia University's Center for New Media Teaching and Learning e-Lesson, Samples & Sampling.

EXAMPLE

Evaluation Plan

Click here for an example of a teacher program evaluation plan

Click here for an example of a student program evaluation plan

EXAMPLE

Instruments

If you're interested in examples of evaluation instruments, take a look at what's offered on these websites. They probably do not have instruments that fit your needs exactly, but they will give you ideas on how to word items (questions) and format your instruments.

Online Evaluation Resource Library (OERL): Instruments

Place-Based Education Evaluation Collaborative: Tools

EXAMPLE

Evaluation Items Database

Based on the literature review, we've pulled together a database of survey questions (items) selected from published scales that match California B-WET content and goals. These items are offered here to help you develop your own survey forms (instruments). This is not a survey. To download the items database, click here.

To learn more about how the questions in this database were developed and how to interpret responses to them, see the original sources (below).

References: Scales for measuring impact of environmental programs

Clayton, S. (2003). Environmental identity: A conceptual and an operational definition. In S. Clayton & S. Opotow (Eds.), Identity and the Natural Environment: The psychological significance of nature (pp. 45-65). Cambridge, MA: The MIT Press.

Cordano, M., Welcomer, S.A. & Scherer, R.F. (2003). An analysis of the predictive validity of the New Ecological Paradigm Scale. The Journal of Environmental Education, 34(3), 22-28.

Gotch, C. & Hall, T. (2004). Understanding nature-related behaviors among children through a Theory of Reasoned Action approach. Environmental Education Research, 10(2), 157-177.

Kals, E. & Ittner, H. (2003). Children's environmental identity: Indicators and behavioral impacts. In S. Clayton & S. Opotow (Eds.), Identity and the Natural Environment: The psychological significance of nature (pp. 135-157). Cambridge, MA: The MIT Press.

Leeming, F.C., Dwyer, W.O. & Bracken, B.A. (1995). Children's Environmental Attitude and Knowledge Scale: Construction and validation. The Journal of Environmental Education, 26(3), 22-31.

Manoli, C.C., Johnson, B. & Dunlap, R. (2005, April 11). Assessing children's views of the environment: Modifying the New Ecological Paradigm Scale for use with children. Paper presented at the Annual meeting of the American Educational Research Association (AERA), Montreal, Quebec.

Mayer, F.S. & Frantz, C.M. (2004). The connectedness to nature scale: A measure of individuals' feeling in community with nature. Journal of Environmental Psychology, 24, 503-515.

Morrone, M., Mancl, K. & Carr, K. (2001). Development of a metric to test group differences in ecological knowledge as one component of environmental literacy. The Journal of Environmental Education, 32(4), 33-42.

Thompson, S.C.G. & Barton, M.A. (1994). Ecocentric and anthropocentric attitudes toward the environment. Journal of Environmental Psychology, 14, 149-157.

Vaske, J.J. & Kobrin, K.C. (2001). Place attachment and environmentally responsible behavior. The Journal of Environmental Education, 32(4), 16-21.

Zimmermann, L.K. (1996). The development of an environmental values short form. The Journal of Environmental Education, 28(1), 32-37.