Education Project Evaluation
Plan an Evaluation
These are the California B-WET grant requirements for your project evaluation.
Project Evaluation Criteria and Technical Merit: Evaluation (10 points)
Tool: Evaluation Plan Assessment (Reviewers' Rubric)
Use this rubric to assess the evaluation section of your California B-WET proposal.
This rubric is based on the five questions in the evaluation section assessment from the B-WET RFP and is the one reviewers will use to score the evaluation section of your grant. To use the rubric, read through your proposal's evaluation section, then read each question below and follow the directions for scoring each on a scale from 3 to 0 and for tallying your score. A top score is 10 points.
Reviewers' Rubric (PDF)
Examples: Proposal Evaluation Section
These are provided as examples of a thoughtful and appropriate evaluation section for a B-WET grant proposal.
We will evaluate our project to improve its design and assess its effectiveness in reaching our education objectives. At the start of the grant we will conduct a front-end evaluation (likely via a survey) of program participants (high-school and college students) to determine their prior knowledge and attitudes about the topics they will study. At the end of the summer workshop and twice during the school year, all participants will repeat relevant portions of the front-end survey so we can track changes in knowledge, attitudes and possibly stewardship actions immediately after their experiences as well as over time (a time series evaluation design). At the end of the grant we will report the summative impact of this project on participants.
The broad questions we will attempt to answer with this evaluation are:
- Does participants’ understanding of the local watershed system improve, and do they feel better connected to the local system? Are the topics presented during the workshop relevant to participants? Do they inspire stewardship interests/actions?
- What impact does the program have on participants’ aspirations toward advanced schooling, interest in/pursuit of science careers, and stewardship actions?
- What aspects of this program (content, pedagogy, field experiences and/or mentoring) do participants report as having the greatest impact on them?
The target audience for this evaluation is teachers participating in a weeklong summer workshop. The main questions for the evaluation are:
- Which aspects of the workshop worked well? Which didn’t work? What changes would improve the workshop?
- What’s the impact of the workshop on the teachers who attend? Does it change what they know or how they teach the content, and in what ways?
- What impact does the workshop have on classroom practice? Do the teachers use what they learned in the workshop?
All teacher participants would complete a pre-workshop survey. At the end of each day, teachers would provide feedback on the day’s events and activities via a feedback form. On the last day of the workshop, teachers would complete a post-workshop survey (one similar to the pre-workshop survey).
The survey and feedback forms would use a mix of questions to collect qualitative and quantitative data. Data from the pre-workshop survey would be compared to responses on the post-workshop survey to track changes in participants over the week. Responses on the feedback forms will help us improve the delivery of the program.
Approximately 6 months after the workshop, teacher participants will be asked to complete an online or telephone survey to determine the workshop’s impacts on their teaching and their use of the materials we provided.
Our main evaluation question is: What’s the impact of the program on students? We want to know if, at the end of the school year, students know/understand more about watersheds than they did at the beginning of the year.
This evaluation will focus on program participants: middle school students. We plan to collect data using two methods: 1) student concept maps on the concept “watershed” and
2) a survey or interview of teachers about the program and any other watershed-related activities they may have used in the classroom, as well as background information on their classes, teaching experience and experience with concept maps.
A concept map is a knowledge representation tool developed by Joseph Novak and associates in the 1970s (Novak, 1998, p. 27). Concept maps “met a need for an evaluation tool that can show easily and precisely changes in students’ conceptual understanding” (Novak, 1998, p. 192). This evaluation method has been found to be reliable and valid by Novak and other researchers (Novak, 1998).
We plan to use the concept mapping technique as a pre-test in the winter, before students have experienced the program, and then use the same technique late in the school year to post-test students on the same concept. We will gain parents’ permission for students to participate in this evaluation. For students whose parents do not grant permission, we have an art activity for them to complete at test time. Each test session should take about an hour. During that time we plan to train students on how to create a concept map using a non-watershed concept, then we’ll ask them to construct a concept map on the concept “watershed.”
We intend to use Novak’s scoring criteria (Novak & Gowin, 1984, pp. 105-108) to derive a numerical score for the concept maps for each student. Then we’ll use statistical analyses to compare students’ pre-test and post-test scores (statistical tests to be determined). Our hypothesis is that the program will have a positive impact on students’ knowledge about the local watershed.
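The specific statistical test is left to be determined in the example above, but one common choice for pre/post comparisons on the same students is a paired t test. The sketch below, with invented scores purely for illustration, computes the mean gain and paired t statistic from hypothetical concept-map scores using only the Python standard library; the resulting t would then be compared against a t distribution with n - 1 degrees of freedom to obtain a p-value.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Mean gain and paired t statistic for matched pre/post scores.

    Compare the t statistic against a t distribution with
    len(pre) - 1 degrees of freedom to obtain a p-value.
    """
    diffs = [b - a for a, b in zip(pre, post)]  # per-student gains
    gain = mean(diffs)
    se = stdev(diffs) / math.sqrt(len(diffs))   # standard error of the mean gain
    return gain, gain / se

# Hypothetical concept-map scores for five students (not real data)
pre_scores = [3, 4, 2, 5, 3]
post_scores = [6, 6, 5, 9, 5]
gain, t = paired_t(pre_scores, post_scores)
print(f"mean gain = {gain:.1f}, t = {t:.2f}")
```

A statistics package (or a printed t table) supplies the p-value for the final step; the design choice here is simply that each student serves as their own control, which is why the differences, not the raw scores, are analyzed.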
Novak, J.D. (1998). Learning, Creating, and Using Knowledge: Concept maps as facilitative tools in schools and corporations. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Novak, J.D. & Gowin, D.B. (1984). Learning How to Learn. Cambridge: Cambridge University Press.
Of our project objectives, our evaluation will focus on assessing our success at increasing participants’ knowledge about the local watershed, engaging them in quality scientific investigations and increasing their stewardship actions toward the watershed. We will use a mixed-methods evaluation strategy (surveys and observations) to provide us with formative data so we can improve our program and summative data to show our program’s impact. The principal investigators will implement the evaluation with Dr. XYZ serving as our advisor on the development of assessment instruments, the analysis of data and reporting of results.
We will start on the first day of the program with a pre-program survey of participants’ knowledge of local watersheds and their actions regarding wetland conservation. We will conduct a nearly identical post-program survey during the final day of the program. During each field work session, two trained interns will use an observation instrument to assess the quality of participants’ field investigations. The observers will share their findings with the staff after each session so we can improve how we conduct the field work with participants. Finally, at the end of the program, we will ask participants to complete a survey about their intended actions regarding conservation of the local watershed.
Pre- and post-survey data will be compiled and compared to determine the impact of the program on participants. All the evaluation data collected will be summarized and included in our final report.
Non-Examples: Proposal Evaluation Section
These are provided as examples of a poorly planned or articulated evaluation section for a B-WET grant proposal.
We will evaluate our project by observing the students during the program and discussing our observations at regular staff meetings. Parents will also complete an assessment tool to provide feedback.
We will hire an outside evaluator to develop online surveys that both teachers and students will complete. Students will also keep journals and present final projects, which will encompass all they've learned during the program. Evaluation will be both formative and summative with the evaluator compiling and reporting on the results.
By documenting the number of people who attend the program each month we will assess its success. At the end of each program we will ask participants to tell us what they thought about the program and suggest ways to improve it. We will also ask them to tell us one thing they learned from that program. This information will help us improve the program so that it better meets our objectives and participants' needs.
Logic Model: A Planning Tool
Attached is a logic model template to help you plan your project and its evaluation. This is a good tool to help you think about your project as a whole. The first column includes your objectives, that is, how you believe your audience will be different after they participate in your project. The next three columns are where you list what you will provide/do in order to meet your objectives. The final two columns are your outcomes: how your audience will be different immediately after participating in your project and in the not-too-distant future.
What is a logic model?
A logic model is a systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan to do, and the changes or results you hope to achieve.
(see Online Resources: W.K. Kellogg Foundation, 2001, p. 1)
A logic model is a "picture" of how your project will work. Logic models link project outcomes (short-, intermediate- and long-term) with project activities, outputs and inputs (or resources) … Logic models provide a road map of your project, showing how it is expected to work, the logical order of activities and how the desired outcomes will be achieved. (NOAA Coastal Services Center, Project Design and Evaluation Workshop Handbook, p. 63)
Why use it?
The purpose of a logic model is to provide stakeholders with a road map describing the sequence of related events connecting the need for the planned program with the program's desired results. Mapping a project helps you visualize and understand how human and financial investments can contribute to achieving your intended program goals and can lead to program improvements.
(See Online Resources: W.K. Kellogg Foundation, 2001, p. 3)
- Helps you and colleagues/partners link all the components together on the same page
- Helps project/program designers differentiate between objectives and activities, between outputs and outcomes
- Helps managers/stakeholders see how all the components fit together
- Aids decision making about resources and activities
- Helps managers determine where resources will go to achieve impacts
- Sets up project/program so that it's easier to evaluate
- Helps individuals see how they contribute to the project/program
- Funders are starting to request them.
Why not use it?
- Something else to do
- Something new to learn
- Takes time and thought
- Have to think through project/program before jumping into doing activities
- Could make you more accountable for what you do