Key evaluation terms: See table on page 45.
Key Evaluation Terms
Baseline study – An analysis describing the situation prior to an intervention, against which progress can be assessed and comparisons made. A baseline study, for example, might assess
conditions in a specific neighborhood (e.g., poverty level or truancy) before the launch of a grantmaker-funded initiative aimed at improving those conditions.
Cluster evaluation – An evaluation that looks across a group of projects or grants to identify patterns, as well as factors that might contribute to variations in outcomes and results across the
sample.
Dashboard – An easy-to-read tool that allows board members and staff to review key information about the performance of the grantmaker and its grantees. Sometimes called a “balanced scorecard,” the dashboard flags key data that board and staff decide they want to track over time.
Emergent learning – Learning that happens in the course of an initiative or project, when goals and outcomes are not easily defined. Using “emergent” or “developmental” evaluation methods, a grantmaker can generate feedback and learning as work unfolds. New learning, in turn, can be used to refine or change strategies over time.
Formative evaluation – An evaluation that is carried out while a program is under way to provide timely, continuous feedback as work progresses. Sometimes called “real-time evaluation” or
“developmental evaluation.”
Indicator – A quantitative or qualitative variable that provides a simple and reliable means to measure results or to demonstrate changes connected to a specific intervention.
Inputs – The various components of a specific intervention, as measured in financial, human and material resources.
Knowledge management – The processes and strategies a grantmaker employs to create a culture of knowledge sharing among staff, grantees and colleague organizations, including everything from databases and intranets to Web sites and grantee and staff convenings.
Learning community – A group of grantmakers, grantees and/or other constituents who come together over time to share evaluation results and other learning and to identify pathways to better results. Sometimes called a “community of learners.”
Logic model – A conceptual picture or “roadmap” of how a program or intervention is intended to work, with program activities and strategies linked to specific outcomes and desired results.
Organizational learning – The process of asking and answering the questions grantmakers and nonprofits must address in order to improve their performance and achieve better results.
Outcomes – The broader changes or benefits resulting from a program, as measured against its goals (e.g., an X percent reduction in emergency room visits). Compare with “outputs,” below.
Outputs – The direct products of a program, usually measured in terms of actual work that was done (e.g., meetings held, reports published). Compare with “outcomes,” above.
Participatory evaluation – A form of evaluation that engages a range of stakeholders in the process of designing the evaluation and tracking results, based on the goal of ensuring that the evaluation is useful and relevant to all involved.
Social return on investment (SROI) – A measure that sets out to capture the economic value of social benefits created by an initiative.
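One common, simplified formulation of SROI expresses it as a ratio of the discounted (present) value of social benefits to the money invested. The sketch below illustrates that arithmetic only; the function names, the discount rate, and all dollar figures are hypothetical, not part of the definition above.

```python
# Illustrative sketch of a simplified SROI calculation, assuming the
# common formulation: SROI = present value of social benefits / investment.
# All names and figures here are hypothetical examples.

def present_value(benefit_flows, discount_rate):
    """Discount a list of annual benefit estimates to present value."""
    return sum(benefit / (1 + discount_rate) ** year
               for year, benefit in enumerate(benefit_flows, start=1))

def sroi_ratio(benefit_flows, investment, discount_rate=0.035):
    """Return estimated social value created per dollar invested."""
    return present_value(benefit_flows, discount_rate) / investment

# Hypothetical example: a $100,000 grant estimated to generate
# $40,000 per year of social value for three years.
ratio = sroi_ratio([40_000, 40_000, 40_000], 100_000)
print(round(ratio, 2))  # roughly 1.12, i.e., ~$1.12 of value per $1 invested
```

In practice, the hard part of SROI is not this division but assigning credible dollar values to the social benefits themselves.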
Summative evaluation – An evaluation that assesses the overall impact of a project after the fact, often for an external audience such as a grantmaker or group of grantmakers.
Theory of change – A systematic assessment of what needs to happen in order for a desired outcome to occur, including an organization’s hypothesis about how and why change happens, as
well as the potential role of an organization’s work in contributing to its vision of progress.
© Evaluation in Philanthropy, 2009, Grantmakers for Effective Organizations and the Council on Foundations