[Illustration: A fish labeled "Accurate Data" swimming in a sea labeled "Context."]

Outcomes

Tip: When evaluating a stable, well-developed project/program, review rather than develop proposed project/program outcomes and objectives. Determining project/program design, objectives, and outcomes is not the responsibility of the evaluator.

Rationale: Evaluators who become involved in the development, design, and rationale of a proposed project/program run the risk of compromising their external status and credibility. It is not the evaluator's project/program. The evaluator is the reviewer, validator, and critical friend who collects data and provides evidence for the purpose of making judgments about the project/program.

Tip: When evaluating a project/program that is in its early stages and still in flux, consider drawing on developmental evaluation principles.

Rationale: “In developmental evaluation the evaluator is part of the team that is working to conceptualize, design and test new approaches. The evaluator's primary role is to bring evaluative thinking into the process of development and intentional change -- to introduce 'reality testing' into the process. The evaluation helps to discern which directions hold promise and which ought to be abandoned and suggests what new experiments should be tried.”1

Tip: Work with project/program staff to develop a logic model that starts with the proposed outcomes and maps those outcomes to proposed activities.

Rationale: Traditionally, logic models start with inputs and activities and move to the outcomes. Millar et al.2 suggested that this tends to focus program staff on what is being done rather than on what needs to be done. McCawley3 provided program staff and evaluators with a series of questions that can be used to develop an “outcomes first” logic (sketched in the example after this list):
  1. What is the current situation that we intend to impact?
  2. What will it look like when we achieve the desired situation or outcome?
  3. What behaviors need to change for that outcome to be achieved?
  4. What knowledge or skills do people need before the behavior will change?
  5. What activities need to be performed to cause the necessary learning?
  6. What resources will be required to achieve the desired outcome?
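As a minimal illustration, the sketch below captures the six questions as a structure that is read from outcomes back to resources. The class name, field names, and example entries are hypothetical illustrations, not McCawley's terminology:

```python
from dataclasses import dataclass, field

# A hypothetical, minimal "outcomes first" logic model, working backward
# through the six questions: situation -> desired outcome -> behavior
# changes -> learning needs -> activities -> resources.
@dataclass
class OutcomesFirstLogicModel:
    current_situation: str                                     # Q1
    desired_outcome: str                                       # Q2
    behavior_changes: list[str] = field(default_factory=list)  # Q3
    learning_needs: list[str] = field(default_factory=list)    # Q4
    activities: list[str] = field(default_factory=list)        # Q5
    resources: list[str] = field(default_factory=list)         # Q6

# Invented entries for an illustrative mentoring program:
model = OutcomesFirstLogicModel(
    current_situation="Low persistence of first-year STEM majors",
    desired_outcome="Higher first-to-second-year STEM retention",
    behavior_changes=["Students seek help early", "Faculty refer students to support"],
    learning_needs=["Study strategies", "Awareness of campus resources"],
    activities=["Peer mentoring", "Faculty advising workshops"],
    resources=["Mentor stipends", "Workshop facilitators"],
)

# Because the structure starts from the outcome, each activity can be read
# as justified by it, rather than outcomes being retrofitted to activities.
for activity in model.activities:
    print(f"{activity} -> {model.desired_outcome}")
```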

Tip: Ask project/program staff to provide a rationale for why their proposed activities should lead to their proposed outcomes. Explore possible alternative hypotheses.

Rationale: Without an explicit research- or logic-based rationale for why the activities will lead to the proposed outcomes, there is no reason to assume those outcomes can be achieved with those activities, and little reason to conduct either the evaluation or the program. The principal investigator should be able to draw on research to indicate which strategies should be more important and/or more impactful, and why.

Tip: Review the proposed metrics and measures to ensure that they document not only changes in areas such as the number of underrepresented STEM students graduating or the number of women going on to graduate school in the sciences, but also any individual or institutional changes that could have contributed to those changes in numbers.

Rationale: For projects/programs that aim to increase the diversity of the STEM workforce, documenting changes in the number or percentage of students or others attaining goals is necessary but not sufficient. A major role of evaluations of these projects/programs is to explore indicators of the individual and/or institutional changes that may lie behind increases in numbers and/or percentages. These indicators will vary by project/program, but they will most likely fall into one or more of three categories: engagement, capacity, and continuity.4 While the evaluation needs to evaluate the end products, it also needs to document the process.
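One hypothetical way to keep both in view (all metric and indicator names below are invented for illustration) is to lay out the evaluation's metrics plan with headline counts and process indicators side by side, grouped by the three categories:

```python
# A hypothetical sketch of a metrics plan pairing headline counts with
# process indicators, grouped by the engagement / capacity / continuity
# trilogy. Every metric name here is an invented example.
metrics_plan = {
    "outcome_counts": [
        "Number of underrepresented STEM students graduating",
        "Number of women entering science graduate programs",
    ],
    "process_indicators": {
        "engagement": ["Participation in undergraduate research experiences"],
        "capacity": ["Completion rates in gateway math courses"],
        "continuity": ["Availability of bridge funding between degree stages"],
    },
}

# A quick check that every category has at least one documented indicator,
# so the evaluation covers process as well as end products.
for category, indicators in metrics_plan["process_indicators"].items():
    assert indicators, f"No indicators documented for {category}"
```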

Tip: Operationally define the outcomes prior to conducting the evaluation and check to see if participants and project/program staff agree with those definitions.

Rationale: Different definitions of outcomes, such as career success, can lead to different results and conclusions. When career success was defined by income, graduates of Historically Black Colleges and Universities (HBCUs) were not found to be more successful than Black graduates of predominantly White institutions. When career success was defined by the Duncan Socioeconomic Index, which gives credit not only for wages but also for working in high-prestige professions, HBCU graduates were found to be more successful.5
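As a toy illustration of how the choice of operational definition can reverse a conclusion, the sketch below uses invented numbers (not the cited study's data) to compare two groups under an income-only definition and a prestige-based one:

```python
# Invented figures for illustration only; they are not the cited study's data.
graduates = [
    # (group, annual_income_usd, occupational_prestige_score)
    ("HBCU", 62_000, 78),
    ("HBCU", 58_000, 74),
    ("PWI", 66_000, 60),
    ("PWI", 64_000, 58),
]

def mean(values):
    return sum(values) / len(values)

for group in ("HBCU", "PWI"):
    incomes = [inc for g, inc, _ in graduates if g == group]
    prestige = [p for g, _, p in graduates if g == group]
    print(group, "mean income:", mean(incomes), "mean prestige:", mean(prestige))

# Under the income-only definition, the PWI group looks more "successful";
# under the prestige-based definition, the HBCU group does. Same data,
# different operational definitions, opposite conclusions.
```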

Tip: Use interviews, focus groups, and/or open-ended questions to document possible positive and negative unintended project/program outcomes.

Rationale: It has long been recommended that evaluators document unintended as well as intended conditions and outcomes.6 The unexpected can have a strong influence on evaluation results. For example, increased anxiety and absenteeism were unintended outcomes for students with disabilities after high stakes graduation tests were introduced.7

1 Gamble, J. A. A. (2012). A developmental evaluation primer. Montreal: The J.W. McConnell Family Foundation, p. 18.
2 Millar, A., Simeone, R. S., & Carnevale, J. T. (2001). Logic models: A systems tool for performance management. Evaluation and Program Planning, 24, 73-81.
3 McCawley, P. (n.d.). The logic model for program planning and evaluation. University of Idaho.
4 Jolly, E., Campbell, P. B., & Perlman, L. K. (2004). Engagement, capacity and continuity: A trilogy for student success. Fairfield, CT: GE Foundation.
5 Price, G. N., Spriggs, W., & Swinton, O. H. (2011). The relative returns to graduating from a Historically Black College/University: Propensity score matching estimates from the National Survey of Black Americans. The Review of Black Political Economy, 38(2), 103-130. doi: 10.1007/s12114-011-9088-0
6 Stake, R. (1967). The countenance of educational evaluation. Teachers College Record, 68, 523-540.
7 Nelson, J. R. (2006). High stakes graduation exams: The intended and unintended consequences of Minnesota's Basic Standards Tests for students with disabilities (Synthesis Report 62). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.