- Was the evaluation design appropriate to the given situation and intended users' needs?
- Were the evaluation findings put to use as intended?
- Did the intended users find the evaluation findings and processes useful, insightful, and perhaps even transformative? In other words, did any learning happen from the evaluation findings and processes?
- Did stakeholders' program theory improve?
- Was the evaluation facilitated in an ethical and responsive manner?
At the opening plenary of AEA 2010, Eleanor Chelimsky talked about how arguments for evaluation quality differ across three types of evaluation: evaluation for accountability, for improvement, and for knowledge-building. For example, in accountability-driven evaluation (such as evaluation commissioned through Requests for Proposals), we often see quality criteria described as "rigorous" (in RFPs, rigor frequently means a randomized controlled trial or quasi-experimental design) and "objective" (i.e., conducted by an external evaluator). In improvement-oriented evaluation, rigor and objectivity in this sense may not be the quality criteria that matter most.
What happens, then, when an evaluation serves competing purposes and interest groups? As evaluators, how do we balance and negotiate quality criteria to serve interest groups with different definitions of evaluation quality (while also upholding our own standards of practice as professional evaluators)? Is quality negotiable at all? Can we get different interest groups to understand one another?