Research has traditionally defined rigor as obtaining an unbiased estimate of impact, suggesting the need for experimental or quasi-experimental methods and objective, quantitative measures in order to obtain trustworthy results.
I’ve spent the past few months as a member of Colorado’s Equitable Evaluation Collaboratory, which aims to examine the role evaluation plays in supporting or inhibiting progress toward equity and to identify opportunities to integrate equitable evaluation principles into practice. In particular, I’ve reflected on how the research tradition has shaped evaluation’s working orthodoxies, including the notions that “credible evidence comes from quantitative data and experimental research” and that “evaluators are objective.”
On the surface, these statements don’t appear particularly problematic, but dig a little deeper and we begin to see how value judgments are an integral part of how we practice evaluation. The types of projects we take on, the questions we ask, the frameworks we use, the types of data we collect, and the ways we interpret results are all deeply rooted in what we value. As an evaluator focused on use, I aim to make these practice decisions in partnership with my clients; however, suggesting that I, or any evaluator, do not play an active role in making these decisions discounts our inherent position of power.
Now that I’ve tuned into the orthodoxies, I see them everywhere, often dominating the conversation. In a meeting last week, a decision-maker was describing the path forward for making a controversial policy decision. He wanted to remove subjectivity and values from the conversation by developing guidelines rooted in “evidence-based practice” and turned to me to present the “facts.”
As a proponent of data-driven decision making, I value the role of evidence; however, there is a lot to unpack behind what we have declared – through traditional notions of rigor – “works” to improve health and social outcomes. Looking retrospectively at the evidence, and thinking prospectively about generating new knowledge, it’s time to ask ourselves some hard questions, including:
- What interventions do we choose to study? Who developed them? Why did they develop them?
- What have we (as a society) chosen not to investigate?
- What populations have we “tested” our interventions on? Have we looked for potentially differential impacts?
- What outcomes do we examine? Who identified these impacts to be important?
- Who reported the outcomes? Whose perspective do we value?
- What time period do we examine? Is that time period meaningful to the target population?
- Do we look for potentially unintended consequences?
As we begin to unpack the notion of “what works,” we begin to see the decision points, the values, and the inherent power and privilege in what it means to be an evaluator. It is time we owned the fact that what we choose to study and how we choose to measure success are not objective; rather, they are inherently subjective. And importantly, our choices communicate values.
So how do we begin to embrace our role? As a step forward, I have started including a discussion of values, both mine and my clients’, at the beginning of a project and clarifying how those values will influence the evaluation scope and process. Explicitly naming the importance of equity during the evaluative process has helped keep the goals of social change and social justice front and center. Naming values helps stakeholders acknowledge their power and provides a lens through which to make decisions.
Equitable evaluation is an expedition into the unknown, requiring a transformation in how we conceptualize our role as evaluator. Having taken my initial steps into the Upside Down, I look forward to the many unknowns.
In what way do you see values showing up in your evaluative work?