
Making Evaluations Actionable

Updated: Oct 9, 2021

In the process of designing and implementing evaluation plans, the team at ACED is dedicated to working together to produce program and project improvements through direct action. When data are reported and then filed away, it is difficult to build a culture invested in evaluation. When data are engaged with regularly and clearly linked to important decisions, the culture can embrace the purpose of evaluation, and the project generates more buy-in across all stakeholder groups. While the goals of the project or program matter, the goals of the evaluation plan also include engaging everyone involved in the project, and an actionable approach makes that engagement possible.

Round and Round We Go!

Continuous improvement through actions inspired by data is at the heart of what we do. Taking action can be scary, however – many practitioners have seen “extinction by instinct” events in which rash actions led to disaster. Unfortunately, we often over-correct from this fear into “paralysis by analysis” scenarios in which no data seem adequate for decision-making. Some of this stress can be reduced by using a series of closed-loop cycles, such as those framed by the Plan, Do, Check, Act (PDCA) model.


The PDCA model asks practitioners to connect their expectations of what should happen as the result of a particular project or event with the information collected about its success and overall result. This cycle, widely embraced in many business contexts, can help guide the design of evaluations and educational projects.

As a framework for action, the history of PDCA provides insight into its usefulness. There are many iterations of the PDCA concept, including Shewhart’s original Plan, Do, Study, Act (PDSA) model, which recognizes the punitive connotation the word “check” can carry. Before the debate between “Check” and “Study,” Shewhart, a physicist, envisioned discovery as a circle rather than a straight line. Conceptualizing continuous improvement as a cyclical model provides a visual connection to the lived experience of many practitioners, who refine techniques such as pedagogies many times in pursuit of the best possible approach. Over time, scholars and practitioners have iterated on the design of the cycle to create the familiar frameworks we see today.

It is interesting to consider the traditional objectivity associated with the origins of the PDSA cycle, grounded in the statistical control of manufacturing efficiency. That structure appears to be at odds with the relativity inherent in evaluation activities, where interpretations of data, and the actions taken from them, depend on context and perspective. The importance of the cycle is that there is no final, finished state, only a series of levels of success in any project implementation. In this way, the cycle is never “complete” when an outcome or objective is “achieved”; there is always a chance to revisit the strategy in the interest of greater success. Some evaluators might describe this process as “starting with the end in mind,” but the most important mental frame for approaching evaluation is that a project has no singular end, only many opportunities for improvement.



