Debunking Common Myths in Evaluation

Evaluation is often seen as a complex and intimidating process, but at its core, it’s about understanding effectiveness, learning from our work and making better decisions. Over the years, I’ve encountered a number of misconceptions that prevent people from fully embracing evaluation in their programs and organizations. Today, I want to take a moment to bust some of these myths and help reframe how we think about evaluation.
Myth #1: “I’m Not an Evaluator, So I Can’t Do Evaluation”
Evaluation can be a daunting topic for many individuals and organizations. It is often treated as an aspirational activity, something to pursue only when there is sufficient funding, or as a necessity prioritized only when a funder requires it. And while formal evaluations should be led by individuals with technical expertise, organizations can start to build their evaluation practice in house by practicing curiosity.
Evaluation is about asking why and how things are happening. Why do you think this happened? Why is this program working? Why is it not? Why are people enrolling or not enrolling in our program? How do people experience our program? How can we make it better? These questions help us uncover key insights that can drive improvement, all by leveraging internal skill sets including adaptability, problem solving and likely a bit of project management.
Myth #2: “Evaluations Only Look at Outcomes”
Many people associate evaluation with tracking end results—such as how many participants secured employment after a training program. While outcomes are important, they are only one piece of the puzzle.
A more comprehensive approach considers multiple dimensions, including:
- Relevance: Is the intervention meaningful and responsive to the needs of those it serves?
- Satisfaction: How are participants experiencing the program?
- Effectiveness: Are we achieving what we set out to do?
- Efficiency: Are we using our resources wisely and minimizing unnecessary barriers?
- Value Add: What difference is the program making for individuals, families and communities?
- Sustainability: Can the program continue beyond its initial funding cycle?
A narrow focus on outcomes can limit our ability to understand and improve programs. Thinking more broadly allows us to capture the full story and make better decisions about what’s working and what needs adjustment.
Myth #3: “Evaluation Happens at the End”
One of my biggest frustrations in this field is when evaluation is treated as an afterthought—something to be done only when a funder requests a final report. Too often, organizations bring evaluators in at the last minute, expecting them to measure outcomes without having built the necessary framework for meaningful data collection.
Instead, evaluation should be embedded from the start. At Blueprint, we use the term “evidence generation” rather than “evaluation” because it shifts the mindset from merely assessing impact to actively building knowledge throughout the program life cycle. By treating evaluation as an ongoing process, we can refine our approaches, make real-time improvements and ensure that we are learning at every stage—not just at the end.
Myth #4: “Impact Evaluations Are Always the Best Fit”
There is a growing emphasis on impact evaluations—especially methods like randomized controlled trials—to prove effectiveness. While impact evaluation is the most credible approach for measuring a program’s effects, it should only be implemented once efforts have been made to strengthen design and delivery and to ensure the program is operating as intended.
Impact evaluations require significant time and resources, so it’s crucial to assess your project needs, your working conditions, and whether an impact evaluation is appropriate. In my decade of experience in the field, I have noticed an over-reliance on impact evaluations, which can lead to inflated or misleading findings and wasted resources. Rather than defaulting to complex and costly evaluation models, we should prioritize selecting the right evaluation approach based on the project’s stage and the specific questions we need to answer.
Final Thoughts
Evaluation doesn’t have to be intimidating or overly complicated. By shifting our perspective and seeing evaluation as an ongoing learning process rather than a burdensome requirement, we can make better decisions, create more effective programs and ultimately drive greater impact.
Whether you are a program manager, service provider or policy professional, you already have many of the skills needed to integrate evaluation into your work. Start small, stay curious and remember to ask “why” at every step. The more we embrace evaluation as a tool for learning and growth, the more powerful it becomes.
For further insights on skills development and evaluation, check out the Future Skills Centre’s latest webinar recording and this State of Skills Report.
This blog is based on a Peer Learning Group session hosted by Research Impact Canada and Future Skills Centre. It was refined using AI-assisted transcription and drafting tools. The final version has been reviewed for accuracy. To dive deeper into these insights, check out the recording of the full session.
The views, thoughts and opinions expressed here are the author’s own and do not necessarily reflect the viewpoint, official policy or position of the Future Skills Centre or any of its staff members or consortium partners.