- Posted on 7 Apr 2026
- 6-minute read
Every organisation begins with a purpose. A government initiative is launched to improve community wellbeing. A not-for-profit forms to support vulnerable people or champion a cause. A program is put in place to solve a problem that genuinely matters.
Yet over time, something subtle can happen.
Teams get busy. Funding priorities shift. Reporting requirements evolve. Staff change. New initiatives layer on top of old ones, and slowly, almost invisibly, organisations can find themselves working hard to deliver activities well while losing their grip on the outcomes they originally set out to achieve.
This phenomenon is sometimes called purpose drift, and it’s one of the biggest challenges facing mission-driven organisations today. The question isn’t whether you’re working hard; of course you are. The question is whether the work being delivered is achieving the impact it was designed for.
This is where outcomes-focused evaluation has a lot to offer.
The hidden gap between activity and impact
For decades, accountability frameworks in government and social sectors have focused heavily on outputs:
- How many occasions of service did you deliver?
- How many workers attended your training?
- How many times was your resource downloaded or viewed?
These measures matter. They show that work is happening, but they don’t tell us whether anything meaningful has changed:
- Did those services improve anyone’s wellbeing?
- Did those workers learn anything they didn’t already know, and did that translate into changes in practice?
- Did that intervention solve the problem it was designed to address?
- Did the organisation move closer to its mission?
The shift towards outcomes-focused evaluation reflects a deeper change in how organisations think about success. Instead of asking only, “What did we do?” or even “How well did we do it?”, outcomes-focused evaluation pushes us to ask the next question: “Did our actions actually make a difference?”
For leaders and practitioners, this shift requires more than new reporting templates. It requires a different way of thinking.
Evaluative thinking: A habit, not a report
Evaluative thinking is often misunderstood as something that happens at the end of a project: an evaluation report produced once funding has been spent. In reality, evaluative thinking is a continuous discipline. It is the habit of asking critical questions throughout the life of a program:
- What need are we really addressing here?
- What evidence supports this approach?
- What assumptions are we making and how could we test them?
- Are we seeing the early signs of progress that show us things are on the right track?
- If not, what needs to change?
Organisations that embed evaluative thinking tend to share several characteristics:
- They clarify purpose from the start. Teams invest time defining what success truly looks like before a program begins.
- They build reflection into delivery. Structured pauses allow teams to assess progress, learn and adapt.
- They connect decisions to evidence. Data and feedback are not collected for reporting alone; they shape how programs evolve.
- They create psychological safety for learning. Honest reflection is encouraged, not avoided.
Clarity at the beginning changes everything
One of the most powerful moments to apply evaluative thinking is before a program even begins. At this stage, organisations have an opportunity to ask a simple but transformative question: What exactly do we believe will change and why?
This is where tools like a theory of change become invaluable.
Your theory of change articulates “if… then…” thinking about how change can come about. Your theory of action then sits beneath this, explaining your mental model of how your actions are going to initiate, enable, or otherwise energise this change process.
Far from bureaucratic paperwork, this clarity becomes a strategic asset. It ensures programs are built on intentional design rather than inherited habits. When teams share this clarity from the start, evaluation becomes far easier — and far more meaningful.
Starting with this kind of clarity is an enormous asset. The next challenge, of course, is maintaining that clarity over time.
Why this matters more than ever
Today’s policy and social landscape is complex. Community needs are evolving quickly, and public trust in institutions cannot be taken for granted.
In this environment, organisations that succeed are those that can:
- clearly articulate their purpose
- rigorously assess their impact
- adapt using evidence and learning.
Evaluative thinking is not just a technical skill. It is a leadership mindset — one that keeps organisations aligned with what matters most.
Strengthen your organisation’s evaluation capability
At the UTS Institute for Public Policy and Governance (IPPG), these principles underpin the Outcomes-Focused Program Evaluation course, which helps organisations move beyond measuring activity to understanding real impact, and to embed evaluative thinking in everyday practice rather than only in formal evaluation exercises.
If your organisation is ready to move beyond outputs and build stronger outcomes-focused evaluation practice, explore our short course, which is designed for practitioners across government, not-for-profit and industry sectors.
About Duncan Rintoul
Duncan Rintoul is an Industry Fellow at UTS IPPG, specialising in evaluation training and capacity building. Through more than 25 years of consulting experience across sectors including education, justice, human services, health and urban affairs, he has worked extensively with government agencies and NGOs to build their evidence base and use it for good.
At IPPG, Duncan delivers specialist training in evaluation, equipping professionals with the skills to design, implement and utilise evaluations for evidence-based decision-making. He also teaches executive education programs for the Australasian Evaluation Society (AES) and the Australian Market and Social Research Society.
From 2015 to 2019, Duncan led evaluation capacity-building initiatives within the NSW Department of Education, helping to embed a culture of learning and continuous improvement. He is also an external Director of Mission Australia, where he chairs the Board’s Impact Committee.
This article was developed by the Institute for Public Policy and Governance at the University of Technology Sydney, which provides evidence-based advisory services, research and professional development in social planning and community development.