The G Blog

Escaping the trap of ‘impact’ measurement

Toby Lowe, May 2025

It’s easy to identify where the impulse for “impact” measurement comes from – we all want to know the difference we’re making in the world. Understanding the difference we make helps with motivation at work, and enables us to reflect on how we do what we do.

And we know this impulse is particularly acute for those who are involved in social change work, but who are abstracted from the day-to-day work – people like senior managers and funders. When you’re in the day-to-day, you witness the difference you’re making. You see it in the conversations you have with the people you work with, and you feel it in the changes in the contexts you’re part of. You understand the change because you’re living that change.

But that’s not possible if you’re not in those contexts day to day. That’s when those who are abstracted from the work ask for impact measures – “objective” proof that they can use to satisfy themselves that their coordinating or funding contributions are also making a difference. And that’s how the natural impulse to understand our impact in the world becomes a trap which damages and undermines the processes and relationships which actually create impact. 

How does abstracted, “objective” impact measurement undermine the creation of impact? Let’s explore the evidence.

The impossibility of measuring “your” impact

Firstly, in complex systems it is impossible to rigorously identify “your” impact. To understand this, let us look at the research on the outcome of obesity.

Figure 1 Obesity systems map, from Butland, B., Jebb, S.A., Kopelman, P., McPherson, K.E., Thomas, S., Mardell, J., & Parry, V. (2007). Foresight. Tackling obesities: future choices. Project report

Here’s a systems map of the outcome of obesity, produced by the UK Government Office for Science. This research identified 108 different factors which contribute to the outcome of obesity, and the map shows the relationships between those factors. On this diagram, you can see factors summarised into areas such as “Food production and supply”, “Early life experiences”, “Education” and “Media”.

Let’s say that you’re one of the people operating in the bottom right corner of this system – you’re providing “healthcare and treatment options” to address obesity. Let’s say you’re delivering weight loss programmes in neighbourhoods. How would you distinguish the impact of your weight loss programme from the influence of all the other factors in this system?

Short answer – you can’t. Someone on your programme sees a film that changes their perspective on the meals they cook. Someone on your programme changes jobs, to a place with a canteen where they only serve healthy options. Someone is made redundant, so they can’t afford to buy healthy food. What was the impact of your programme in these situations? 

In complex systems like this it is impossible (not difficult, impossible) to produce a reliable counterfactual (to say what would have happened in the absence of your intervention), because complex systems like this produce outcomes through relationships which are unpredictable, dynamic and emergent. This means that if you run the ‘same’ complex system twice, it will produce two entirely different results. (The point is, it’s never the same system, because minute changes that may well be invisible to you will lead to disproportionate changes in the results the system creates.)

Without a reliable counterfactual, it is impossible (with any degree of robustness) to say “this was my contribution” to the results the system creates. When you measure impact in a complex system, you’re not measuring “your” impact, you’re measuring the result of the interactions of hundreds of variables, some of which will be completely invisible to you. (And because significant variables are invisible to you, they can’t be controlled – so methods such as Randomised Controlled Trials aren’t reliable in complex systems.)

Performance management corrupts data

The second problem specifically concerns the use of impact data for performance management – when funders or senior leaders say to those undertaking the work of social change – “you can only have a job/have these resources if you successfully ‘demonstrate your impact’ to me.” 

Using impact data for performance management in this way corrupts the data that is produced. We have known this since Donald Campbell published his famous piece “Assessing the impact of planned social change” in the journal Evaluation and Program Planning in 1979. In it, he formulated what has come to be known as Campbell’s Law:

“The more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

It is only through an astonishing feat of intentional blindness that we have ignored the evidence surrounding Campbell’s Law for this long. Every use of Social Impact Bonds, every instance of Payment by Results, every time someone uses Results Based Management – all are subject to the corruption of data described by Campbell’s Law. So many studies since 1979 have reinforced the truth of Campbell’s Law that it is helpful to turn to a 2018 meta-study which summarises a range of them: Franco-Santos and Otley’s “Reviewing and Theorizing the Unintended Consequences of Performance Management Systems”, published in the International Journal of Management Reviews. This was a systematic review of the effects of target-based performance management systems. It found that when people use targets to manage the performance of workers, over 80% of studies find evidence of gaming (the deliberate manipulation of data to make it look good) and 74% find evidence of people deliberately lying.

This is the distortion and corruption that Campbell’s Law is talking about. If you ask people to “demonstrate their impact” in return for their jobs, or the resources to do their work, then, my word, are you going to get good-looking data about impact. This was exactly the case for the Big Lottery Fund in England, which in 2014/5 undertook a (sadly unpublished) review of 10 years of its large-scale grantmaking programme, Reaching Communities. The Reaching Communities Fund was purposefully framed around delivering outcomes – applicants were asked to identify the outcomes (the impact) that they would create as a key part of the application process, and successful applicants were required to monitor their progress against the delivery of this impact.

The review found that they had given over £1 billion of grants in the first 10 years of the programme. And not a single one of these grants was unsuccessful in delivering its expected outcomes. What a truly astonishing level of prescience from all of those applicants! Every single one of those applicants was able to predict and mitigate all of the unexpected twists and turns of life. A global financial crash occurred, pushing millions in England into poverty – and still everyone delivered expected impact. Public sector austerity devastated voluntary organisations’ funding arrangements, and still they all delivered expected impact.

This is the kind of data corruption we get when we ask organisations to demonstrate their impact. And – to be clear – it’s not the fault of those reporting this impact data. They’re only playing the ridiculous game that has been created by others. This kind of gaming is entirely the responsibility of those who choose to ask others to “demonstrate their impact” in return for funding or job security.

You have a choice

Because it is a choice to fund or manage in this way. If you do play one of the roles that is abstracted from the work of social change, and you want to manage or fund in a way which genuinely supports the processes and relationships which create impact, you can choose to do it differently. 

An alternative approach is called Human Learning Systems. It begins by asking the question: “how is impact in the world created?” and builds an approach to managing social change activity from the answer to that question.

Human Learning Systems embraces the real-world complexity of impact. It recognises that genuine impact in people’s lives is created by all the factors that exist in and around a person’s life – that impact is something which is co-created with a person, not delivered to them by a project or a programme. This means that creating impact must start with an exploration: what in your life can create the impact that is meaningful to you? What purpose matters to you? What are the unique set of actors and factors that make up the complex system in your life which would enable you to achieve that purpose?

Let’s say that a person is in chronic pain, and wants to change that. The job of someone who wants to help deal with that pain is to build a shared understanding of the unique set of actors and factors that is creating the experience of pain in their life, and then co-design and enact experiments/explorations which address those unique factors. This is an approach in which every act of change is an act of action research which asks the question: “What works, for you, in your context?”

This process is described as a Learning Cycle, and it offers a process map of work for anyone involved in change activity.

Figure 2, A Learning Cycle

In this respect, we see how the impulse to understand impact can be usefully harnessed. The question “what impact is my work having?” is useful to anyone when framed within a Learning Cycle. In fact, asking this question every day is a fundamental aspect of the Human Learning Systems approach, because every single act is an act of action research.

But the crucial point is that those doing the work are asking this question for themselves – they’re not seeking to gather impact data to demonstrate their effectiveness to others, they’re seeking to understand the impact of their actions because it is a necessary part of their practice. 

This approach means that the nature of impact measurement changes. Impact measures become bespoke to the specific context in which the Learning Cycle takes place. We measure what matters to this person. And if that’s different to what matters to the next person, then it’s our duty to them to measure what matters to them. That’s how we escape the impact trap, and how we make good on the original promise of impact measurement – to focus on making things better from the perspective of citizens, not from the perspective of those seeking to help. Every time we use a standardised impact measure, we are not measuring what matters to citizens, we’re measuring what we think is important.

The crucial point here is to break the link between data and performance management. The evidence is 100% clear – if you use your data for performance management purposes, it is likely to be corrupted, and will be useless for genuinely improving performance. By taking a Human Learning Systems approach we can enable data to do its job – to help people to get better at creating genuine impact.

How does this help funders and leaders?

Understanding how genuine impact is created in the world enables senior leaders and funders to organise “impact” work in a way which supports and enables the processes and relationships which create it. This is the choice that funders/senior leaders get to make – do you want to fund/manage in a way which supports impact?

If you do, then an approach that works (here’s the most current evidence) is Learning as a Management Strategy: organising for continuous, collaborative learning, rather than for control or “delivery” of impact. 

If you’re a funder, this means funding for continuous, collaborative learning. If, as a funder, you’re allocating scarce resources, it means allocating resources to those who are able to learn collaboratively in complex systems, and who share a purpose with you. This means, for example, funding those who can undertake Learning Cycles rigorously and effectively. And it means supporting capacity building so that organisations understand and can practise this kind of continuous learning approach.

An example of this approach can be seen in the Systems Innovation and Experimentation Fund, created by Climate KIC and the Swedish International Development Agency. They funded explicitly using a Learning Cycle framework.

They did this by asking applicants to identify the systems they wanted to experiment with – what the purpose of those systems was, who the actors were that would enable the achievement of that purpose, and what the factors were that achieved or frustrated that purpose. And crucially, they asked what shared knowledge was created when they brought those actors together to reflect on those factors. In short, their application process was testing whether the applicants could be effective system stewards.

They selected a longlist from those applicants, and gave them development grants and support to help them develop their action research proposals. This was resource and support to enable further system convening – what experiments/explorations do those actors want to create in order to create impact in those systems? They then funded those actors to “deliver” these as pieces of action research, not as programmes to be implemented.

Concluding thoughts

If you’re a funder or leader who wants to support the processes and relationships which genuinely create impact, you can do two simple things:

1. Stop asking teams/organisations/programmes to “demonstrate their impact”

2. Fund to enable continuous, collaborative experimentation and learning

If you’re curious about what making this kind of switch might look like for your organisation, there’s lots more information, including over 80 case studies, on the Human Learning Systems website. You can also join the Human Learning Systems LinkedIn group, or drop me a message there.