Realist causal evaluation – securing the utopian vision 

Eleri Burnhill, King’s College London

At the Brilliant Club Conference this year, Anne-Marie Canning, our Director of Social Mobility and Student Success, delivered the morning’s keynote speech. She was asked to address the question: ‘where will widening participation (WP) be in 10 years’ time?’ Anne-Marie presented two possibilities: a utopian and a dystopian vision of the sector in 2028.

The dystopian vision Anne-Marie presented is something that often worries me: a sector whose budgets and funding are cut because, in 10 years’ time, we are still not consistently basing our work on robust evidence of what works in access and student success. The utopian vision was a widening participation sector that innovates, takes full advantage of the technologies available in 2028 and has a firm understanding of what works, with the Evidence and Impact Exchange in full swing.

In my previous blog post, back in May, I proposed that the gold standard for evaluating access initiatives should be realist causal evaluation. When similar suggestions have been made in other sectors, however, they have met with criticism. Bonell et al. [i] and others argue that causal evaluation methods, such as randomised controlled trials (RCTs), cannot establish ‘social causation’ because they fail to identify the social and contextual mechanisms on which successful outcomes depend. This criticism rests on a fairly narrow and inflexible notion of what RCTs are and how they can be designed, and it also sells short the realist approach and the value it can bring to other methodologies.

The What Works team strives to find practical evaluation approaches that enable practitioners and evaluators to understand the impact of their work, in a world and a sector where the demand for evidence is ever increasing. The Office for Students recently imposed additional registration requirements on Oxford and Cambridge to evaluate the impact of financial support for students [ii] [iii]. Signs are that the OfS will continue to push for better evaluation through its regulatory powers, so universities should be paying attention.

Soon we’ll be rolling out our new Monitoring and Evaluation Framework, covering all widening participation, student success and academic support initiatives undertaken by our division. Despite concerns about the incompatibility of realist and causal approaches, our framework combines the strongest elements of both into a balanced, ambitious approach to investigating not only what works, but how, when and for whom. We incorporate both the development of programme theories at a micro level, which helps us identify ‘small steps’ changes [iv], and more causal macro-level approaches such as RCTs and quasi-experimental methods. Unlike Pawson and Tilley, we are not ‘panacea phobic’ [v], given our strong focus on seeking solutions to the inequalities prevalent in our education system; but neither do we believe that a What Works approach is, or should be, blind to the contextual factors that influence whether an intervention succeeds.

Encouraging our colleagues to develop Theories of Change [vi] has enabled them to take a step back and think about the specific problems they are working to address, or the behaviours they are trying to change, and how the interventions they have developed (or are developing) contribute to those goals. It also enables practitioners to identify what needs to be in place to generate the desired outcome: identifying the mechanism within the CMO (context + mechanism = outcome) model [vii]. Incorporating this level of detail into our evaluative approach allows us to examine the social and context-specific mechanisms of change on a correlational basis. However, we must also be able to show whether our interventions are causing the changes forecast in our programme theory models, which highlights the importance of a combined approach.

Considering the risk of Anne-Marie’s dystopian 2028, we must progress further and faster as a sector in evidencing what works, not only to secure our funding but, more importantly, to ensure that we only engage students in programmes we know to be effective in meeting our objectives, however micro or macro they may be. To do this we must use the right form of evaluation for the questions we are seeking to answer, and be responsive to the needs of policy-makers. Pragmatic evaluators need to combine valuable elements of different (and, some might argue, incompatible) methodological approaches, both to evaluate our work robustly and to provide evidence to support the Widening Participation, Student Success and What Works agendas.

Click here to join our mailing list.
Follow us on Twitter: @KCLWhatWorks


[i] Bonell, C., Fletcher, A., Morton, M., Lorenc, T., & Moore, L. (2012). Realist randomised controlled trials: a new approach to evaluating complex public health interventions. Social Science & Medicine, 75(12), 2299-2306.

[ii] OfS (2018) University of Oxford – Specific ongoing conditions of registration [online] Available at:

[iii] OfS (2018) University of Cambridge – Specific ongoing conditions of registration [online] Available at:

[iv] Harrison, N., & Waller, R. (2017). Evaluating outreach activities: overcoming challenges through a realist ‘small steps’ approach. Perspectives: Policy and Practice in Higher Education, 21(2-3), 81-87.

[v] Pawson, R. & Tilley, N. (2004). Realist Evaluation [online] Available at:

[vi] Rogers, P (2014). Theory of Change: Methodological Briefs – Impact Evaluation No. 2 [online] Available at:

[vii] Pawson, R. and Tilley, N. (1997). Realistic Evaluation. London: Sage.
