Reflections from the RCT practitioner workshop

By Eleri Burnhill, King’s College London

The need to provide evidence of the impact that access interventions are having on the outcomes of disadvantaged groups is increasingly pressing. With the publication of the proposed standards of evaluation practice in the summer of 2017, commissioned by the Department for Education (DfE) and the Office for Fair Access (OFFA)[i], came a definitive shift in expectations, requiring the sector to prove the impact of its work with confidence.

The Level 3 standard of evaluation specifies the need to confidently establish that interventions, specifically multi-activity programmes and summer schools, are having a causal impact on participants’ outcomes, through experimental and quasi-experimental designs. However, despite OFFA equipping practitioners with a toolkit of possible methodologies, it appears that these methodologies aren’t being used. This is not because practitioners don’t want to use them, but simply because they’re not sure how.

To address this issue, we ran a workshop aimed at WP and evaluation practitioners who wanted to design their own causal evaluations. Our Associate Director for What Works, Susannah Hume, and Dr Michael Sanders (Chief Scientist at the Behavioural Insights Team) presented the basic principles of conducting RCTs through the nine steps of Test, Learn, Adapt (TLA)[ii]. Dr Sonia Ilie from Cambridge presented the text trials as part of the NEACO evaluation, and Eliza Kozman presented her doctoral research, using video testimonials to explore white working class boys’ conceptions of academic study (Eliza wrote about her research here).

Here are some of my reflections from the day.

RCTs shouldn’t necessarily be ‘left to the experts’

The day provided practitioners with the opportunity to work with expert advisors to develop an RCT evaluation strategy for their own programmes or initiatives, using the TLA model presented on the day. One thing that stood out to me during the workshop was the excitement in the room, with colleagues from across the sector discussing their respective outreach interventions and how they could incorporate RCTs within their existing evaluations. I look forward to seeing how the projects discussed on the day progress, and to continuing the conversation at #WPRCT.
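For readers new to the core idea, the essence of the designs discussed is simple: participants are allocated to intervention and comparison groups by chance, so that any difference in outcomes can be attributed to the intervention rather than to who opted in. The sketch below is purely illustrative (the participant names and function are hypothetical, not part of any methodology presented at the workshop), showing a reproducible random allocation:

```python
import random

def randomise(participants, seed=42):
    """Randomly allocate participants to treatment and control arms.

    Allocation is decided by chance, not by who applies first or who
    seems most likely to benefit. A fixed seed keeps the allocation
    reproducible and auditable.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

students = [f"student_{i}" for i in range(10)]
treatment, control = randomise(students)
print(len(treatment), len(control))  # 5 5
```

In practice an evaluator would also record baseline characteristics before allocation, so the two groups can be checked for balance and the eventual comparison of outcomes is fair.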

You won’t always get the results you hope for, and that’s OK

Importantly, the workshop provided a forum for practitioners to discuss their concerns about conducting RCTs. Surprisingly, the main concern expressed by colleagues (above ethical concerns) was the prospect of null results, or results suggesting that their intervention was not having the desired effect on student outcomes. Practitioners worried that their funding might be cut if they did not ‘prove’ that their interventions were delivering as anticipated. This points to a wider cultural change needed in the sector, to give practitioners the space to really test, learn and adapt their interventions.

In practice, null results rarely lead to a programme being wound up immediately; instead they provide valuable insights for refining and strengthening the intervention, especially where the RCT has been part of a mixed-methods approach that yields richer data on how the intervention has been experienced.

Clear next steps are needed for WP evaluation

My main reflection from the workshop was that RCTs aren’t the be-all and end-all of evaluation. Yes, well-designed causal evaluations tell us whether interventions work in achieving their intended outcomes, but they crucially don’t tell us how or why they work. Likewise, qualitative, subject-based approaches tell us about the experience of receiving an intervention, but not whether it has genuinely changed outcomes.

It is therefore nonsensical to have an ‘us’ and ‘them’ mindset between those who promote experimental designs and those who lean towards social realist approaches. The logical next step for the sector is to merge these compatible methodologies: to understand whether outreach activities are working, for whom, and in which context. By combining realist and ‘what works’ approaches, and by focusing on both theories of change and measurable outcomes, we can develop a new gold standard for evaluating access initiatives – realist causal evaluation.

Stay tuned to the blog for further posts about how realist methodologies and causal methods can be powerfully combined to help us identify what matters and know what works for widening participation.

Click here to join our mailing list.
Follow us on Twitter: @KCLWhatWorks


[i] Crawford, C., Dytham, S., & Naylor, R. (2017). The Evaluation of the Impact of Outreach: Proposed Standards of Evaluation Practice and Associated Guidance. [online] Available at: [Accessed 24 Apr. 2018].

[ii] Haynes, L., Goldacre, B., & Torgerson, D. (2012). Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials. Cabinet Office.
