The risks of risk-based approaches: predicting higher education quality with metrics

Flickr – Biking Nikon SFO

Like many policy areas in recent years, the quality assurance of higher education has faced calls to become more ‘risk based’ and to make greater use of metrics to predict how Higher Education (HE) providers may perform in the future. The attraction of a risk-based approach is clear: HE providers perceived as low risk by the Quality Assurance Agency (QAA) are freed from the burden of regulation and allowed to flourish, while the QAA focuses on ensuring quality at those providers it perceives as high risk. But is this currently feasible? In this blog post, Alex Griffiths describes how new research is uncovering the risks of a risk-based approach.

A green paper published last month claims that adopting such an approach will “safeguard quality and excellent student outcomes”, “ensure value for money for the public purse”, and “focus oversight where it is needed most”. But is this true? To achieve these aims, the QAA must be able to use the available data to identify which HE providers deserve its attention. Comprehensive research by our group at King’s College London suggests this is not possible.

Although using metrics to predict quality failings, and therefore to prioritise oversight activity, sounds straightforward, until now there has been no empirical research establishing which indicators, if any, would allow the QAA to do so. To address this gap we paired the outcomes of reviews of HE providers (including universities, colleges and alternative providers) conducted by the QAA with the latest data available prior to the start of each review, and used modern machine-learning techniques to determine which combination of indicators would have best predicted those outcomes. Our hypothesis was that if past metrics could have predicted which HE providers would subsequently be found to be unsatisfactory (high risk), then a risk-based approach would likely work in the future.
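
As a rough illustration of this pairing-and-modelling step, the sketch below uses hypothetical file and column names (qaa_review_outcomes.csv, provider_indicators.csv) and scikit-learn’s gradient boosting as a stand-in for the machine-learning techniques; it is a minimal sketch under those assumptions, not the study’s actual pipeline.

```python
# Illustrative sketch only: file names, column names and the choice of model
# are assumptions, not the study's actual pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: one row per QAA review (flagged 1 if found unsatisfactory),
# and one row per provider reporting date of indicator data (NSS, DLHE, HESA,
# finance, staffing, ...), assumed here to be numeric.
reviews = pd.read_csv("qaa_review_outcomes.csv", parse_dates=["review_start"])
indicators = pd.read_csv("provider_indicators.csv", parse_dates=["reporting_date"])

# Pair each review with the latest indicator data available before the review began.
paired = pd.merge_asof(
    reviews.sort_values("review_start"),
    indicators.sort_values("reporting_date"),
    left_on="review_start",
    right_on="reporting_date",
    by="provider_id",
    direction="backward",
)

X = paired.drop(columns=["provider_id", "review_start", "reporting_date", "unsatisfactory"])
y = paired["unsatisfactory"]

# Ask how well a weighted combination of indicators would have predicted the
# outcomes, using cross-validated ranking quality (AUC) rather than raw accuracy.
model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f} (+/- {scores.std():.2f})")
```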

Applying the data

For universities the available data included the results of the National Student Survey, the Destinations of Leavers from Higher Education survey and the performance indicators compiled by the Higher Education Statistics Agency, in addition to data concerning staffing, students, applications, research, finance and past review performance. The most accurate model built on this data predicted that universities accepting a greater proportion of mature applicants than in the previous year, overspending on research, and financing a greater proportion of their staff from principal sources than in the previous year were more likely to be deemed unsatisfactory. However, the model performed poorly, as there was little differentiation in risk scores between universities: had all of the historic reviews been prioritised in order of their risk scores, over 90 per cent would have had to be undertaken before all unsatisfactory provision was detected. Applying the model to new data also produced questionable results.
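
The “over 90 per cent” figure reflects a simple thought experiment: sort providers by predicted risk and count how far down the list reviews must go before every unsatisfactory provider has been reached. A minimal sketch of that calculation, with made-up scores and variable names, might look like this:

```python
# Illustrative sketch of the evaluation described above: rank providers by
# predicted risk and ask what share of reviews would have been needed to catch
# every unsatisfactory outcome. Scores and names are made up for illustration.
import numpy as np

def share_of_reviews_to_catch_all(risk_scores, unsatisfactory):
    """Fraction of reviews, taken in descending risk order, needed to detect
    every provider actually found unsatisfactory."""
    order = np.argsort(-np.asarray(risk_scores))    # highest risk first
    flags = np.asarray(unsatisfactory, dtype=bool)[order]
    last_hit = np.max(np.nonzero(flags)[0]) + 1     # position of the final unsatisfactory provider
    return last_hit / len(flags)

# Toy example: with near-identical risk scores, almost the whole cohort must be
# reviewed before the last unsatisfactory provider is reached.
scores = [0.52, 0.51, 0.50, 0.50, 0.49, 0.49]
flags  = [0,    0,    1,    0,    0,    1   ]
print(f"{share_of_reviews_to_catch_all(scores, flags):.0%} of reviews required")
```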

A similar story emerged for the less data-rich further education colleges and alternative HE providers. In both cases a combination of financial indicators and past review performance produced the most accurate model, but the same issues arose: there was little differentiation in risk scores between providers, nearly all would have had to be reviewed to detect all the unsatisfactory performance, and the models’ application to new data was far from convincing.

Pixabay – RSunset

Too much risk

Therefore, even with the benefit of perfect hindsight, thousands of indicators considered in a variety of ways, and modern statistical techniques in effect trying every possible weighted combination of indicators, no model was able to reliably predict the outcome of a provider’s QAA review. Although it is not clear why performance indicators cannot predict the outcome of quality assurance reviews, it is clear that they cannot. Our research shows that a purely data-driven, risk-based approach to predicting higher education quality is not feasible. Any attempt to introduce such an approach will result in high-quality providers being unfairly burdened with additional reviews, and stigmatised as being at high risk of quality assurance failings, while poor-quality providers may go undetected for long periods of time.

One indicator, however, did turn out to have some potential use in prioritising quality assurance reviews: provider type. We found that a significantly greater proportion of alternative HE providers were found to be unsatisfactory compared with either universities or further education colleges. Whilst this finding is of limited use in a risk-based system when students at alternative providers make up just 2% of the overall cohort, it raises questions about the Government’s desire to reduce the barriers to entry to the higher education market for new alternative providers.

Find out more about this research.

Alex Griffiths is a PhD Student in the School of Management and Business at King’s College London. Alex is undertaking ESRC-funded research into risk-based approaches to quality assurance in higher education and is working in collaboration with the Quality Assurance Agency. The work builds on Alex’s academic background in mathematics and risk analysis and his years of work at the Care Quality Commission developing risk assessment tools in health and social care.

This blog by Policy Institute at King’s is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
