One of the central questions of research policy is how to organise and carry out research in order to maximise the benefits that derive from it. This question prompts heated debate. Should we prioritise basic research over applied research, or vice versa? Which disciplines are most impactful, and so most deserving of funding? Should particular disciplines be favoured in order to align with national priorities for industrial strategy or societal challenges? Important new insights into these questions are emerging from the dataset made available following the Research Excellence Framework (REF): nearly 7,000 case studies of impact from research are now available for mining and analysis. While the dataset has some limitations, the case studies still present an unprecedented opportunity to start to answer these questions.
In the UK, research outputs from universities are assessed every five years to determine future funding allocations from government. In 2014, for the first time, the Research Excellence Framework (REF) included an assessment of research impact. This component was worth 20 per cent of the score awarded to each institution. The assessment was performed by panels of academics and research users.
The importance of research user engagement throughout REF 2014 cannot be overstated. Research users, or those benefitting from publicly funded university research, played key roles in two ways. First, their evidence was needed to substantiate academics' claims about the wider impact of university research, conveyed through impact case studies and strategies. Second, representatives from beneficiary organisations, such as the British Library, the Overseas Development Institute, the BBC, Royal Museums Greenwich, Oxfam, BT, BAE Systems and the Bank of England, served on the panels that assessed the impact of university research. Their engagement with REF 2014 has been vital to its success and important to the future of the process.
For the first time, Higher Education Institutions (HEIs) were required to submit impact case studies as part of the 2014 Research Excellence Framework (REF). In total, 6679 non-redacted case studies were submitted, and today, we publish a report of the results of our text mining analysis of these data.
The case studies are now available to read online in a searchable database developed by Digital Science, providing a rich resource that has enabled us to demonstrate that UK research has thousands of different applications worldwide. The analysis of the case studies, led by the Policy Institute and the Department of Digital Humanities at King's College London, used text mining techniques to identify 60 impact topics, or areas where research influences society, such as medical ethics, climate change, clinical guidance, and women, gender and minorities. Automated text mining was supplemented with 'deep mines', in which more than 1,000 case studies were read to provide a deeper picture of the data, looking at specific questions such as 'what is the impact and value of research on clinical practice and health gain?' and 'what has been the impact of research on BRIC countries?'.
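As a rough illustration of how keyword-based topic tagging of this kind can work, here is a minimal sketch in Python. The topic names echo those mentioned above, but the keyword lists and the tagging rule are invented for illustration; the actual analysis used far more sophisticated text mining.

```python
from collections import defaultdict

# Hypothetical keyword lists per impact topic. These are illustrative only,
# not the lexicons used in the actual REF case-study analysis.
TOPIC_KEYWORDS = {
    "climate change": {"climate", "emissions", "warming"},
    "clinical guidance": {"clinical", "guideline", "patients"},
    "government policy": {"policy", "legislation", "parliament"},
}

def tag_case_study(text):
    """Return the set of impact topics whose keywords appear in the text."""
    words = set(text.lower().split())
    return {topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws}

def topic_counts(case_studies):
    """Count how many case studies are associated with each impact topic."""
    counts = defaultdict(int)
    for text in case_studies:
        for topic in tag_case_study(text):
            counts[topic] += 1
    return dict(counts)
```

In practice, techniques such as topic modelling would replace hand-built keyword lists, but the counting logic, tallying how many case studies touch each topic, is the same idea.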
The results of this analysis are fascinating, and are also discussed in Research Fortnight and on the HEFCE blog. The take-home message is that research is multi-impactful. For example, the case studies submitted for the unit of assessment on psychology, psychiatry and neuroscience showed that these disciplines contributed to 49 of the 60 impact topics. These included obvious applications such as mental health, but also impact topics including transport, schools and education, and crime and justice. We also discovered that the case studies contained more than 3,700 individual pathways to impact, presenting a real challenge to anyone interested in producing impact metrics.
Many of the case studies were able to provide a clear illustration of the contribution that universities make to society, in a way that has not been revealed before. For example, one case study reported on research showing that the painkiller co-proxamol was the most common drug used for suicides in the UK. This finding led to its withdrawal, and has been estimated to have led to approximately 600 fewer deaths by 2012 in the UK alone.
Our analysis has also shown that the impact of UK universities is truly global: the 'Details of the Impact' sections of the case studies mention every country in the world, with the US the most frequently mentioned, followed by Australia, Canada and Germany.
The case studies provide a rich resource demonstrating the breadth and depth of research impact. They can also help us to change perceptions. For example, the largest impact topic was informing government policy, which was associated with 1233 case studies. The word ‘policy’ was mentioned in 3206 case studies. This is reassuring, given the pre-REF scepticism about whether case studies could capture and articulate impacts on public policy. It was suspected that researchers mostly influenced policy through personal contacts and under-the-radar advisory channels, rather than through specific research that could be described in a case study.
Our analysis is really just the start. There are limitations to using these case studies as research material, such as the universally positive sentiment of their language, the fact that institutions could carefully select which case studies to submit, and the number of identical or near-identical submissions. However, this rich resource provides numerous avenues for future research, enabling us to dig deeper into the global impact of UK university research.
Investing in R&D is seen as key to a healthier population, a better environment and greater socio-economic prosperity. The impacts of these investments need to be measured and assessed to inform policy, and policymakers in governments around the world are seeking such evidence. But how do we know where to build capacity to provide this evidence, especially given the methodological challenges involved, such as long timelines, attribution of cause and effect, and obtaining the right data? How do we deal with the dynamic and serendipitous nature of R&D itself?
Five years ago, those interested in learning about the assessment of the impact of R&D programmes could find limited scientific articles in PubMed and institutional reports on Google, but it was difficult to locate international experts and learn from their experiences.
There were great champions pushing forward important developments. The UK was an oasis in the international desert, led by visionaries such as Martin Buxton and Stephen Hanney from the HERG Group at Brunel University, who developed the Payback model to evaluate the returns of R&D, and Jonathan Grant and Steven Wooding from King's College London and RAND Europe, who developed and implemented path-breaking studies. Canada was another oasis, led by passionate advocates such as Cy Frank, who chaired a panel of the Canadian Academy of Health Sciences that proposed a ready-to-use framework and indicators instrumental in promoting research impact assessment (RIA) best practices in Canada and abroad. Cy was a great man who sadly passed away two weeks ago and will be missed by many.
More needed to be done, and the real need for mutual learning (between the doers of RIAs and the policymakers who use the results to inform policies) was the impetus for creating a new oasis in 2013: the International School on Research Impact Assessment (ISRIA).
ISRIA's mission is to advance knowledge, build global capacity across all fields of science and promote an international community of practice in RIA. The school has a number of principles and values, including an agnostic stance (not advocating one approach over another) and an open, accessible platform through which the community can access tools and learning resources oriented to delivering social value and providing practical, feasible and cost-effective solutions. The school is a gathering place for people to share a common language of research impact, exchange emerging and best practice, learn the tools of the trade and move the science of science forward in a practical sense. It runs over five days and focuses on teaching how to assess the impact of R&D, as well as how best to communicate results to inform policy.
The school is hosted annually in a different continent with the spirit of incorporating global perspectives to address local needs. The inaugural school was held in Barcelona, Spain in 2013, and participants from 17 different countries attended.
The second school was held in Banff, Canada in 2014, doubling the number of participants. The school was oversubscribed, and in order to meet demand Alberta Innovates – Health Solutions (co-founder and organizer of the school) has established local courses based on the ISRIA curriculum that will run annually for as long as the need exists.
Since ISRIA was created, more than 200 people have connected, developed their own RIA plans and shared their implementation experiences. Alumni of previous schools also participate in future events (including a new ISRIA workshop in the Netherlands in collaboration with the Rathenau Instituut), sharing their experiences of implementing their RIA plans.
If you are interested in being part of this community of practice, please join the LinkedIn group 'The international school on research impact assessment' or send an email to firstname.lastname@example.org expressing your interest.
The next ISRIA school will be held in Doha, Qatar on 8-12 November 2015. Registration is now open (http://www.qnrf.org/en-us/ISRIA), and R&D policy wonks around the globe are welcome to attend!
We hope to see you in Doha!
Steering Committee of ISRIA
Executive Director of Performance Management and Evaluation
Alberta Innovates – Health Solutions, Canada
Blog AquAS in English http://blog.aquas.cat/?lang=en
The UK invests nearly £30 billion a year in research: £7 billion from public sources and £17 billion from the private sector, with the remainder coming from abroad. This money funds a spectrum of 'basic' and 'applied' research, from improving our fundamental understanding of the cosmos to testing the effectiveness of new drugs on patient populations.
There is considerable interest in understanding the value or societal ‘impact’ of these research investments, especially those supported by the public purse. For example, in the UK, REF 2014 included an assessment of impact through the peer review of 6,975 case studies, whilst the research councils and medical research charities have implemented an annual impact survey through ResearchFish.
There are many methodological challenges to the rigorous and robust assessment of research impact, including, for example, the time lag between investment and impact, how to attribute impacts to multiple research streams, and how to assess the value of different types of research impact. Each of these questions presents an important area for scholars interested in the 'science of science'.
In 2011 the MRC announced the first of three calls for grants on the Economic Impact of Research, 'to understand better the link between research and wider economic and societal impacts, and to use this understanding to improve strategies for the future support of research'. This included studies attempting to improve the methodologies underpinning the assessment of research.
We – a team from King's College London, RAND Europe and Cardiff University – submitted a successful grant proposal to try an experimental approach to determining how researchers and the general public value different types of research impact. Although some efforts have been made to identify and quantify the impacts of biomedical and health research, little is known about how the public values these impacts and how the public's view compares with that of researchers.
The study aims to address this gap by refining and adapting a survey-based approach known as Best-Worst Scaling (BWS) to analyse the relative valuations of research impact as perceived by both the general population and researchers. This is the first time that BWS has been applied to the valuation of research impact, although a previous Canadian study demonstrated the utility of a traditional discrete choice modelling approach to valuing research. BWS provides additional insights over traditional discrete choice modelling.
At the outset of the study, we identified a set of different types of impact that survey participants could value: for example, 'new knowledge', 'health gain', 'location of job creation', and so on. Eight domains were identified by reviewing the literature on existing impact taxonomies, conducting four focus groups with the general public, and holding a series of key informant interviews with researchers. For each of these domains we then identified four levels of potential impact that were substantially different from each other. We then piloted these domains and levels in a BWS experiment with the general public and a small sample of researchers.
In the survey, participants complete eight choice tasks, in each of which they are asked to identify the 'best' and 'worst' of eight impact statements drawn from an underlying experimental design, one statement for each of the domains. Respondents also indicate the second-best and second-worst impacts. Using these responses we can model the marginal utilities of different types of research impact and, from this, determine their relative valuation.
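For readers unfamiliar with BWS, the simplest first-pass analysis of such data is the 'best-worst count score': for each item, the number of times it was chosen as best minus the number of times it was chosen as worst, standardised by how often it was shown. Here is a minimal sketch; the item names are borrowed from the example domains above, and the actual study estimates marginal utilities with formal choice models, which is more involved than this counting exercise.

```python
from collections import Counter

def best_worst_scores(responses, n_appearances):
    """Compute the standardised best-worst count score for each impact item.

    responses: list of (best_item, worst_item) pairs, one per choice task.
    n_appearances: dict mapping each item to how often it was shown.
    Score = (times chosen best - times chosen worst) / times shown,
    ranging from -1 (always worst) to +1 (always best).
    """
    best = Counter(b for b, _ in responses)
    worst = Counter(w for _, w in responses)
    return {item: (best[item] - worst[item]) / shown
            for item, shown in n_appearances.items()}
```

An item always picked as best scores +1 and one always picked as worst scores -1; in practice this simple ranking often closely approximates the one recovered by full utility modelling.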
Responses from the pilot, though small in number, were interesting and encouraging for the main stage of data collection. Through this blog we hope to provide further information that will encourage more researchers to take the survey. As the pool of researchers available for this study is limited, every response contributes significantly to its success.
If invited, we hope that you will take part in the full survey which will run from the second week in February until just after Easter. This is a novel and important project relevant to science policy and today’s broader discussions on research impact.
Thank you in advance for your participation!
Jonathan Grant, King’s College London
Peter Burge, RAND Europe
Dimitris Potoglou, Cardiff University