Mary Baginsky is Senior Research Fellow at the Social Care Workforce Research Unit at King’s College London.
I have been conducting research for more years than I am prepared to admit, sometimes even to myself. Much of it has been qualitative, although often as part of a mixed-methods design: quantitative data have helped to define the issues to be explored in more depth through interviews and focus groups, for example, or qualitative work has been used to identify the issues that should be explored through a survey. I am used to weighing concepts such as reliability, validity and generalisability when designing and carrying out projects and reporting their findings, whatever their provenance. I am also as fallible as anyone in failing to see an inherent flaw; that is where the wisdom of colleagues and, however imperfect, the peer review process can be invaluable. However, my experiences of the past few years have given me cause for concern. In conducting a review of the evidence in an area where I have been working, I have been shocked by some of what I have found. It has led me to wonder whether I would find the same level of error if I looked long enough at other papers, books and articles.
When I first started to look at the subject, I came across some studies that were reported extensively, only to discover a close association between the originators of the approach being investigated and the reports produced. It is inevitable in commissioned evaluations that the commissioning group will have early access to the findings, but surely the credibility of both the evaluators and the evaluated is brought into question if the commissioners are also amongst the authors. I also found studies referred to as research and evidence that, on closer investigation and after many emails sent to secure the original reports, turned out to be at best the greyest of grey literature and at worst no more than opinion and think pieces.
The subject of the study is an approach that has been adopted widely by social care in this country and in many others, but whose evidence base has yet to be established. There are studies written by practitioners, sometimes published in journals, based on recent and limited implementations, in which extensive claims about efficacy are made on the basis of light-touch investigation producing accounts of experience rather than evidence. There are also external evaluations commissioned by local authorities to examine what difference, if any, the approach was making. In some cases, but not all, the findings are reported within the constraints of the data examined and the timescales imposed. In others, the literature contributing to the methodology and to the arguments around the findings has been misinterpreted or misunderstood, which raises questions about the application of the design and about the rigour with which the work was conducted.
Most researchers would expect to find fewer issues where literature has been peer reviewed, and in the majority of instances I trust this would be the case. However, I have encountered reports written in this country and elsewhere in which findings, including my own, have been misreported to substantiate the authors’ conclusions. I have also found published studies in which elementary mistakes in the quantitative data have got through: the figures do not add up, or there are errors in claims of statistical reliability. It seems odd that, while these are evident to one reader, they were not identified by the peer reviewers. Of most concern, though, was a report quoted in a chapter of a book that was then picked up by a review of the evidence on this initiative. The reference subsequently appeared in one very influential report produced for a Government department, as well as in an article in a high-impact peer-reviewed journal. The citation was clearly erroneous, as the report could not have been published in the way described, but errors do occur. I tried to trace it back through those who had quoted it, but without success, because each had relied on a reference rather than on the original. Eventually I was able to obtain confirmation from the authors of the chapter in the edited book where the study and reference had first appeared that it had never been published, and my attempts to obtain further details of its authors came to nothing. This is a wake-up call. I shall stick my neck out and say there are many researchers, myself included, who have taken a reference from a reliable source and used it to evidence a statement. This is not something I shall ever do again: unless I see the original and can weigh the evidence it contains, it will go uncited.
So, as well as prompting some soul searching, all this has led me to reflect on how widespread these experiences are. When I have discussed them with colleagues, I have come across similar accounts. I am certainly not suggesting such practices are common, but they do exist, and there are a number of possible explanations. One may be the pressure to publish in order to build or maintain reputations and obtain grants, even though in the long term such practices jeopardise the chances of both. They also risk the reputation of others: if one researcher or author can act like this, might others do the same? Another explanation may be carelessness or inexperience, which, while not acceptable, is less heinous than deliberate misrepresentation. Despite the questions I raised above about peer review, it is an essential process that should be more widely applied: not only should it be used by journals, it should be the norm for all published books and commissioned reports. Assessing the reliability of any study requires the reader to be able to make a judgement about the strength of its findings in relation to its claims and to the future application of the research. I still assume that most research reports I read will allow me to do this, but we all need to attune our antennae, perhaps more finely than we imagined, to identify those that will not.
Very interesting. I think the natural sciences are much better at spotting and correcting mistakes post-publication, and even at correcting the record through errata.
Check out ‘PubPeer’.
(Although the anonymity opens the door to some less pleasant sniping.)
Mark Wilberforce