Harrys and Bruces: Diversity, Registered Reports and the Casualties of a Replication Crisis

In a crisis, panic: that’s pretty much a working definition of a crisis. Fight or flight; not many will acquiesce to platitudes on their coffee cups and “Keep Calm and …whatever”. But look a little closer and you’ll see further sub-types of responder. There’s the Harry Enfield, Mr “You Don’t Want to Do That”. An object lesson in hindsight, these folk will explain how they knew all along that there was trouble brewing (although they were strangely quiet until the bandwagon got rolling). And then there are the white knights, riding in to solve the problem. These latter-day Bruce Willises appear from nowhere, but boy do they know how to fix it! If nothing else, the fact that these archetypes are stalking the field at the moment strongly suggests there’s a crisis in modern psychology.

It’s hard not to pick up on the existential panic around reproducibility, exemplified by Nosek and colleagues’ 2015 paper. There can’t be any doubt that, as issues come (and go), it’s fundamental to psychological science. So well done to @healthpsycleeds and @BPSOfficial for hosting the recent excellent debate at the Royal Society on Replication and Reproducibility in Psychological Science.

We’ll come to that debate, but let’s start with a closer look at these crisis sub-types. Typically, the “Harrys” propose greater restriction on what we can count as the acceptable use of inferential statistics. The system isn’t broken, but it needs to be tightened up. Their solution to a crisis in statistics: more statistics! My instinct is that increased restriction rarely fosters opportunity. With some prominent hindsighters one can almost hear every rung of the ladder clicking into place as it disappears.


Then there are the white knights, the Bruces. From what I can see the Bruces, like the Harrys, propose greater restriction in the name of “open science”. (I object to the term, ‘open science’, in this context. No one, least of all me, is going to object to greater openness. But the dominant formulation of the drive for more ‘open science’ suggests something is closed. If there’s widespread misuse of statistics in psychology I don’t believe that’s happening because we’re not open. There may be ignorance, but there’s no collective cover-up here.) In this vein, there’s increasing traction around the idea of using registered reports in journals (perhaps tying in with funding bodies) to address reproducibility concerns. The idea is that methods and proposed analyses are submitted to a journal prior to the research being conducted on the understanding that the research will, ultimately, be published regardless of the research findings. Several journals have adopted the procedure.

Has this really been thought through? I don’t believe it has. The registered reports approach closes down the opportunities for a counter-intuitive result because, like it or not, exploratory analyses will never have the status of “proper”, pre-registered results, and thus meaningful innovation is stifled. From a practical perspective, there’s a worry that the registered reports approach will either create huge additional costs to conducting science or will lead to a proliferation of speculative submissions, probably from larger, established labs. Registered reports will create demand that far outstrips the already limited supply of potential manuscript reviewers.


The approach may work in some fields, but my chief concern, as a psychologist, is that registered reports will inevitably inhibit diversity in the field. There are two ways this will happen. First, few labs and fewer researchers will ever have the funds, time, or resources to undertake the sort of piloting, planning, and multiple testing suggested by the registered reports approach. As a journal editor, I cannot imagine that the approach would increase the submissions from developing nations or truly diverse researchers and groups. Second, what about other research methods? There’s nothing wrong with a good case study and, dare I say it, an intelligent qualitative analysis can generate ideas that no experiment can. It’s as sad as it is ironic that the casualties of the replication crisis are likely to be precisely those researchers who weren’t part of the problem in the first place! Psychology deserves a far more radical analysis of its evidence base than just flapping around over p-values.

Anyhow, it feels premature to be setting new, rigid statistical rules because there’s currently little consensus on whether we should refine our use of p-values, move to Bayesian techniques, or re-explore the use of effect sizes and confidence intervals. Sure, let’s carefully and calmly discuss what counts as evidence in our field. Let’s have a measured and inclusive debate. That’s the way any discipline develops. But don’t panic. And in the meantime, my advice? “Keep Calm and do an ANOVA”.

Astrology and the Apathetic Academic

There’s been a slew of commentaries on the Government’s White Paper on Higher Education that was announced in the Queen’s Speech this week. I’m sure there will be plenty of time to chew that over, and I’ll write about it in the future. But, instead, this week I thought I’d muse on what I see as apathy across much of academia about the changes. Sure, there has been a lot of tweeting about a two-tier HE system (like we don’t already have at least two tiers?!) and a UCU work-to-rule. But while in many countries across the world academics are seen as an important societal foil to reactionary, unintelligent regimes, there’s only a whimper of discontent in many UK universities. There’s disgruntlement, but not action.


If the piecemeal dismantling of one of the country’s most successful sectors (both in terms of finance and reputation) is not enough to inspire, perhaps it is worth appealing to academics’ self-interest. Even the apathetic academic needs to make some decisions. Will the hours spent slaving away over that meta-analysis return to haunt you when the department decides a robust “restructuring exercise” is the measured response to a dismal TEF outcome? Is it wise to forgo the research for a while to develop a new module or to flip your classroom? In such uncertain times academics need some sort of guide to work out what to do.

Coincidentally, this week I read a couple of new articles on the psychology of astrology. Of course, we know there is no truth in astrology. But plenty of people still believe. It’s a topic that has been surprisingly neglected given its ubiquity. (Check “What’s the Harm in Astrology” for a few really staggering examples of the damage belief in astrology has done.)

Psychologists attribute astrology’s appeal to the Barnum or Forer effect: we read what we want to read into vague statements. It gives meaning to uncertainty and that reassures us. Psychologists love this kind of research – work that shows how bad we are at rational decision-making. Irrational, but good business. Around 20% of people in the UK read their horoscope every day. That’s about 13 million people (unlucky, or maybe lucky, for some).

But, also from a psychological perspective, astrology does something else: it absolves us of responsibility for our actions and from making decisions because everything is, after all, written in the stars. On the face of it, then, it’s the ideal decision-making heuristic for the apathetic academic in these uncertain times. And, really, are academics any less superstitious than anyone else? How many of us haven’t metaphorically crossed our fingers before submitting a manuscript, hoping reviewer C won’t be taking out a bad day on the typos?


Of course, astrology is folly. Sometimes it’s dangerous folly. Nancy Reagan used astrology to “guide” decisions made by her husband Ronald during his eight years as US President. A bad lunar opposition to a retrograde Mercury should have had the Soviets shifting missiles to high alert. Then again, sometimes it seems to be quite accurate. Alexander the Great is believed to have used astrology to plan his military campaigns. And they worked… kind of. Julius Caesar ignored an astrologer and look what happened to him. (Well, strictly it was a haruspex, not an astrologer, but he was bang on with the dates.)

Who’s to know that Jo Johnson isn’t developing the government’s Higher Education policy alongside a well-thumbed ephemeris? After all, it is prudent to check all possibilities before you make policy – typical Capricorn! Perhaps the stars at least allow us to anticipate his decisions; then we can make our own decisions based on that (and, of course, on our own charts). Seems like a plan.

So what is the appropriate reaction from academics to the White Paper? For many, the answer seems to be to ignore it. The apathetic academic needs to attend to more urgent matters anyway. And it’s undoubtedly helpful to have something to tell you what to do if, as a Pisces, you are destined to struggle to decide whether to tackle that pile of essays today or tomorrow, or whether Saturn’s impending conjunction with Venus makes it possible to wander off-piste with the generic marking criteria. In an uncertain world astrology gets you off the hook: it’s a way of avoiding engagement and action against those impending changes in the White Paper. At least for a while…


Welcome to my first blog. The plan is to splurge out my thoughts once a week (or so) and to combine commentary, eclectic and random dissemination of ideas, and general “stuff” from research in psychology, education, higher education… and the world. It goes without saying that views, opinions (and jokes) will be entirely my own; I generally try not to express opinions that are not my own. So anything you read here certainly doesn’t reflect the views of King’s or the Institute of Psychiatry, Psychology and Neuroscience.

This blogging endeavour contrasts with the usual lab webpage (or website), but this is innovation, and I actually think there’s some merit in committing to a weekly (or maybe fortnightly) communication. We’ll see!

For my first week I’ll be writing about the ESRC. And there are plans in place for some other comments on psychology and the media (the court jester of sciences), Higher Education and, of course, life at the IoPPN.

It’s very important to blame Laura Gordon for suggesting that I do this. Her gentle insistence meant that I actually produced something, in the end. I must also thank Hannah Simkin for her offer to be a plan B and write something here if I ever fall under a bus, get abducted by aliens, or am forcibly restrained from writing. (I am not sure she actually agreed to do this, but if I write it here I know she’ll feel she has to.) If that happens, I confidently predict that Hannah would write something really good. Maybe then she’ll get the gig permanently?