Wednesday, July 29, 2009

Anatomy of peer review

Just did a survey from "Sense About Science" on the relevance and adequacy of peer review in academic research. It gave me the chance to vent some of my frustrations with the process as it stands -- incompetent reviewers, editors wasting my time as a reviewer on unmitigated trash that they should have rejected out of hand, acceptance/rejection decisions that clearly ignore the reviewers' recommendations, and the biggest one of all: the time from submission to acceptance and ultimately to publication. The last gripe is of particular importance in today's electronic world, where issues evolve fast: what is relevant today may be either irrelevant or too-little-too-late tomorrow.

But let's step back and define the peer review process, its purpose, and how well it accomplishes it. What is peer review? Well, it is just that: a review of the write-up of your research by a group (usually 2-3, but I have had as many as 7) of your peers. Here is how it usually works, for those who have not partaken of the publication process. You submit a paper to the editor of the journal of your choice, asking them to consider your work for publication. The editor in most cases will take a quick look at the paper and decide whether it even merits review or whether it is a complete waste of time. I have to say that, as a reviewer, I appreciate this part of the process and view its absence as disregard for my time. Once the editor has decided that the paper is worthy of further critical review, he/she, sometimes with the help of the authors, identifies the appropriate reviewers and solicits them to undertake the review with a 2-week turnaround. If the reviewers accept, all is well; if not, other reviewers need to be solicited.

Now, "peer" in research is only a little less difficult to define than in the judicial system. Is it anyone with an advanced degree comparable to yours? Is it well-known scientists in the field that you write about? Is it the select group of people who are intimately familiar with the narrow questions that you focus on? Go too broad, and the reviewers have no context for your work, and context is so important to understanding whether you are presenting something of value or just composting old rubbish. Go too narrow, and you get into the "everybody-knows-everybody" situation, where turf and personality wars may win over substance.

Well, OK, so now there are reviewers willing to cast a critical eye upon your work -- now what? If you are lucky, you get an e-mail from the editor in 8 to 10 weeks with 1) outright rejection, 2) immediate acceptance with no or minor revisions, or 3) extensive comments from the reviewers, each of which has to be addressed, line by tedious line, both in the manuscript and in a separate document aimed at soothing their concerns. Now, don't get me wrong: many a paper of mine has benefited from careful and cogent reviews by very smart people. On the other hand, I have also spent countless hours trying to be complete and respectful in my responses to utter idiocy.

So now, after spending days to weeks responding and revising, you are ready to resubmit. And here is another place where some journals are great: the editor takes the time to look at the responses and makes a quick decision to accept or reject. Others will send the responses and revisions back to the reviewers, and you have to wait another 6-8 weeks for them to get back to you. If you are lucky, all reviewers are now happy and have recommended acceptance; if not, you have to draft a second round of responses and revisions, and the wait begins all over. My favorite is when, in the third round of reviews from 7 reviewers, all but one are happy, and the last one has found a missing comma, which you have diligently corrected, and the journal returns your responses and revisions to the reviewer for their approval... Absurd, but it has happened to me.

Finally, you have satisfied everyone and your paper has been accepted for publication. The trend nowadays is for journals to have a rapid online publication shortly (within days to weeks) after acceptance. However, with some journals, even the e-publication takes another 3-4 months. Argh! So now it has been 10 months from the time you first submitted your paper to the time it sees the light of day. And the results are embargoed until publication, so you cannot share them with your peers, the public, the press, or even your mother, at the risk of being spanked (well, OK, just withdrawn from publication). Is this really the best we can do in the 21st century?

All of my ranting notwithstanding, peer review is necessary, as major decisions about our health are made based on this research. The question is how it can be improved within the context of our current needs and capabilities. What would it be like if we moved away from chunking science into the packaged and self-contained bits we think of as manuscripts and started to think of this work as constantly evolving? Here is what I mean: I put a paper with my recent data on a web site where anyone can come and look at it and critique it, provided they are willing to disclose their identity and credentials. I get to see their comments in real time, respond to those that seem legitimate to me (or to the site's editorial board), and revamp my paper accordingly. Yes, things become a bit of a moving target, but this type of process incorporates the dynamic nature of the evolution of scientific thought. It would for sure take us out of our comfort zone, but it may also prevent traditional journals from becoming irrelevant.

Or perhaps, in Twitter fashion, we should hold all our scientific communication to 140 characters or fewer?

1 comment:

  1. I'm fairly conflicted in some matters when it comes to peer review. I deplore the increasing number of journals that advertise themselves as peer-reviewed but are not recognisably so (I won't cite my usual example as they appear to be attempting to tighten up their procedures, but they should know who they are :D).

    Your proposal about sharing data has some interesting points of correspondence with that proposed by Profs McCormack & Greenhalgh et al: Seeing what you want to see in randomised controlled trials: versions and perversions of UKPDS data.

    "We put it to the editors of medical journals that they should, in the interests of minimising interpretation bias, require investigators initially to present the results of clinical trials with a minimum of discussion so that individual clinicians and patients can decide if the results are clinically important. In addition, we suggest that editors should continue to provide space for readers to enter a discourse about the meaning and clinical importance of those results, and indeed they should actively stimulate discussion, perhaps by encouraging publication of dissenting views. Furthermore, when new evidence challenges old beliefs - let it."

    I strongly agree that there has to be a better way of achieving a high-quality peer-review process. Sadly, some organisations need to grasp that this is so important that it cannot continue to be supported by what is notionally people's 'leisure' (a euphemism for unrecompensed) time.