Why are indexers unable to see that peer review can be more than a thumbs up or a thumbs down?

Guest post by Fiona Hutton, Damian Pattinson, and Peter Rodgers, from eLife.

Journals and peer review are unlikely to change for the better if the companies that operate scientific indexes and citation databases continue to stifle innovation.

A great deal has changed in scientific publishing in the past few decades – preprints, ORCIDs and DOIs, open-access journals, megajournals, data and code in papers, more detailed methods sections, and many other developments. At the same time, some things have hardly changed at all. In particular, the peer-review process at journals works much the same as it always has: journal editors decide which submissions should be peer reviewed, peer reviewers comment on the strengths and weaknesses of those submissions and, ultimately, each submission is either accepted or rejected.

Of course, there is a reason for this – authors and readers really value the role that peer review plays in maintaining standards in the scientific literature.

But peer review is not perfect. It can be slow, it is prone to bias, it is not well-suited to spotting fraudulent research, and it can delay or prevent the publication of truly innovative work. Moreover, although a referee typically spends about five hours peer reviewing an article, at most journals all this work is reduced to a yes-no decision, and the insights contained in the peer-review reports rarely see the light of day.

Authors also compete for space in a small number of “high-impact” journals in the hope of publishing a paper that will secure them tenure or their next grant, job or promotion. This has consequences:

  • We focus on the wrong thing: Too much attention is paid to the name of the journal in which an article is published, rather than the actual scientific content of the article.
  • Time and effort are wasted: Most articles get published somewhere, eventually. If rejected at their first-choice journal, authors move on to a second journal, and sometimes to a third or fourth… The end result is that the time spent by referees is often wasted, as the peer reviews on rejected papers are never published.
  • Inequality is entrenched: Access to top publishing venues is easier for well-resourced researchers at prestigious institutions, creating a cycle where visibility and career progression are tied to established networks rather than the quality of the research.

eLife is committed to changing how research is peer reviewed and published. In particular, we want the publishing process to be more open, to be more efficient, to be more affordable, and to serve the needs of scientists rather than the needs of publishers.

The eLife model

At eLife we believe that the content of a research article is more important than the name or impact factor of the journal in which it is published. Our approach to scientific publishing – which we adopted in 2023 – combines the immediacy and openness of preprints with the expert evaluation provided by peer review.

eLife does not accept or reject articles after peer review: rather, every submission that is selected by editors for peer review is published on the eLife website as a Reviewed Preprint. This is a new type of scientific publication that includes the article, feedback from the expert peer reviewers, and a response from the authors (if available).

During the review process each peer reviewer is asked to write a Public Review that describes the strengths and weaknesses of the article, and to recommend ways in which the authors could improve it. The editor and the peer reviewers also write an eLife Assessment that summarises the significance of the findings (on a scale ranging from landmark to useful) and the strength of the evidence (on a scale ranging from exceptional to inadequate).

We believe that this approach is better for science and scientists than one that relies heavily on journal names and journal-level metrics like the impact factor.

This new model has clear implications:

  • Peer review is a discourse, not a barrier. By publishing the reviews and the authors’ response to them, the eLife process increases openness, trust, and accountability, and allows readers to see where there is consensus and where there is disagreement. Readers can also see whether, and how, the authors have engaged with differing viewpoints.
  • The peer review process is more efficient. eLife publishes reviews for all articles that have been selected for peer review. Most journals, on the other hand, only publish reviews for articles that have been accepted for publication. The reviews for rejected articles are not published, and those articles typically go through further rounds of review and rejection at other journals – which wastes a great deal of researcher time and often means that significant flaws raised by reviewers are never made public or addressed.
  • Career progress is based on the merit of the research. Funders, hiring panels and collaborators can evaluate researchers based on articles that have been assessed by experts on an article-by-article basis, rather than having to rely on the journal name as a proxy for the merit of the research.

Reaction to the eLife model

For many years eLife was included in the Science Citation Index Expanded (SCIE), which is part of the Web of Science platform maintained by a company called Clarivate. Being included in the SCIE meant that eLife had a journal impact factor. In June 2025, however, eLife was moved from the SCIE to the Emerging Sources Citation Index (ESCI) and had its impact factor removed. Around the same time, eLife was also moved from the Scopus Journals Collection to the Scopus Preprints Collection.

According to Clarivate: “When making any policy decision [. . .] we cannot make exceptions or create specific policies for individual journals that might compromise research integrity. Cover-to-cover indexing of journals in which publication is decoupled from validation by peer review risks allowing untrustworthy actors to benefit from publishing poor quality content, and conflicts with our standard policy to reject/remove journals that fail to put effective measures in place to prevent the publication of compromised content.” According to Scopus, which is owned by Elsevier: “the certification in the current model of eLife is not fully consistent nor complete. Scopus, in consultation with the CSAB [Scopus Content Selection and Advisory Board], concludes that eLife does not fit the criteria and definition of a journal in Scopus anymore.”

We have a number of issues with these statements. First, as mentioned above, while researchers greatly value the role that peer review plays in maintaining standards in the scientific literature, peer review is not perfect. Peer review certainly improves research papers, but it does not stop papers that are flawed – or fraudulent – from being published. To state that peer review somehow validates or certifies papers is to overstate its function in the publishing process.

Moreover, thinking of peer review in terms of validation or certification is to think of it as a binary process – papers are either validated/certified or they are not. However, reality is not like this – papers fall on a spectrum or, more accurately, on a number of spectra. At eLife, for example, we assess papers on significance and strength of evidence (see above). The fact that the same paper can be rejected by one journal and then accepted by another shows that it is not helpful to think of peer review in terms of validation or certification.

We also contest the claim that approaches in which “publication is decoupled from validation by peer review” might lead to the publication of compromised content. eLife takes research integrity very seriously, and conducts a series of checks on articles (before and after they are sent for review). Further checks are carried out before articles are published as Reviewed Preprints on the eLife website, and any concerns about research integrity raised during the peer-review process are investigated. Moreover, the fact that all articles peer reviewed by eLife have to be available as preprints increases scrutiny of these articles – both by readers and also by the preprint server itself, which conducts its own checks.

Some have pointed out that the new eLife approach to peer review and publishing means that papers which reviewers have deemed “incomplete” (main claims are only partially supported) or “inadequate” (methods, data and analyses do not support the primary claims) in terms of strength of evidence are being published alongside papers that reviewers have deemed “solid” or better. This is true, but this criticism overlooks two key points: i) while the first version of a paper may, for example, be “incomplete”, there is a very good chance that the second version will be improved if the authors can address the feedback from the peer reviewers; ii) clearly stating – as eLife does on occasion – that the evidence in an article is “incomplete” or “inadequate” is surely better than saying nothing and simply waiting for the article to appear in another indexed journal, accepted by a different group of editors or reviewers with a different (opaque) set of criteria.

While we acknowledge that allowing papers with poor study design or methods to be published seems at first glance to be counterintuitive, doing so with the critiques made immediately visible to any reader, as our model does, is a far more honest and responsible way of communicating science. The alternative – in which reviewers and editors decide what does or does not become a version of record (VOR) through the terms they use to describe it – allows the problems of gatekeeping by reviewers (and the cascade of submissions until a journal finally accepts the paper) to persist.

What’s important to note is that publishing a paper in which the data are incomplete or inadequate is not – as Clarivate seems to suggest – the same as saying that data have been manipulated or, worse still, that data have been fabricated or plagiarised. The submission of fake papers written by “paper mills” is a genuine problem, as are organized efforts to compromise the peer-review process at journals: however, these problems affect all journals, not just those in which publication and peer review have been decoupled. At eLife, such papers would either not be sent for review, be withdrawn if discovered during the review process or, if published, be subject to correction or retraction.

Outlook

Our recent experiences at eLife show how difficult it is to drive change in scientific publishing. eLife’s model may not be perfect, but it is an honest attempt to show that publishing can be different, and that alternative models are viable. The resistance we have seen should force the scientific community to confront a question that is often avoided: who controls the scientific literature? The answer, it seems to us, is not the scientific community. Scientists might do the research and write the papers; they may referee papers for journals and sit on editorial boards; but somehow power and control over vital issues – what is peer review, what is a journal, what gets published – have been ceded to organizations beyond the scientific community, like Web of Science and Scopus. This is not healthy for science or the scientific community.
