What do preprints need to be more useful in evaluation?

By Naomi C. Penfold & Jessica K. Polka (ASAPbio)

When scientists publish a journal article, they are doing more than just disseminating their work: they’re attaching it to a journal title that will, rightly or wrongly, telegraph signals about its quality. Preprints help to unbundle communication from the other functions of journal publishing, and they allow evaluators—funders, hiring committees, and potential mentors—to read a candidate’s most recent work. But there are situations in which this direct evaluation is difficult: for example, when there is too little time to read all papers for all applications, or when evaluators are not experts in an applicant’s immediate subject area. Therefore, it’s unsurprising that shortlisting for evaluation is often still based on journal names and/or impact factors. Without new ways to communicate summary indicators of quality from close expert reviewers to panel assessors, the utility of preprints in evaluation is limited.

How might indicators of quality be developed for preprints to make them more useful in research evaluation? We present three hypothetical processes to envision in 10 years' time:

Transparency of reporting

To ensure that research can be scrutinized as necessary, adequate information needs to be transparently disclosed. A service could check to see that information (methods, data, code, disclosures) is complete and available.

Examples: CONSORT, STAR Methods, COS Open Science badges

Who might use it: Funders who want to ensure that their grantees are adhering to data-sharing requirements (or even simply to find all the outputs they support). Other evaluators (journal editors, etc.) could also be more comfortable investing time in evaluation, knowing that no information is missing.

Methodological rigor checks

Services could evaluate preprints to determine their adherence to community best practices. These services could review code, data, or statistics; detect image manipulation; and ensure that experimental techniques are technically sound. Building on this, the newly announced Transparent Review in Preprints (TRiP) mechanism for posting reviews on bioRxiv preprints could help provide expert reflections on the soundness and quality of the work, in the form of transparent review reports, earlier than (and perhaps in lieu of) any shared by journals.

Examples: Editorial checks offered by Research Square

Who might use it: PIs, when seeking to verify that work leaving their lab meets their quality standards (especially helpful when collaborations produce interdisciplinary papers containing work outside the lab's regular expertise), and when hiring for their lab, to see whether an applicant's previous work is rigorous and meets the community's standards.

Overlay journals

Community interest in a paper could be recognized by its selection into a curated collection.

Examples: biOverlay, Discrete Analysis

Who might use it: General readers outside of a disciplinary niche, including evaluators looking for candidates whose work generates broad interest, whether selecting new faculty candidates or funding grantees.

We present these scenarios to prompt exploratory discussion about the potential for preprints to help us move beyond journal-level indicators by nucleating evidence that assists article-level evaluation of science and scientists. Looking ahead, we ask:

  • At which point in the publishing process do these scenarios naturally lie? Does it make sense to move them to the preprint stage?
  • Whom do they benefit? Who would pay for them?
  • What are the barriers (community-specific and general) to establishing these scenarios?
  • What fraction of the community would need to use these models to effect widespread change? For example, assume that 5% of researchers voluntarily applied for methodological rigor badges. How would this action impact evaluators (funders, hiring and promotion committees, and potential mentors) and other researchers?
  • Finally, how might detailed preprint reviews and/or a combination of preprint quality indicators be condensed into indicator(s) concise and accurate enough to be useful when shortlisting? How might evaluators select the indicators that are most important to them?

Competing interests

We are employed by ASAPbio, a non-profit that promotes the productive use of preprints in the life sciences. ASAPbio is collaborating with EMBO on Review Commons, a new journal-independent platform that peer-reviews research papers before submission to a journal and uses bioRxiv’s TRiP mechanism to post review reports on preprints.

A public web-commenting version of this document is available at https://docs.google.com/document/d/1ztp33wl80HLp4Sd-zvcufHWZlkQRBUCY5QcIwnT6_n4/edit?usp=sharing
