The research assessment ecosystem hinders innovative publication models that address our biggest problems in research

Guest post by Rebecca Lawrence, Managing Director, F1000 (part of Taylor & Francis Group), and Vice-Chair of DORA.

The traditional publication model of submit, then peer review (mostly anonymously), then publish has been the mainstay of research publishing for hundreds of years. It made a lot of sense in the print world, where page restrictions were crucial for managing the costs of printing and distribution, and interactions took place in person or by post. The digital age, however, enables us to rethink this process: to better democratise access to, and sharing of, new discoveries; to accelerate research discovery; and to better support the translation of discoveries into impact.

At the same time, alongside the recent growth of AI capabilities and large language models (LLMs), we are seeing an explosion of problematic behaviour and content, ranging from poor research practice such as inappropriate citations, p-hacking and selective reporting of data, through to fraudulent behaviour such as image manipulation, paper mills, fabricated results and more (Alam 2023). All these behaviours, whether deliberate or not, damage our ability to trust parts of the scholarly record. This risks further eroding public trust in research, reducing the willingness of some parts of society to take up solutions to societal problems (such as vaccine hesitancy), and potentially undermining public support for research funding more generally (as we are already seeing in some parts of the world, e.g. here, here and here).

The innovative F1000Research open research publishing model was launched in 2012, born of a desire to start again: to take the technology and tools of the time and redesign how we share and critically review new discoveries, and how we support others to build on them. Transparency is a crucial theme throughout the model, with the aim of minimising biases in what is viewed and valued by the community, as well as supporting verification and, consequently, rigour and integrity.

Prior to the launch of bioRxiv and the larger-scale uptake of preprinting in the life sciences and most other disciplines beyond those covered by arXiv, the F1000 model was the first to incorporate a preprint element. It was the first publishing venue to require that all data and code underpinning any experiments described in an article be made FAIR (Findable, Accessible, Interoperable, Reusable). And it was the first to require fully open peer review (open reviews and open identities) on all publications, not just those that make it through the peer review process and go on to publication, as in traditional journal models.

The purpose has been to create a truly author-driven model that enables any researcher to share any and all of their findings, when they are ready, regardless of scope or perceived impact, as long as the work meets stringent ethics and integrity requirements and any relevant community standards. Articles that pass these checks are published and labelled as ‘awaiting peer review’; expert reviewers are then invited, and their reports are published alongside the article together with the reviewers’ identities. Authors can, and usually do, respond to the review reports, and can revise as many times as they wish. If an article achieves an adequate level of positive review, it is deemed to have ‘passed’ review and is then indexed by major indexers such as Scopus, PubMed, MEDLINE and others.

There are now numerous venues and publications using many different variations on this kind of approach, most sharing the essence of what is now typically described as the Publish-Review-Curate model. Prominent examples include eLife, MetaROR, Review Commons, and Peer Community In, to name just a handful. It is a burgeoning and diversifying community of innovators and experimenters, each with their own flavour of the approach tailored to their own research communities. There is also a growing group of organisations developing different types of verification markers for published content, enabling users of all types and backgrounds to judge how much to trust the content, rather than relying on the journal brand or the Journal Impact Factor (JIF) as a misleading proxy.

The challenge isn’t innovation; it is adoption

As I highlighted at the Royal Society’s Future of science publishing event in July 2025, despite this growing number of examples, the same issues remain: the infrastructures and systems that underpin the scholarly system are still not set up to support such experimentation or to enable large-scale take-up by research communities. This is despite strong support from many parts of the research community, who can see the obvious advantages: speed, less effort wasted hawking an article around different journals, and the (potential) ability to gain credit for all of your research outputs, including your peer reviews.

The biggest challenge is the way we typically assess and incentivise researchers and the research processes they use, which in effect disincentivises the use of these new publishing approaches and models. The lack of recognition for content published outside journal-like venues with associated journal-level metrics such as the JIF hinders researchers from adopting new models, despite regular community calls for greater experimentation in publishing. The same applies to the lack of recognition for outputs beyond traditional research articles and for activities beyond publishing.

At F1000, one reason for partnering with major funders such as Wellcome, the Gates Foundation and the European Commission, as well as numerous institutions and learned societies, was to give prospective authors the reassurance that their publication would ‘count’ despite the lack of a JIF. This has worked: most of the platforms we provide for funders are now the single most used publication venue for their grantees. However, grantees remain significantly hesitant to publish their most significant findings there, or in related venues without the associated funder brand. This is despite the obvious advantages: being (technically) able to receive credit for all the outputs of your research (not just the major or positive findings), the speed with which new results can be shared and built upon by others, and a more accountable peer review approach.

Not only that, but the current, traditional research assessment process is in fact the major cause of the problems we see around research integrity and trust in research, and around the lack of speed and progress in research discovery and its associated impact — the very problems that many of these new publication models are designed to address.

We need a three-step process

Step 1: Policy change. Whilst it is true that the problems with the inappropriate use of journal-based metrics have been known for a long time and progress to date has been slow, there appears to be growing recognition of the issues by those who can make a difference (namely, research performing organisations and research funding organisations), and growing momentum for change.

DORA has been running since 2012, initially as a declaration but now as a truly global initiative: developing tools and frameworks to support organisational and policy change, running communities of practice, and capturing the growing body of case studies and exemplars of alternative ways of evaluating research and researchers. DORA is now working increasingly closely with the plethora of other major initiatives tackling the same issue through different lenses, such as CoARA, FOLEC-CLACSO, the GRC Working Group on Responsible Research Assessment (RRA), Science Europe, HELIOS Open and more.

As part of this, DORA has co-developed an Implementation Guide on RRA for Research Performing Organisations to support the first steps on the journey away from traditional research assessment practices. DORA is now working on similar guides for research funders and for publishers.

Step 2: Training. Whilst it is reassuring to see the growing number of research departments and institutes, research performing organisations, research funders and others adopting approaches and policies that support RRA, I believe we also need to work to ensure that those actually conducting the assessments change their practices. We therefore need to do much more to train researchers (especially more senior colleagues) to think differently when they sit on assessment panels, act as committee chairs, and so on. And all of us have a responsibility to call out poor and problematic practices and behaviours in any such discussions and processes, as and when we see them. Only by doing this will we turn the tide on what is perceived as the norm within the community, much as we have seen in other walks of life.

Step 3: Instilling belief that the new approach is actually being used. Finally, we need to build trust in the system at all levels (from researchers themselves through to research administration departments submitting to national assessment exercises such as the UK REF) that following new policies and using alternative approaches to research assessment genuinely will maximise your chances of success, whether in funding or career prospects. The stakes in these decisions are often so high (they could affect the funding of a whole university, or make the difference between having a career, and therefore a job and an income, or not) that I often hear people say they cannot afford the risk of following the new policy, because they are not convinced that those in the room doing the assessment won’t revert to the old journal-based metrics. We therefore need to publicise the growing number of exemplars, so that researchers and anyone else submitting to such an assessment exercise can trust that the assessment system they are in has genuinely changed.

If we can continue to build on the growing momentum for change in research assessment, we will not only make a significant difference in addressing the growing number of problematic practices and behaviours that are undermining research integrity and trust in research; we will also enable significant uptake of these alternative approaches to sharing new discoveries, so that they start to become more of the norm.
