Good Practices

DORA’s ultimate aim is not to accumulate signatures but to promote real change in research assessment. One key to this is the development of robust, time-efficient ways of evaluating research and researchers that do not rely on journal impact factors. We are keen to gather and share existing examples of good practice in research assessment, including approaches to funding and fellowships, hiring and promotion, and awarding prizes, that emphasize the research itself rather than where it is published.

If you know of exemplary research assessment methods that could provide inspiration and ideas for research institutes, funders, journals, professional societies, or researchers, please contact DORA.

Other Resources

ASAPbio

ASAPbio (Accelerating Science and Publication in Biology) tracks preprint policies and practices at journals, funders, and universities.

Assessing Scientists for Hiring, Promotion, and Tenure

Six principles for hiring, promotion, and tenure were developed at a one-day workshop in Washington, DC, in January 2017 to address incentives and rewards in research assessment. These principles were published as part of the perspective piece cited below.

Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN (2018) Assessing scientists for hiring, promotion, and tenure. PLoS Biol 16(3): e2004089.

Center for Open Science

The Center for Open Science maintains a collection of open science policies at universities and examples of job announcements that mention open science. The Open Science Framework (OSF) also hosts a project that archives job postings that require or suggest an open science statement from applicants.

The Leiden Manifesto for Research Metrics

The Leiden Manifesto provides ten principles for the appropriate use of metrics in research evaluation. These principles help hold both evaluators and the indicators they use accountable in metrics-based research assessment.

Transparency in Author Contributions in Science (TACS)

The National Academy of Sciences in the United States created a webpage to track journals that engage in fair authorship practices. The page monitors criteria for authorship, responsibilities of the corresponding author, requirements for ORCID iDs, and adoption of the Contributor Roles Taxonomy (CRediT). A related white paper, which led to the creation of the TACS webpage, provides recommendations for research institutes, funders, and societies to increase transparency in author contributions.

UK Forum for Responsible Research Metrics

In 2014 the British government commissioned an independent expert group (supported by the former Higher Education Funding Council for England, now Research England) to critically examine the prospects for using metrics in the evaluation of research, notably in the Research Excellence Framework (REF), a nationwide exercise run roughly every six years to assess the quality of research at UK universities. The resulting report, The Metric Tide, published in July 2015, provides an extensive review of the literature on peer review, the use of metrics and altmetrics, and a statistical analysis of the predictive power of various numerical indicators, including the journal impact factor (JIF). The expert group concluded that, while some metrics may sometimes be a useful adjunct to peer review, they should always be used carefully and with due consideration of context. The report recommends that institutions of higher education consider signing DORA or applying its principles for the responsible use of metrics.

The expert group recommended that a UK Forum for Responsible Research Metrics be established to advise the UK funding bodies on the use of metrics in the REF, to provide advocacy and leadership in the UK, and to establish links internationally. More information, including the Forum’s advice on REF 2021 and a UK progress report, can be found here.