DORA – accentuating the positive

The San Francisco Declaration on Research Assessment (DORA) is well known for its strong position on the need to eliminate the use of journal-based metrics in decisions about the hiring, promotion, or funding of academics.

As such, it is sometimes taken to be an initiative focused merely on criticising the undue influence of one specific metric, the journal impact factor (JIF). But to see DORA only in those terms is to overlook the many positive prescriptions that the declaration lays out for reforming research assessment. We want to make sure that due attention is paid to its recommended actions for improving policy and practice, because these are key to effecting real change in the evaluation of research and researchers.

In addition to DORA’s primary recommendation to remove the JIF from research evaluation, 17 other recommendations are aimed at stakeholders involved in different aspects of the process – funding agencies, institutions, publishers, metrics providers, and researchers. These provide pragmatic ways to adhere to the spirit of DORA by implementing more inclusive, informed, and objective mechanisms to reward research achievement. We won’t list them all here, since the declaration is readily available (in a growing number of translations) from our website, but will give a flavour.

Recommendations 4 and 5, for example, are aimed at universities and research institutions and emphasise developing transparent and rounded approaches to assessment that take into account the content and variety of the different outcomes of research:

  4. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
  5. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For researchers themselves, it is recommendations 15–18 that apply, outlining the responsibilities of individual signatories and suggesting examples of good practice:

  15. When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.
  16. Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.
  17. Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published articles and other research outputs.
  18. Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.

In a similar vein, recommendations for funders (2–3), publishers (6–10), and metrics providers (11–14) emphasise the benefits of being open about assessment criteria, of appreciating the importance of diversifying the sources of information used, and of recognising our shared responsibility for ensuring the quality of our evaluation processes.

DORA is not a legal document or a holy text. The recommendations are brief and largely suggestive rather than rigidly prescriptive, allowing a useful degree of flexibility in interpretation. Some further ideas about ways to implement DORA are given in this earlier blog post, and additional suggestions for journals and publishers are provided in a recent article. We expect the range of approaches to expand, evolve, and differentiate over time, because what gets evaluated may carry different weights in different geographical and disciplinary contexts. There is no one-size-fits-all recipe for reform and, indeed, we would encourage any experimentation with implementation that adheres to the spirit of the endeavour. Our community manager and the members of the steering group are always happy to discuss any questions about what it might take to get DORA working at your organisation.

For example, DORA does not require signatories to abandon all quantitative indicators. Rather – and here the declaration is consistent with, and complementary to, the Leiden Manifesto – it is a call to use indicators in context, at the appropriate level, and only as supporting evidence within richer, broader evaluation processes.

For publishers, signing DORA does not impose a blanket ban on mentioning the impact factors of their journals. However, we would certainly expect those that have signed to advertise that they have done so (preferably prominently on their journal metrics pages, and perhaps alongside the relevant citation distribution) and to explain what that means for them and their authors. They could, for example, advance the arguments for not using aggregate metrics such as the JIF in the evaluation of individual researchers or their papers. The EMBO Journal provides a good example of this approach.
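As an aside on why aggregate journal metrics say so little about individual papers, it may help to recall how the JIF is constructed (this gloss is ours, not part of the declaration). It is an arithmetic mean taken over a journal's citation distribution, which is typically highly skewed:

```latex
% Standard definition of the journal impact factor (JIF) for year Y.
% C_Y             : citations received in year Y by items the journal
%                   published in years Y-1 and Y-2
% N_{Y-1}, N_{Y-2}: numbers of citable items published in those years
\[
  \mathrm{JIF}_Y = \frac{C_Y}{N_{Y-1} + N_{Y-2}}
\]
```

Because citation distributions are long-tailed, the mean sits well above the median, so most articles in a journal receive fewer citations than its JIF would suggest – which is precisely why publishing the distribution itself is more informative than the headline number.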

The community as a whole is still wrestling with the difficulty of balancing the quantitative and qualitative aspects of assessment. The rapidly increasing scale of the research ecosystem, the diversification of outputs, and the differing norms that prevail across disciplines give quantitative indicators a lasting appeal. They seem to simplify the task of assessment, particularly for those operating outside the comfort zone of their own expertise, and especially during initial triage when large numbers of applications need to be evaluated. These are knotty and enduring problems that will require widespread culture change, but that change will only come about if it is lubricated by feasible and credible improvements to process.

DORA very much wants to help the community in this endeavour and is continuing to collect examples of good practice in research assessment from various organisations; please get in touch if you know of innovations that are worth sharing. In addition, we are keen to support and organise workshops aimed at exploring workable solutions. In December 2018, DORA hosted a session at the ASCB|EMBO Meeting to better understand time-efficient alternatives to the JIF that reviewers can adopt when assessing faculty candidates and grant applications. At the AAAS meeting in February 2019, we ran a session to explore strategies for tackling bias in assessment, which can arise from both the quantitative and qualitative information used to compare people.

These resources may not yet provide all the solutions to the challenges of research assessment, but they demonstrate how actively DORA is working to promote real change. DORA is certainly down on misuse of the impact factor. But it is also shot through with positive and pragmatic intent to lead us to a world where achievements in research are recognised and rewarded in ways that are holistic, robust, and fair.

Stephen Curry is a professor and assistant provost at Imperial College London. He is also the chair of the DORA steering committee.
