You’ve signed DORA, now what?

Individuals and organizations sign DORA not just to indicate their support for its principles, but to implement real improvements in their policies and practices for research assessment.

For the past five years, the Declaration on Research Assessment (DORA) has been a beacon illuminating the problems caused by excessive attention to journal metrics and pointing the way to improvements that can be made by all stakeholders involved in evaluating academic research and scholarship. It has attracted no fewer than 13,000 individual and 880 organizational signatories, and those numbers continue to grow. Researchers, funders, universities and research institutes, publishers and metrics providers have all committed – at a minimum – not to “use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”

However, signing the declaration is just the first step toward changing practice in research assessment, especially considering how deeply rooted journal metrics have become, and DORA is here to help the community move forward. We realize this can be a tricky process to negotiate, especially for organizational signatories. The roadmap we published earlier this year emphasizes that after signing:

The next and more challenging steps require changes in academic culture and behaviour to ensure that hiring, promotion, and funding decisions focus on the qualities of research that are most desirable – insight, impact, reliability and reusability – rather than on questionable proxies.

The particular next steps that are needed will vary by geography, discipline and the nature of your organization. Initial guidance on these for different actors is provided in the more specific recommendations of the Declaration:

– Funding agencies and institutions: For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice. (DORA Recommendations nos. 3 and 5)

Another valuable step is transparency, so that academics know how they will be evaluated, and so that committees understand how they are expected to conduct evaluations. It is important to explore new approaches to assessment, and with greater transparency around the effectiveness of these approaches, improvements will spread more rapidly. Finally, consider the composition of the committees that perform evaluations and whether a more diverse team (for example by including researchers who are earlier in their careers) would assist in achieving fairer outcomes.

– Publishers: Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g. 5-year impact factor, EigenFactor, SCImago, review and publication times) that provide a richer view of journal performance. (DORA Recommendation no. 6)

Further actions could include adoption of CRediT – the Contributor Roles Taxonomy – to more clearly identify and recognize the work of each author, the use of ORCID iDs, and support for the Initiative for Open Citations (I4OC).

– Metrics providers: Be open and transparent by providing data and methods used to calculate all metrics. (DORA Recommendation no. 11)

– Researchers: Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs. (DORA Recommendation no. 18)

Be a leader – talk to your colleagues and advocate for change. Highlight the breadth of your own contributions on your lab pages, via resources like ORCID, and in applications for jobs and funding. As an author, reviewer, or editor, do not support journals that use JIFs as a marketing tool, and resist any request to consider journal metrics in decisions on manuscripts.

As the above examples show, there is no one-size-fits-all solution. In reforming research evaluation, different organizations may be seeking not just to break away from over-reliance on crude metrics, but also to achieve other goals; for example, to reduce bias in hiring or promotion processes, or to provide stronger incentives for collaborative work or the adoption of open science practices. But you do not have to start from scratch. On our website, you can explore examples of good practice that we have collected, primarily – but not exclusively – from our signatories. This is a good starting point when thinking about how best to direct your efforts to change research assessment practices at your organization (see box for example).

We will continue to add to this resource, and we encourage signatories not just to publish on their own websites what they have done to change their policies and practices in line with the principles of DORA, but also to share those changes with us. This is an excellent way for signatories to showcase their changing approach to research assessment, not only to inform and empower staff within their organization, but also to inspire other institutions across the scholarly community.

Ultimately, given the international nature of research and scholarship, effective culture change will only take root if reform of evaluation methods occurs on a global scale. To advance this effort still further we are also keen to hear about case studies and examples of good practice from beyond the DORA community. Real change matters to us much more than adding to the tally of signatories.

Example of good practice: the Department of Cell Biology at the University of Texas Southwestern Medical Center made sweeping changes to its hiring procedure when the chair, Dr. Sandra Schmid, decided to stop hiring by committee and instead opted to include the entire faculty in the process. Any faculty member can select applicants for a first-round Skype interview based on the research contribution summary in their cover letter. When deciding the short-list of candidates to invite for on-campus interviews, each candidate has a designated faculty member advocating for them, which helps keep the discussion balanced.

We know that establishing a globally supportive environment for effective reform of research evaluation will take time. But we are encouraged by the renewed impetus for change that we have seen recently from learned societies, from funders, from universities and from publishers. Our hope is that DORA’s signatories will take a leading role in these endeavors, setting high standards, and working collaboratively to improve research evaluation. Ultimately, we believe these efforts will pay significant dividends for researchers and for research.

To tell us about how research assessment is changing at your institution, and how colleagues are being encouraged to reduce their reliance on Journal Impact Factors, please contact us. We are particularly interested to hear how institutions have worked through the practical details of implementing process and cultural changes.
