This document provides guidance on several indicators (sometimes called metrics) used in research assessment: the Journal Impact Factor and other journal-level measures, citation counts, the h-index, field-normalized citation indicators, and altmetrics. Five principles guide the use of these metrics: be clear, be transparent, be specific, be contextual, and be fair.
To support further understanding of the complexities and potential problems of such indicators, DORA produced a guidance document that applies the principles underlying its original declaration to other quantitative indicators sometimes used in research evaluation, including the h-index, citation counts, and altmetrics.
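To illustrate how mechanical such indicators are, the h-index is simply the largest number h such that h of a researcher's papers each have at least h citations. The guidance document itself contains no code; the following is a minimal, illustrative sketch of that calculation, with the citation counts invented for the example.

```python
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still has enough citations to count toward h
        else:
            break
    return h

# Hypothetical example: six papers with these citation counts give an h-index of 4.
print(h_index([25, 8, 5, 4, 3, 0]))  # -> 4
```

The point of the example is that the single number discards almost everything about the work itself, which is why the principles below matter.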
Because no indicator can capture the complexities of research quality in one number, this document describes five principles that can help prevent indicators from being misused in assessment.
- Be clear.
- Be transparent.
- Be specific.
- Be contextual.
- Be fair.
The guidance avoids simple recommendations such as “do or do not use this indicator.” Instead, it describes each indicator's limitations to help bring nuance to the application of quantitative measures in assessment.
The guidance was developed by a DORA working group led by Professor Stephen Curry, former Chair of DORA. We are very grateful for the feedback provided by an international group of bibliometricians and practitioners in research assessment during its preparation. We would welcome any further comments that might help refine the guidance.
The guidance is intentionally short and accessible, and does not assume extensive knowledge of bibliometrics.
The guidance is available on the DORA website, Zenodo, and Google Docs; a summary is provided as a PowerPoint slide deck.
Produced by the DORA Research Assessment Metrics Task Force: Ginny Barbour; Rachel Bruce; Stephen Curry; Bodo Stern; Stuart King; Rebecca Lawrence.