DORA’s ultimate aim is not to accumulate signatures but to promote real change in research assessment. One of the keys to this is the development of robust and time-efficient ways of evaluating research and researchers that do not rely on journal impact factors. We are keen to gather and share existing examples of good practice in research assessment, including approaches to funding and fellowships, hiring and promotion, and awarding prizes, that emphasize research itself and not where it is published.
If you know of exemplary research assessment methods that could provide inspiration and ideas for research institutes, funders, journals, professional societies, or researchers, please contact DORA.
Académie des Sciences, Leopoldina and Royal Society
Three national academies issued a statement on good practice in the evaluation of researchers and research programs in October 2017. The statement recognizes the need for efficient, fair, and robust researcher evaluation, especially as the size of the research community continues to increase. It notes that research articles of varying quality are published in any given journal, and says that “the main criteria [for evaluation] must be the quality, originality and importance of the scientific research.”
Impact factors of journals should not be considered in evaluating research outputs. Bibliometric indicators such as the widely used h-index or numbers of citations (per article or per year) should only be interpreted by scientific experts able to put these values in the context of each scientific discipline. The source of these bibliometric indicators must be given, and checks should be made to ensure their accuracy by comparison with rival sources of bibliometric information. Bibliometric indicators should be considered only as auxiliary information to supplement peer review, not as a substitute for it. Their use for early-career scientists in particular must be avoided: it tends to push scientists who are building their careers into well-established or fashionable research fields, rather than encouraging them to tackle new scientific challenges.
American Society for Cell Biology
The idea for the San Francisco Declaration on Research Assessment began at the 2012 ASCB Annual Meeting. As one of the initial DORA signatories, the ASCB does not promote its publications using the Journal Impact Factor, and it has implemented a policy prohibiting promotional materials distributed by the society at its annual meeting from advertising journal impact factors.
From the ASCB Media Kit
Reflecting its deep concern about the widespread misuse of journal impact factors to evaluate the outputs of scientific research, ASCB will not permit advertisement of journal impact factors in any of its publications or in promotional materials that it distributes on behalf of exhibitors at the ASCB Annual Meeting.
Federation for the Humanities and Social Sciences | Fédération des sciences humaines
The Federation released a report in 2017 to support the ongoing conversation in Canada about the assessment of research impact in the Humanities and Social Sciences (HSS). It includes five recommendations for approaching assessment:
- Define impacts broadly
- Use diverse and flexible sets of indicators, including qualitative and quantitative methods
- Ensure researchers play a leading role in describing the impacts of their work, in collaboration with research partners and users
- Assess collective impacts
- Develop institutional supports to enable effective impact assessment
Finding that the impacts of HSS scholarship are highly diverse, the report defines impact as “the influence scholarly and creative inquiry has upon wider society, intended as well as unintended, immediate as well as protracted.” This diversity requires a flexible assessment approach. To help the community understand how the recommendations can be applied, case studies in the report show how they were used to address a range of assessment challenges.