The Latin American Forum on Scientific Evaluation (FOLEC) is a regional space for debate on the policies and practices of scientific evaluation, as well as the long-term implications of evaluation processes. Through FOLEC, the Latin American Council of Social Sciences (CLACSO) seeks to change the system by recognizing and rewarding open, common, and public-domain knowledge and its connection with democratizing and sustainable models of science that are committed to solving societal problems.
The Wellcome Trust has published draft guidance for the research organisations it funds on implementing the core principles of DORA. The guidance aims to help organisations develop and adopt meaningful changes to research assessment practices that earn the buy-in and trust of their staff, and to encourage them to be proactive in reporting their progress and sharing what they learn.
The Dutch Research Council (NWO) is piloting a narrative CV format in the Veni scheme, its major funding instrument for early career researchers. The format is designed to showcase diverse types of talent and to encourage assessment based on quality rather than quantity.
Every author makes a unique contribution to a piece of work, but those contributions cannot be discerned simply by looking at the author list. For the increasingly rare single-author publication, it is clear who contributed what to the article. However, multi-author publications are now the norm, and in these cases it is far less clear who did what.
Many journals play a significant role in regional academic communication in Latin America. The research they publish has profound societal impacts that improve the quality of life in their local communities. We fear these journals are at risk of disappearing because their sustainability increasingly depends on where they are ranked within Web of Science or Scopus.
As a graduate student, I signed DORA to speak out against the misuse of the impact factor. Even with my career still ahead of me, I knew that something about the way research was being evaluated in hiring, promotion, and funding decisions needed to change. I wanted to be evaluated based on the quality of my research, how it was re-used by my colleagues around the world, and how others shared and discussed it online.
Preprints are good for science and the evaluation of scientists. They remove barriers to the dissemination of work among the scientific community, promoting earlier error detection, increased feedback and the potential for collaboration, faster transfer of ideas among labs and fields, and good practices in evaluation (reading a paper rather than making a snap judgement based on where it’s published).