There is no magic bullet for comprehensive and efficient research assessment, whether in the humanities or in STEM fields. Publisher prestige currently influences tenure assessments in the humanities, much as journal names do in the life sciences.
DORA community interviews provide an opportunity for supporters to discuss innovation in research assessment and to better understand how to initiate effective change in local communities. This time we are talking with Janet Halliwell about the 2017 report from the Federation of Humanities and Social Sciences in Canada that looks at assessing impact in those disciplines.
Researchers should not be evaluated based on factors outside of their control. While hiring, promotion, and tenure decisions in academia are in principle based on the merit of scholarly contributions, implicit biases influence who we hire and who we promote. Even when they are unintentional, biases have consequences. At the AAAS meeting in Washington, DC, this February, we explored approaches to addressing bias and increasing the diversity of the academic workforce in the session “Academic Research Assessment: Reducing Biases in Evaluation.”
Across the world, institutions and funders experience similar challenges in hiring, promotion, and funding decisions. How do we define research quality? What criteria should be included in researcher assessments? While some challenges might be shared, the academic ecosystem can vary from region to region.
In many ways, 2018 was a groundbreaking year for the San Francisco Declaration on Research Assessment (DORA). It marked the fifth anniversary of the declaration’s release, and we were re-invigorated with a newly formed steering committee, chaired by Prof. Stephen Curry, and a community manager, Dr. Anna Hatch, all determined to effect real change in the scholarly community.
DORA community interviews provide an opportunity for supporters to discuss good practices in research assessment and to better understand how to initiate effective change in local communities. Our first interview of 2019 is with the Open Access Advisor at CLACSO, Dominique Babini, and the Executive Director of Redalyc, Arianna Becerril García.
For the past five years, the Declaration on Research Assessment (DORA) has been a beacon illuminating the problems due to the excessive attention paid to journal metrics and pointing the way to improvements that can be made by all stakeholders involved in evaluating academic research and scholarship. Researchers, funders, universities and research institutes, publishers and metrics providers have all committed – at a minimum – not to “use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”
As 2018 comes to a close, we reflect on the past year and the progress the community made to improve the way we evaluate research and researchers.
All authors make unique contributions to a piece of work, contributions that cannot be discerned simply by looking at an author list. For the increasingly rare single-author publication, it is clear who contributed what to the article. But multi-author publications are now the norm, and in those cases it is far less clear who did what.
In academia, the bar for success continues to get higher for early-career researchers who are looking for faculty positions and grant funding. On November 4, we sat down with Prof. Christopher Jackson from Imperial College London to hear his perspective on the incentive structure in academia and what we can do better.