There is certainly no magic bullet when it comes to comprehensive and efficient research assessment, whether in the humanities or in STEM fields. Publisher prestige currently influences tenure assessments in the humanities, just as journal names do in the life sciences.
Breaking habits: reducing bias in hiring, promotion, and tenure decisions
Researchers should not be evaluated based on factors outside of their control. While hiring, promotion, and tenure decisions in academia are in principle based on the merit of scholarly contributions, implicit biases influence whom we hire and whom we promote. Even when they are unintentional, biases have consequences. At the AAAS meeting in Washington, DC, this February, we explored approaches to addressing bias and increasing the diversity of the academic workforce in the session “Academic Research Assessment: Reducing Biases in Evaluation.”
2018 in review—the top 10 advances in research assessment
As 2018 comes to a close, we reflect on the past year and the progress the community made to improve the way we evaluate research and researchers.
Incorporating open access activities as explicit criteria in researcher assessment
Implicit evaluation criteria can heavily influence researchers’ decisions. But as a global academic community, we can reconcile our priorities with how we evaluate researchers by modifying the standards we use to assess them.
#sfDORA Interviews: Research Assessment and Academia
To mark DORA’s fifth anniversary, we are celebrating with an interview series that focuses on implementing good practices in research assessment. We are pleased to announce our fourth interview is with Prof. Christopher Jackson, Equinor Professor of Basin Analysis, Department of Earth Science & Engineering at Imperial College. He will answer questions about how researchers can help change the culture of research assessment. Participants will have the opportunity to ask questions at the end of the interview.
#sfDORA Interviews: Research Evaluation and Publishing
In celebration of DORA’s fifth anniversary, we are hosting a four-part interview series that focuses on the implementation of good practices in research assessment. We are pleased to announce our third interview is with Sir Philip Campbell, Editor-in-Chief of Springer Nature. He will answer questions about the role of publishing in research evaluation. There will be an opportunity for participants to ask questions.
#sfDORA Interviews: Research Evaluation and Funding Decisions
We are pleased to announce our second interview is with Dr. Shahid Jameel, Chief Executive Officer of the Wellcome Trust/DBT India Alliance. He will answer questions about his experience with research evaluation in the life sciences and about making funding decisions at the India Alliance. Participants will have an opportunity to ask questions at the end of the interview.
Simple Questions, Big Insights: Charité Uses Bio-sketch Questions to Recruit Faculty
Charité asks candidates not just to list their top papers, but also to explain their role in each one and justify why each constitutes a significant advance in research or clinical practice. This helps committees see beyond brand-name journals to the specific skills individuals offer. The portal also requires applicants to provide a short summary describing how they have pursued the objectives of Open Science, including preregistration of studies and publication of negative results. Applicants additionally outline important collaborative projects and interactions with stakeholders in industry, patient care, and policy. Questions like these provide insight into individuals’ abilities and personal context, which neither publication lists nor journal impact factors can provide.
DORA Celebrates Five Years!
To celebrate a successful five years, we are kicking off a live interview series to hear firsthand from individuals who are improving research assessment in their communities.
February Debut: DORA Is Represented at Peer Review Meeting
Publishing is a significant part of a researcher’s career and essential for scientific success. However, biases built into peer review and research assessment create an uneven playing field.