As 2018 comes to a close, we reflect on the past year and the progress the community made to improve the way we evaluate research and researchers.
Implicit evaluation criteria can heavily influence researchers’ decisions. But as a global academic community, we can bring our evaluation practices in line with our priorities by changing the standards we use in research assessment.
Charité asks candidates not just to list their top papers, but also to explain their role in each one and to justify why each constitutes a significant advance in research or clinical practice. This helps committees see beyond brand-name journals to the specific skills individuals offer. The portal also requires applicants to provide a short summary describing how they have pursued the objectives of Open Science, including preregistration of studies and publication of negative results. Applicants additionally outline important collaborative projects and interactions with stakeholders in industry, patient care, and policy. Questions like these provide insight into individuals’ abilities and personal context that neither publication lists nor journal impact factors can offer.
To celebrate a successful five years, we are kicking off a live-interview series to hear firsthand from individuals who are improving research assessment in their communities.
Publishing is a central part of a researcher’s career and essential for scientific success. However, biases built into peer review and research assessment create an uneven playing field.