Research evaluation has always been complex. Every scholarly discipline is different and requires its own approach to assessment. But finding clear and objective ways of comparing candidates is a challenge for every field.
Ulrich Dirnagl, founding Director of the QUEST Center for Transforming Biomedical Research at the Berlin Institute of Health and Director of Experimental Neurology at Berlin’s Charité University Hospital, used Twitter to highlight a new applicant portal that is being used in the hiring process for all professorships at Charité. Applicants now answer questions related to a variety of scholarly outputs including open science, team science, and stakeholder engagement.
To describe scientific contributions, Charité asks candidates not just to list their top papers, but also to explain their role in each one and justify why each constitutes a significant advance in research or clinical practice. This helps committees see beyond brand-name journals to the specific skills individuals offer. The portal requires applicants to provide a short summary describing how they have pursued the objectives of Open Science, including preregistration of studies and publication of negative results. They also outline important collaborative projects and interactions with stakeholders in industry, patient care, and policy. Questions like these provide insight into individuals’ abilities and personal context, which neither publication lists nor journal impact factors can provide. Additionally, to enable a fair comparison of candidates with varying levels of familial responsibility, applicants can share how much time they have spent away from work caring for family.
Asking these types of questions, however, is only half of the battle. Whether a committee actually weighs all these factors in its final decisions is hard to know. But what is possible, Dirnagl says, is to ensure that committee discussions do not focus too much on any one aspect of an application. That is why Miriam Kip, Good Evaluation Practice Officer at the QUEST Center, now sits in on hiring deliberations as a neutral party—to encourage balanced discussions for fair evaluations.
There is a belief, Dirnagl reiterates, that faculty recruitment on the basis of publications in high Journal Impact Factor (JIF) journals is critical for ensuring that an institution will continue to receive attention for similar publications, which in turn ensures the renewal of grants. Some research institutes therefore fear changing internal systems of evaluation while external systems of funding remain aligned around the JIF. Dirnagl believes that finding ways to improve hiring, and incentives more generally, is key to changing the biomedical enterprise.
Changing systems of hiring or reward presents many challenges, and Dirnagl says, “our intention cannot be to completely change the way it is done now … and if we tried to, we’d get a lot of push-back.” Practices are ingrained in institutional culture, and alterations depend on approval from numerous parties. It took the better part of a year for Dirnagl and his team at the QUEST Center to go from creating the application portal to being allowed to implement it. They had the strong support of Charité’s dean, Prof. Dr. Axel Radlach Pries, which proved helpful in gaining buy-in from the research commission and, finally, an approval vote from the governing board of the medical faculty last autumn.
Gaining allies in the pursuit of research evaluation reform is critical. “We don’t want to alienate people; we need to bring them in, explain what we are doing,” Dirnagl emphasizes. Moreover, empirical evidence that any given change will produce the desired effect is still lacking—something Dirnagl hopes to change soon. He sees the work of institutional change, and the meta-research the QUEST Center conducts about it, as part of a “behavioral change experiment, where our laboratory is the Charité,” he says. “There are so many screws you can turn, and in the end, some will push it in the right direction, and some will push it in the wrong direction. We are trying to make this black-box system a little bit more transparent. And we hope other institutions will follow.”
Helen Sitar is a Science Policy Programme Officer at EMBO.