By Haley Hazlett, PhD Candidate, Dartmouth College
For the past eight years DORA has advocated that research institutions reevaluate their research assessment practices for recruitment, promotion, and funding decisions. To inform the evaluation of scientific productivity, DORA encourages the use of explicit criteria beyond popular bibliometrics like the Journal Impact Factor (JIF) or h-index. These criteria include a range of output measures, such as the generation of new software and datasets, research impact on a field, transparency, training of early-career researchers, and influence on policy.
It is encouraging to find that academic institutions have taken steps to improve research assessment practices in their publicly available protocols for hiring and promotion. Small but important changes can be seen in these protocols: requiring applicants to exclude JIF from their CVs; requesting that faculty promotion committees pay attention to how the candidate has progressed in scholarship and teaching, in addition to research; and requesting that faculty promotion committees review the DORA document “Rethinking Research Assessment: Ideas for Action.” Access to these protocols gives graduate students the opportunity to see how their institution is working to improve research culture through academic assessment, specifically in hiring and promotion. It also allows students within an institution to compare their experiences against purported institutional priorities. In doing so, students may find a disconnect between institutional standards of research assessment and how research assessment is actually being taught.
Much of graduate student education on research assessment is based on independent observational learning. Whether they intend to or not, students learn which aspects of research to value by observing the conduct of faculty and senior peers. Journal clubs, in which students select a primary research paper to discuss with their peers, are an important aspect of graduate school in the biomedical sciences. Journal clubs are meant to teach students how to critique and assign merit to research. Faculty members often attend journal clubs to offer professional insight, guide the discussion, and teach students how to assess the strengths and weaknesses of the paper. Yet, despite institutional protocols for research assessment in hiring and promotion, students are likely familiar with faculty members in journal club repeating this refrain: “Why was this paper published in Nature? Does this paper deserve to be in Science?” Discussion questions like these are commonplace, even though they are predicated on the assumption that quality correlates with JIF. Lessons about critical assessment of research quality are thus taking place within the context of impact factor, and students quickly learn that impact factor and prestige are important output indicators. This leads to habits that persist throughout a student’s career: choosing which journal to submit a manuscript to based on JIF; dismissing research published in low-JIF, open access, or “specialty” journals; and assuming that research published in high-JIF journals is valuable without critically considering the quality of the research itself.
While these experiences may seem anecdotal, studies published in 2010 and 2016 found that graduate students and faculty perceive JIF as a critical metric by which their scientific contributions are measured. This is no accident. Despite efforts over the last decade to effect change in research assessment practices, the current research culture is still teaching the next generation of scientists to place high value on JIF and journal prestige. If an institution is genuinely attempting to change its academic assessment practices, then it needs to start at the source: the next generation of faculty. Nothing will truly change if early-career faculty internalize an antiquated veneration for JIF and journal name recognition.
Faculty members can take several actions to improve graduate student education on research assessment. They can teach by example, encouraging students to focus on the merits of the work itself rather than the journal in which the work is published. In addition to refocusing the framework of the journal club, faculty members should actively correct student assumptions about research quality based on JIF and emphasize that journal reputation should not be used as a proxy for the quality of an individual article within that journal. An excellent illustration of this lesson is the number of research papers that were rejected from high-JIF journals before going on to win the Nobel Prize.
Mentors should encourage their graduate students to establish or join journal clubs that specifically work to assess research quality while remaining impartial to journal reputation or JIF. Journal clubs like ReproducibiliTea focus on encouraging students to assess research quality and reproducibility. PREreview facilitates preprint journal clubs that allow students to learn about the newest discoveries in their field while developing critical research assessment and peer-review skills. Many graduate students feel pressured, either externally or internally, to spend every spare moment working in the laboratory. Institutions that are serious about improving research culture should make it clear that mentors need to support their students by allowing them time for such journal clubs.
The reward for these efforts will be a generation of young scientists with a deeper and more nuanced understanding of scientific productivity. These young scientists will carry that knowledge into grant review boards, faculty search committees, and faculty promotion committees. But this will not happen unless graduate students are taught to assess research quality through critical peer review rather than journal-based indicators. Without that change, current metrics of research assessment will persist, regardless of what institutional protocol says.
Guest blog posts reflect the opinions of the authors and not necessarily those of DORA.