Background Reading

Driving Institutional Change for Research Assessment Reform

October 21 – 23, 2019

1. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

McKiernan EC, et al. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations (2019).

To better understand how research institutions use the journal impact factor (JIF) in researcher evaluations, Erin McKiernan and colleagues analyzed review, promotion, and tenure (RPT) documents from universities across the United States and Canada. Table 1, shown below, summarizes their key findings. Overall, 23% of the universities sampled mention the JIF, a figure that rises to 40% for research-intensive (R-type) institutions. Of the institutions that mention the JIF, 87% had at least one supportive mention of its use for evaluation purposes, and none of the mentions prohibit its use.

The authors found inconsistencies in RPT documents between academic units of the same university. For example, one unit may caution against the use of the JIF while another is supportive of it. There is also variability in what institutions believe the JIF measures: in addition to impact, it is used as a proxy for quality, importance, significance, prestige, reputation, and status.

The analysis was limited to terms highly similar to JIF. It did not include indirect but probable references to the JIF, such as top-tier journal, prominent journal, or leading journal. This study provides a benchmark against which future studies can compare how the JIF is used.

2. Bias against novelty in science: a cautionary tale for the users of bibliometric indicators

Wang J, Veugelers R, and Stephan PE. Bias against Novelty in Science: A Cautionary Tale for Users of Bibliometric Indicators (2017).

Research assessment practices that rely on citation counts and journal impact factors may inadvertently discourage academics from pursuing explorative, novel research approaches. In this study, Wang and colleagues dissect how novelty affects research impact.

Novel approaches can lead to substantial scientific advances, but not always: novel papers have both a higher mean and a larger variance in their citation performance, illustrating the risky nature of explorative research. Despite this uncertainty, the authors find that novel approaches can drive significant scientific progress. Highly novel research articles are more likely:

  • to be a top 1% highly cited paper over time,
  • to result in highly cited follow-up studies,
  • and to be cited by a wider set of disciplines.

However, recognition takes time. Novel papers are less likely to be top cited over short citation windows. Highly novel papers become significantly more likely to be top cited only after four years (nine years for moderately novel papers), well outside the two-year period over which the JIF is calculated. The time required to realize the impact of novel research is thus often incompatible with the time frame of faculty appointments or review, promotion, and tenure (RPT) decisions.

The authors show that novel papers are more likely to be published in journals with impact factors that are lower than expected. Current research assessment policies and practices favor individuals who publish their work in high impact factor journals, which could discourage innovation.



3. Aligning practice to policies: changing the culture to recognize and reward teaching at research universities

Dennin M, et al. Aligning Practice to Policies: Changing the Culture to Recognize and Reward Teaching at Research Universities (2017).

Scholarship is not limited to publishing papers. Many university promotion and tenure policies include language that labels teaching as a valuable component of scholarship, but often without clear criteria for how contributions to teaching should be assessed beyond student evaluations, which are known to be biased against women and underrepresented groups. In many cases, research excellence is perceived to compensate for weaknesses in teaching and academic service.

Here, the authors suggest general guidelines for the evaluation of teaching and present three approaches that universities are taking to evaluate and reward teaching:

  • Three Bucket Model of merit review (University of California, Irvine; Figure 1)
  • Evaluation of Teaching Rubric (University of Kansas)
  • Teaching Quality Framework (University of Colorado, Boulder)

4. The evaluation of scholarship in academic promotion and tenure processes: past, present, and future

Schimanski LA and Alperin JP. The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future (2018).

In this review, Schimanski and Alperin name three pressing challenges in review, promotion, and tenure:

  • The increased value placed on research comes at the expense of teaching and service contributions.
  • Proxy measures of prestige are often used as shortcuts to measure the quality of a research article.
  • The integration of article-level metrics into RPT processes is too slow.

They also identify a number of other issues. Evaluation for RPT is typically divided into research, teaching, and service. Service is the least valued of the three: it cannot compensate for the other two, and it diverts time away from research, the contribution valued most. Faculty from underrepresented groups often spend more time serving on committees to help fulfill university diversity goals, which can put them at a disadvantage in RPT. Women, too, are more likely to spend time on service to the detriment of their careers.

Another challenge is the level of detail provided in RPT guidelines. Ambiguous language allows a certain degree of flexibility, but it comes at a cost: candidates may not always be assessed by the same standards.


5. How significant are the public dimensions of faculty work in review, promotion and tenure documents?

Alperin JP, et al. How significant are the public dimensions of faculty work in review, promotion, and tenure documents? (2019).

Public dimensions of faculty work are often neglected in RPT guidelines, despite regular references to terms and concepts related to “public” and “community.” While 87% of institutions mention “community” and 75% mention “public,” these terms do not typically point to the public dimensions of faculty work. For example, “community” is often used in reference to the academic community.

Furthermore, terms and concepts related to “public” and “community” are associated with academic service, which is not a valued aspect of RPT. And the highly valued traditional outputs like research papers largely disregard the public dimensions of research. How we measure the success of research papers is at odds with the public dimensions of faculty work too. Traditional bibliometric indicators, including journal impact factors, encourage uptake within a scholarly discipline rather than the broader community.

RPT guidelines can discourage researchers from publishing their work in open access venues. Here, the authors found the term “open access” is explicitly mentioned by 5% of the institutions sampled, and all of the mentions are cautionary.


6. Strategy for culture change

Nosek B. Strategy for culture change (2019).

Culture change requires a comprehensive strategy. Efforts aimed at changing the behavior of individuals can fall flat if social and cultural systems are not addressed. These systems guide individual behavior, so if systems do not change, behavior won’t either. In this blog, Brian Nosek describes the strategy the Center for Open Science is taking to change the research culture to promote openness, integrity, and reproducibility. It requires five levels of progressive intervention (see image below). This strategy and others can guide our thinking about research assessment reform.

7. Fewer numbers, better science

Benedictus R, et al. Fewer numbers, better science (2016).

In this comment, Rinze Benedictus and Frank Miedema describe the strategy the University Medical Center (UMC) Utrecht used to develop research assessment policies that focus on scholarly contributions rather than publication counts. Community engagement was key to cultivating culture change at the university: researchers helped define the new assessment standards through a series of internal discussions to which PhD students, principal investigators, professors, and department heads were all invited. Reports from these meetings, along with additional interviews, were published on UMC Utrecht’s internal website to engage the wider academic community. But change takes time. Once the discussion phase was complete, policies were developed over the following year based on the input received. A few years on, the contribution-focused approach was used in a formal institutional evaluation, and portfolio-based assessment has become standard for professors and associate professors.