Narrative CVs: How do they change evaluation practices in peer review for grant funding?

Judit Varga & Wolfgang Kaltenbrunner, Research on Research Institute (RoRI); Centre for Science & Technology Studies, Leiden University.

Contact: w.kaltenbrunner@cwts.leidenuniv.nl

This blog post reports some preliminary findings from a project designed to investigate the evaluative use of narrative CVs. When funding organizations convene review panels to assess grant applications, reviewers need to agree on what constitutes “quality” and how to compare different applicants and their submissions. Historically, reviewers have tended to facilitate this process by relying on quantitative metrics, such as the number of publications or the prestige of the journals and institutions associated with an applicant. These indicators, theorized as “judgment devices” by researchers such as Musselin (2009) and Hammarfelt & Rushforth (2017), reduce the complexity of comparing candidates and their suitability to carry out projects by breaking the decision down into a simpler, quantitative comparison.

However, there is growing concern that relying too heavily on these traditional markers might do more harm than good. By focusing on numbers and proxies for academic prestige, reviewers may lose sight of an applicant’s achievements and the quality of their work in a broader sense. Narrative CVs are designed to encourage reviewers to consider a candidate’s achievements and competence in suitable detail and in the context of their proposed projects. At the same time, very little is known about the practical effects and real-world use of narrative CVs by reviewers in funding panels.

To remedy this, researchers at the Research on Research Institute (RoRI) have co-designed a research project in collaboration with the Dutch Research Council (NWO), the Swiss National Science Foundation (SNSF) and the Volkswagen Foundation. The project is ongoing and draws mainly on participant observation of consecutive review panel meetings as well as interviews with reviewers. The quotes presented in this blog post document discussions within peer review panel meetings at NWO and SNSF.

Multiplying forms of excellence

Our findings so far suggest that the introduction of narrative CVs can trigger debates about the nature of scientific excellence in review panels. We encountered multiple moments where the narrative CV format prompted reviewers to gradually broaden the range of achievements they valued in applicants, partly depending on the outlook of the respective project proposals. In one representative situation, a reviewer in the SNSF sciences panel expressed surprise that the applicant had foregrounded the collaborative nature of their work in the CV instead of focusing on publications. The reviewer initially scored the applicant lower as a result:

“One surprising thing about achievements [the narrative aspects of the SNSF CV]: there are significant ones, from the postdoc there are publications, but somehow [they are] not described as an achievement, I was wondering why (…) Maybe the candidate thought it was better to emphasize other achievements, like the collaborative nature of work, but anyway this is why I gave [a lower score].”

Later, following a discussion among panelists about how to interpret the proposal and the submitted narrative CV, another reviewer began to explicitly characterize the applicant as a ‘team player’. A subtle but important shift appeared to have taken place in the evaluative reasoning: rather than assessing the applicant against a singular default ideal of a scientist whose standing can be inferred from the quantity of high-impact publications, a reviewer introduced a frame of reference in which collaborative qualities and the ability to facilitate joint work in a laboratory context were legitimate criteria. The question then became: is this the right profile for the proposed grant and research project?

“In conclusion, a strong candidate, I was a bit too harsh, especially on the project […] the profile is a bit ambiguous to me, but I’m happy to raise [the points] […] I think the candidate is a team player in the lab, you can interpret it positively or negatively but that’s the profile.”

This situation is a particularly clear example of a dynamic that we recurrently observed throughout all of our case studies, namely a gradual pluralization of the notion of excellence over the course of the panel meetings.

Resistance

The above example illustrates evaluative learning in the form of rethinking publication-centric assessment criteria in light of narrative CVs and related guidelines. Yet on a number of occasions, some reviewers explicitly doubled down on those more ‘traditional’ criteria. For example, NWO instructed reviewers not to name the specific journals in which an applicant had published, to prevent them from inferring the quality of a publication from the perceived prestige of its venue. In line with this, in the first review round of the NWO social sciences panel, reviewers did not mention any journals by name. Yet in the second round, one reviewer invoked journal prestige twice when assessing two different applicants:

“As for the applicant, I can’t judge the quality of publications, but the applicant published in Nature Genetics, maybe someone can tell me if it’s good but “everything with Nature sounds very good to me” [laughs a bit], I was very impressed with the candidate.”

When discussing another applicant, the same reviewer again referred to the applicant’s publications in prestigious journals as a proxy for their quality:

“Quality candidate. 5 publications in Nature Scientific Reports and other prestigious journals. […]”

This comment sparked some confusion, as the other reviewers failed to locate the publication mentioned. After a while, the NWO program officer who helped chair the panel meeting cautioned that the perceived prestige of the publication venue should not be taken into account as a factor in the evaluation in the first place. Yet rather than dropping the point, the reviewer acknowledged this caution with a disapproving gesture and continued the effort to locate the publication in question.

We propose that, in order to make sense of such situations and devise practical strategies for handling them in future panel meetings, it is important to disentangle the different motivations reviewers might have for doubling down on publication-centric evaluation criteria, even when they are explicitly cautioned not to use them. Sometimes they might do so simply because they feel it makes sense in the context of a given application, for example for projects aiming primarily at traditional academic impact. Yet on other occasions, resistance to the narrative format might be better understood as a response to what reviewers perceive as an unjustified intervention by funders and other reform-minded actors. After all, narrative CV formats can be seen not simply as a well-intentioned attempt to improve the fairness of evaluative decision-making in peer review, but also as a threat to the autonomy of reviewers and academic communities to define notions of quality.

Story-telling skills as a new bias?

An important concern for many observers appears to be the emphasis narrative CV formats place on writing skills and on the ability or willingness to present oneself in the best possible light. These cultural competences and inclinations may be unequally distributed among different groups of applicants, for example to the disadvantage of applicants from working-class backgrounds, female applicants, or applicants from different cultural backgrounds. Yet discussions about the bias this may create typically focus solely on the applicant’s side of the process, and they implicitly presuppose that a highly positive self-representation is always a good thing. We instead found that reviewers may react negatively when they feel that applicants exaggerate their achievements: during panel meetings, reviewers repeatedly flagged such cases.

For example, in the social sciences panel of SNSF, a reviewer felt that an applicant had grossly overstated their achievements:

“[The Scientific_Chair reading the evaluation of a Reviewer]: [The Reviewer] had problems with the tone of the CV as well as the proposal, [they contained] self aggrandising statements about having invented a new field.”

In another situation, a different reviewer explicitly admitted to being “turned off” by an applicant who used similarly hyperbolic language in their narrative CV, noting that it was “not grounded in science.”

Conversely, a situation we observed in the natural sciences panel shows that reviewers do appreciate enthusiastic narration, provided the narrative remains credible:

“Reviewer: (…) also the description of academic career is credible and enthusiastic.”

In sum, whilst narrative CVs might require applicants to write more than traditional CVs do, this does not mean that reviewers will appreciate academic self-aggrandizement or inflationary rhetoric. Instead, it appears that narrative elements place new emphasis on the credibility of the relation between a peer’s biographical self-representation and their actual achievements, which we suggest requires continued study.

Conclusions

This blog post documents in equal measure the successes and challenges of narrative CV formats, as well as the need for more research on their practical use. Even on the basis of our preliminary observations, it is clear that narrative CVs do on many occasions stimulate productive reflection on the meaning of excellence in specific contexts, thus multiplying its meanings and perhaps contributing to challenging the terminology of excellence itself. We also feel that attention to nuance is crucial for understanding resistance to narrative CVs. Some forms of resistance might well provide input for the further development of narrative CV formats. Where resistance is more related to a (perceived) struggle over reviewer autonomy, a different type of response will be required – one that addresses questions of power relations between scientists, institutions and funders in a more explicit way. Our finding that reviewers tend to react negatively to self-aggrandizing language in narrative CVs in turn cautions us that evaluation reform is a moving target that can only be studied as it unfolds.

As should have become clear, narrative CVs are not an easy fix for peer review. Instead, they prompt reviewers and the institutions that introduce them to ask fundamental questions about the fairness and fit of quality criteria in peer review afresh. While not ‘efficient’ in a practical sense, we feel that this disruption to established routines of evaluative problem-solving is a crucial benefit in its own right.

The academic community can benefit from narrative CVs, particularly if the questions and complexities they raise are embraced as opportunities for discussion. For example, the findings presented in this blog post signal opportunities to further discuss notions of excellence, values in academic culture and governance, training for applicants in writing narrative CVs, and CV design in light of the potential biases the new format may introduce. However, this process requires careful management, drawing on curious and innovative ideas for academic futures; in its absence, given the time-constrained nature of review meetings and academic life, it can be all too easy to glide over the opportunities and challenges afforded by narrative CVs.

References

Hammarfelt, B. & Rushforth, A.D. (2017). Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation. Research Evaluation 26(3): 169–180.

Musselin, C. (2009). The Market for Academics. New York: Routledge.
