Cross-funder action to improve the assessment of researchers for grant funding

Funding organizations signal what is valued within research ecosystems by defining the criteria upon which proposals and applicants are assessed. In doing so, they help to shape the culture of research through the projects and people they support. Focusing on a limited set of quantitative criteria favors a narrow view of research and disincentivizes creativity and innovation. To counter this, many funding organizations are shifting to a more holistic view of research outputs and achievements, one that can be recognized in the grant evaluation process through narrative CV formats that change what is visible and valued within the research ecosystem. As communities of practice in research and innovation funding, the San Francisco Declaration on Research Assessment (DORA) funders' group and the Funding Organisations for Gender Equality Community of Practice (FORGEN CoP) partnered to organize a workshop on optimizing the use of narrative CVs, an emerging method that research funders are using to widen the range of research outputs recognized in research assessment.

DORA’s funder discussion group was launched in March 2020 to enable communication about research assessment reform. By learning from each other, funding organizations can accelerate the development of new policies and practices that lead to positive changes in research culture.

The focus on implementing narrative CV formats for grant funding grew organically within the discussion group. In 2019, the Royal Society in the United Kingdom introduced the Résumé for Researchers as an alternative to the traditional CV, recognizing the diversity of research contributions through a concise, structured narrative. At the same time, a handful of research funders began to experiment with the use of narrative CVs in grant evaluation. During the first meeting of DORA's funders group, the Dutch Research Council and the Swiss National Science Foundation shared information about their pilot projects to implement narrative CVs. At subsequent meetings in 2020 and 2021, the Health Research Board Ireland, the Luxembourg National Research Fund, and Science Foundation Ireland also shared updates on their implementation of narrative CVs.

FORGEN CoP, led by Science Foundation Ireland, was established in October 2019 and aims to share knowledge and best practice on gender equality in research and innovation funding. Mitigating gender bias within the grant evaluation process was identified as a primary area of focus.

Inequalities in the grant evaluation process are more prevalent when assessments focus on the researcher rather than the research. With the conversation shifting to the assessment of people, the discussion naturally led to the use of narrative CVs and how they are assessed. Narrative CVs can be used to move away from assessing productivity (quantity over time) toward assessing research achievements, focusing on the quality and relevance of those achievements in relation to the proposed research. They may also support researchers with non-linear career paths or career breaks by removing the focus on productivity, which can be affected by periods of leave for caregiving responsibilities. During subsequent meetings, members shared their experiences implementing narrative CVs and their assessment methods. In a special FORGEN CoP seminar on how funders could mitigate the long-term gendered impacts of the COVID-19 pandemic, narrative CVs were highlighted as a way to address potential gender inequalities and/or the reduction in productivity that some researchers experienced throughout the pandemic.

In September 2021, DORA and FORGEN CoP joined forces to address current knowledge gaps in the use of narrative CVs for grant evaluation and to improve their usefulness as a tool for responsible research assessment. We jointly organized a workshop, facilitated by Dr. Claartje Vinkenburg, which was convened as two events to enable global attendance. More than 120 participants from over 40 funding organizations and 22 countries joined us to explore strategies to mitigate potential biases of narrative CVs and to monitor their effectiveness in the grant funding process. During these events, we heard from organizations implementing narrative CVs and from experts who spoke about the influence of gender bias and language on research assessment. Researchers spoke to us about decision-making and process optimization for grant funding. In breakout sessions, we identified areas of alignment for funding organizations and the studies needed to gather evidence to improve the implementation of narrative CVs.

We are delighted to share a short report that captures what we learned from the workshop and identifies a course of action to improve narrative CVs as a tool for assessment. The workshop and report provide a foundation for the optimization of narrative CV formats, upon which we plan to build.

We compiled a list of resources leading up to and during the workshop to help us understand the opportunities and challenges of using narrative CVs for grant funding, such as how bias might influence their evaluation (see below for the list of resources). As narrative CVs can be used for purposes other than grant funding, including the hiring and promotion of research and academic staff, we foresee that this report and the collated resources will also be useful to the wider academic community.

Narrative CVs have been the focus of other international funders’ fora in tandem with the work of DORA and FORGEN CoP. The Swiss National Science Foundation and the Open Researcher and Contributor ID (ORCID) organization established the CV Harmonization Group (H-group) in 2019 to improve academic CVs as a tool for responsible research assessment. In 2021, UK Research and Innovation (UKRI) launched a Joint Funders Group, an international community of practice of research funders, to develop a shared approach for the adoption of the Résumé for Researchers, which UKRI also plans to implement.

Research assessment reform requires collective action, which is why DORA and FORGEN CoP have now partnered with the Swiss National Science Foundation and UKRI to build on this work. We are excited to announce that a second workshop for research funders is planned for February 2022 that aims to identify shared objectives for the use of narrative CVs in grant funding and determine how funding organizations can work collaboratively to monitor their effectiveness. If you work at a public or private research funding organization and would like to participate, please email info@sfdora.org.

Narrative CVs reduce emphasis on journal-based indicators, allow for the recognition of a variety of research contributions, and help to shift the focus from metricized research productivity to research achievements. Through the activities already completed and those planned, we aim to develop a path to collective action on the optimization and application of narrative CVs as a tool for responsible research assessment.

Anna Hatch is the DORA program director (info@sfdora.org)

Rochelle Fritch is the leader of FORGEN CoP and a Scientific Programme Manager at Science Foundation Ireland (diversity@sfi.ie)

Resources


Narrative CVs: Supporting applicants and review panels to value the range of contributions to research
Elizabeth Adams, Tanita Casci, Miles Padgett, and Jane Alfred

Measuring the invisible: Development and multi-industry validation of the Gender Bias Scale for Women Leaders
Amy B. Diehl, Amber L. Stephenson, Leanne M. Dzubinski, David C. Wang

Quality over quantity: How the Dutch Research Council is giving researchers the opportunity to showcase diverse types of talent
Kasper Gossink-Melenhorst

Getting on the same page: The effect of normative feedback interventions on structured interview ratings
Christopher J. Hartwell, Michael A. Campion

The Structured Employment Interview: Narrative and Quantitative Review of the Research Literature
Julia Levashina, Christopher J. Hartwell, Frederick P. Morgeson, Michael A. Campion

The predictive utility of word familiarity for online engagements and funding
David M. Markowitz and Hillary C. Shulman

What Words Are Worth: National Science Foundation Grant Abstracts Indicate Award Funding
David M. Markowitz

Engaging Gatekeepers, Optimizing Decision Making, and Mitigating Bias: Design Specifications for Systemic Diversity Interventions
Claartje J. Vinkenburg

Selling science: optimizing the research funding evaluation and decision process
Claartje J. Vinkenburg, Carolin Ossenkop, Helene Schiffbaenker

Are gender gaps due to evaluations of the applicant or the science? A natural experiment at a national funding agency
Holly O. Witteman, Michael Hendricks, Sharon Straus, Cara Tannenbaum

Studying grant decision-making: a linguistic analysis of review reports
Peter van den Besselaar, Ulf Sandström, Hélène Schiffbaenker

Gender, Race, and Grant Reviews: Translating and Responding to Research Feedback
Monica Biernat, Molly Carnes, Amarette Filut, Anna Kaatz

When Performance Trumps Gender Bias: Joint vs. Separate Evaluation
Iris Bohnet, Alexandra van Geen, Max Bazerman

Initial investigation into computer scoring of candidate essays for personnel selection
Michael C. Campion, Michael A. Campion, Emily D. Campion, Matthew H. Reider

How Gender Bias Corrupts Performance Reviews, and What to Do About It
Paolo Cecchi-Dimeglio

Inside the Black Box of Organizational Life: The Gendered Language of Performance Assessment
Shelley J. Correll, Katherine R. Weisshaar, Alison T. Wynn, JoAnne Delfine Wehner

The Gender Gap In Self-Promotion
Christine L. Exley, Judd B. Kessler

How Do You Evaluate Performance During a Pandemic?
Lori Nishiura Mackenzie, JoAnne Wehner, Sofia Kennedy

Why Most Performance Evaluations Are Biased, and How to Fix Them
Lori Nishiura Mackenzie, JoAnne Wehner, Shelley J. Correll

The Language of Gender Bias in Performance Reviews
Nadra Nittle

Exploring the performance gap in EU Framework Programmes between EU13 and EU15 Member States
Gianluca Quaglio, Sophie Millar, Michal Pazour, Vladimir Albrecht, Tomas Vondrak, Marek Kwiek, Klaus Schuch

How Stereotypes Impair Women’s Careers in Science
Ernesto Reuben, Paolo Sapienza, Luigi Zingales

Grant Peer Review: Improving Inter-Rater Reliability with Training
David N. Sattler, Patrick E. McKnight, Linda Naney, Randy Mathis
