Transforming assessment: Headline findings of the GRC RRA Survey

Each quarter, DORA holds two Community of Practice (CoP) meetings for research funding organizations: one for organizations in the Asia-Pacific time zones and the other for organizations in Africa, the Americas, and Europe. The CoP is a space for funders to learn from each other, make connections with like-minded organizations, and collaborate on projects or topics of common interest. Meeting agendas are shaped by participants. If you are employed by a public or private research funder and are interested in joining the Funder CoP, you can find more information on our webpage or email us at info@sfdora.org.

On May 29, 2025, DORA hosted its Q2 Funder Discussion Group meetings for the Asia-Pacific (AP) and Africa, Americas, Europe (AAE) regions. We were delighted to welcome the Global Research Council (GRC) working group (WG) on Responsible Research Assessment (RRA), featuring Dr. Anh-Khoi Trinh, Senior Policy Advisor at NSERC, and Dr. Peter Kolarz, Head of Programmes at the Research on Research Institute (RoRI). They shared findings from the WG’s recent work, plans for its future work, and the results of its newly published survey report.

The GRC’s 2025 global survey, developed in collaboration with RoRI, draws insights from 50 funding agencies to map the current landscape of RRA. With nearly half of responses coming from funders in the Global South, the survey offers a rare and rich comparative view of how RRA is understood and implemented globally. The survey shows that the old ways of doing research assessment are still going strong, but a range of new criteria, indicators, and processes are entering the picture. This is not a matter of the new replacing the old; rather, the ways in which we assess research are broadening, as is the way in which we define “good” research in the first place.

The findings reveal a growing appetite for experimentation, such as through narrative CVs and more holistic indicators of quality, alongside a shared commitment to building fairer, more transparent and context-specific assessment systems. As funders seek to translate principles into practice, the importance of ongoing learning, collaboration and evidence-informed reform becomes ever more critical. 

Alongside a range of other findings detailed in the ‘Transforming Assessment’ report, including an entire chapter on current and future AI use by research funders, the survey also finds that research funders generally have a high degree of autonomy from governments and academic communities when it comes to designing funding and assessment processes and defining criteria and indicators. This makes research funders critical agents of change in the transition towards more efficient, fair, equitable, and responsible assessment landscapes.

We invite you to read on or watch the recording of this insightful meeting.

Dr. Anh-Khoi Trinh kicked off the meeting by introducing the GRC working group on RRA, which aims to advance RRA practices globally by working with GRC participating funding organizations. The GRC is a virtual organization comprising the heads of science and engineering funding agencies from around the world. The WG’s vision is to help position the GRC as a leading voice on the promotion and implementation of RRA in the international research and innovation system, in order to help build a diverse and inclusive research culture. This vision is delivered through four objectives: advocating for RRA globally and working towards a shared understanding; sharing practices and guidance; galvanizing support and coordinated action; and providing ongoing support and extending the knowledge base. Composed of 23 members from 21 countries, the working group is currently co-chaired by Shawn McGuirk from NSERC (Canada) and Mohammed Ahmad S. Al-Shamsi (Saudi Arabia) and is supported by a secretariat from UKRI and NSERC.

Over the past two years, the working group has developed several key resources. Building on its foundational resource, the 11 Dimensions of RRA, published in May 2024 to establish a shared understanding, the group launched a Case Study Booklet and accompanying Digital Library in May 2025. These case studies from funders map to the dimensions, highlighting areas like a commitment to Equity, Diversity, and Inclusion (EDI), influencing institutional policies, and assessing research contributions. Trinh noted lower submission rates for dimensions related to global changes and impact assessment and encouraged more submissions to keep the Digital Library a “living library”. Looking ahead, Trinh shared that the working group plans to develop a self-assessment tool on RRA to help funders understand their position in the RRA landscape, along with a roadmap for implementing more responsible practices.

Dr. Peter Kolarz then presented the detailed findings from the GRC survey report, titled “Transforming Assessment: the 2025 Global Research Council survey of funder approaches to responsible research assessment”. This is the second GRC survey on RRA, expanding on a shorter one conducted in 2020. The survey, open from the latter half of 2024 into January 2025, covered areas like definitions and frameworks, assessment indicators, outputs & criteria, assessment process modifications, narrative CVs, the rise of AI, funders’ independence, and funders’ practices. 

They received 50 responses, representing a 43% response rate among organizations that engage with the GRC (N=117), described as “pretty unprecedented” engagement for the GRC. The survey achieved strong geographical diversity, notably including 18 responses from the Global South (using the OECD DAC list as a proxy). This is significant as academic literature often focuses on the Global North, making these findings more inclusive and representative of a global picture.

Kolarz began the presentation of findings with some insight on funders’ autonomy from government and researcher communities, which highlighted the value of surveying funders: funders generally perceive themselves as having “quite a lot” of autonomy, especially in defining performance criteria and designing funding processes, though less so in overall priority setting. This positions funders as potentially very important change-makers in the research ecosystem.

The overall headline conclusion of the survey is that established markers and ways of doing research assessment are still going strong and are showing no sign of retreat, but that additional criteria and markers relating to RRA are increasingly being used alongside traditional methods. This suggests that the understanding of what constitutes good research and good science is broadening, with more considerations coming into the fold. This transformation is not uniform or linear across different contexts, highlighting a diversity of approaches and a need for further research to understand why RRA takes different shapes in different places.

Kolarz shared that established standard assessment criteria like feasibility, methodological rigour, ethical considerations, novelty, and team expertise remain common. However, the survey revealed that many funders also commonly instruct reviewers to consider additional elements such as the relevance of research to societal problems, sustainable development goals, and EDI considerations (like gender dimensions in both the research plan and team composition). The survey used a scale asking whether each criterion is currently instructed or recommended, was used in the past but is no longer, is not currently used but being considered for future use, or has never been used and is not being considered.

In terms of qualitative vs. quantitative assessment, the qualitative assessment of the content of research outputs remains dominant and shows no sign of being phased out. For metrics-driven approaches (like number of citations, H-index), the picture is mixed, with roughly as many funders reporting phasing in as phasing out. An interesting exception is a possible phase out of the crudest journal-level indicators (like journal reputation, presence on lists, and impact factors), where more funders reported phasing out than phasing in.

The survey also explored how assessment processes are evolving. The standard process involving external reviewers and panel ranking is widespread but recognized to have issues: it can be burdensome, conservative, and potentially biased, can lead to arbitrary outcomes near the funding line, and does not effectively reward criteria other than those most directly connected to conventional understandings of academic excellence. Modifications are being made, though these are mostly what were termed “light touch” changes: measures such as using international assessors, holding virtual panels, embedding EDI in assessment, using interviews, and refining criteria are used by 70-80% of funders. “More radical changes”, like partial randomization (which was discussed in DORA’s AP and AAE Q1 meetings), applicant anonymization, or distributed peer review, are used far less often. While there is appetite for change, the perceived risk associated with radical interventions is high, partly due to a still-developing evidence base for their effectiveness. An example explored in detail by the survey is narrative CVs: while their use is welcomed by many respondents, their effects are not yet fully understood, as they are relatively new interventions.

The survey contained a special section focused on the use of AI in research funding. The primary area where AI is currently being used is reviewer and panelist allocation; it is not yet used much for portfolio analysis or strategic approaches. Views on potential future AI use are mixed, with both appetite and hesitancy present, highlighting that AI use is neither universal nor uncontroversial in this domain. Funders perceive both significant benefit and significant risk in potential AI use. Where AI is used, it tends to involve input from various parts of the organization, with expertise sourced both internally and externally.

Based on these findings, the GRC working group and RoRI offered several recommendations for funders, including:

  • Foster a culture of experimentation: Encourage testing, trialing, and comparing a broad range of process interventions and novel indicators;
  • Publish evaluations: Make the results of evaluations of new processes and indicators publicly available to build a larger, shared evidence base;
  • Consult stakeholders: Ensure relevant stakeholders are consulted to improve buy-in for new practices;
  • Conduct further research: More research is needed to understand the specific needs, barriers, and drivers of RRA in different contexts globally;
  • Repeat the survey: Conduct a similar survey every 4-5 years to track progress and changes over time.

During the Q&A, discussions touched on RoRI’s support for funders wanting to experiment and publish evaluations, including its AFIRE (Accelerator For Innovation & Research Funding Experimentation) project. It was noted that public transparency of evaluation results is essential. Participants also discussed the idea that there is no single definition of what ‘good’ looks like in research assessment, as needs and systems vary greatly across the world (a finding echoed by the AGORRA project). Private foundations, which were not included in the survey, were noted as potentially having more freedom to experiment with radical changes due to less reliance on taxpayer money and less exposure to political pressure.

Beyond the GRC presentation, the meeting included updates from DORA and member organizations. DORA announced the recent launch of their Practical Guide for Implementing RRA for Research Performing Organizations (RPOs) in May. DORA is planning to develop a Practical Guide for Research Funding Organizations which will involve a co-creation workshop in December as part of the EU Conference on RRA reform in Denmark – you can register your interest here. DORA is also welcoming suggestions for country-specific events, such as the virtual event on Implementing Responsible Research Assessment in India on June 6.

Member updates included UKRI publishing a draft pan-UKRI research data policy and welcoming feedback on it from interested parties (deadline July 9). NHMRC is developing an action plan for RRA aligning with CoARA principles, which they plan to publish soon.

The meeting underscored the continued momentum and collaborative spirit within the DORA funder community and the broader RRA landscape. The GRC survey results provide valuable evidence on the progress being made globally, the diversity of approaches, and the key areas for future focus, particularly the need for experimentation and transparent sharing of results to build a stronger evidence base for effective RRA.
