Each quarter, DORA holds two Community of Practice meetings for research funding organizations. One meeting takes place for organizations in the Asia-Pacific (A-P) region and the other is for organizations in Africa, the Americas, and Europe (AAE). These groups provide a space for funders to learn from each other, share new policies and practices, and collaborate on advancing fair and responsible research assessment (RRA).
Our March 2026 meetings brought together over 40 representatives from around the world across the two sessions to explore a timely and complex topic: the role of Artificial Intelligence (AI) across the research funding lifecycle. While many organizations are still in the pilot phase, the discussions revealed a shared commitment to balancing the efficiency gains of AI with the core values of integrity, transparency, and equity.
Where is AI entering the funding lifecycle?
Inspired by discussions about DORA’s upcoming Practical Guide (learn more here), members mapped current and considered AI uses across five key “assessment moments”:
- Establishing Funding Programs: A potential use for AI is to summarize large bodies of public evidence that can inform program definition, identify gaps in portfolios, and draft more inclusive call language. This also includes using tools like Microsoft Copilot for office productivity, including summarizing policy material and structuring documents.
- Making Funding Decisions: This remains the most sensitive area. While most funders prohibit AI in actual scientific triage, several are using it for administrative matching. For one funder, machine learning has shown potential to optimize application-assessor matching while managing conflicts of interest. Another has successfully piloted AI to draft consistent, accurate bios for assessors.
- Setting grant terms and conditions: Funders are formalizing boundaries to protect research integrity and the confidentiality of the funding process. A clear distinction has emerged between the expectations for applicants and those for reviewers. For applicants, most funders allow the use of AI in proposal preparation but emphasize accountability: they require applicants to certify that all information in their application is accurate, making them responsible for any factual errors or misinformation generated by AI tools. For reviewers, the main concern is confidentiality and intellectual property risk. Most funders explicitly prohibit reviewers from inputting any part of a grant application into AI systems, as this could breach peer review principles and lead to a loss of custody over sensitive research data; others (like the German DFG) are now considering offering guidance on how to use AI safely and responsibly.
- Communication and engagement with communities: Funders are currently exploring and implementing several AI-driven applications to streamline how they interact with the research community. This includes recording meetings, generating transcriptions, and distilling complex discussions into actionable minutes, but also translating technical language into plain and assistive formats, making funder communications more accessible to diverse audiences. Still, there is a concern that outward-facing content might feel “generic” or lead to a loss of trust if stakeholders perceive a lack of human engagement.
- Monitoring and Evaluation: AI’s ability to parse large datasets offers significant potential for output tracking. Large Language Models (LLMs) are also being explored to classify applications against categories or track downstream outcomes through metadata in other infrastructures like OpenAlex.
The human in the loop: balancing risks and rewards
A central theme across both the A-P and AAE sessions was the distinction between low-risk administrative use and high-risk evaluative use.
- Applicant vs. reviewer use: Regardless of the approach, funders place responsibility on applicants and reviewers themselves; policy compliance currently rests on trusting individuals to follow the guidelines.
- The “Black Box” challenge: Participants raised concerns about bias amplification and the opacity of AI decision-making. While AI can be a supportive tool, the human in the loop must always retain final responsibility.
- Technological sovereignty: Funders expressed concerns regarding intellectual property and data protection, noting that many AI services operate under frameworks outside of their local jurisdictions.
Moving toward shared guidance
The meetings highlighted an urgent need for clear definitions and shared learnings. Many members noted that because AI features are becoming embedded in everyday tools (like Microsoft Office or Canva), “disclosing AI use” is becoming increasingly complex to define.
As we move forward, DORA remains committed to facilitating this global dialogue. We are currently finalizing a Practical Guide for RRA in funding organizations, which will include further reflections on navigating these technological shifts based on the RoRI handbook Funding by Algorithm. You can now register for the Guide launch events on May 13 and 14:
- Asia-Pacific: Thursday, 14 May 2026, 14:00 AEST, register here
- Africa & Europe: Thursday, 14 May 2026, 12:00 BST, register here
- Americas: Wednesday, 13 May 2026, 14:00 EDT, register here
Join the conversation
The DORA Funder CoP is a space for funders to learn from each other and collaborate on the challenges of modern research assessment. If your organization is interested in joining these quarterly discussions, please find more information on our funder discussion groups page and contact us at info@sfdora.org.