Nineteen representatives from seven research funder organisations participated in the most recent quarterly meeting of the Asia-Pacific Funder Discussion Group hosted by DORA.
New DORA staff members Liz Allen and Janet Catterall were introduced to the group. The position of DORA Program Manager is currently vacant.
DORA updates to the group included the upcoming implementation guide and guidance document, anticipated for early next year, and the imminent release of three toolkits that emerged from the workshops DORA held in May 2024 with the Elizabeth Blackwell Institute/MoreBrains project. These workshops explored ways to increase equality, diversity, inclusion, and transparency in funding applications. The toolkits will focus on simplifying funding call structures, changing application processes to reduce the likelihood of bias in outcomes (e.g. recognising a broader range of research outputs, narrative CV formats, etc.), and improving training for reviewers and evaluators. The team is also producing three case studies about the funding application process.
Participants then engaged in a roundtable discussion in which members shared their work from the past year, including collaborations, asked questions of the group, and suggested topics they would like the group to cover in the future. Common themes that emerged from these discussions included:
- Trialling new selection processes, fellowships and panels to foster greater inclusivity of underrepresented communities, particularly Aboriginal and Torres Strait Islander, Māori and Pasifika applicants and assessors, and exploring ways to engage with these communities.
- Experimenting with the inclusion of a narrative CV option in grant applications.
- Exploring alternative metrics for research assessment and new KPIs.
- Investigating different models that support strategic discretion in decision-making to further equity and fairness.
- Condensing and simplifying the application process.
- Fostering a greater understanding of how artificial intelligence tools can be, or are being, used by applicants and assessors.
The utility of artificial intelligence was further discussed in terms of the assessment process itself: can these technologies be used for initial screening? To find new peer reviewers? To summarise panel results? How can an agency build such tools into the review process? Funders reported that AI tools had already been trialled for assigning peer reviewers and for matching applications with assessors. Confidentiality is a major consideration, so it was recommended that only the title and keywords, not the abstract, be included in prompts.
The last quarterly meeting for 2024 will feature a presentation from Global Research Council RRA Working Group members Joanne Looyen (MBIE) and Anh-Khoi Trinh (NSERC). A call for member presentations for 2025 was also announced.