Each quarter, DORA holds a Community of Practice (CoP) meeting for National and International Initiatives working to address responsible research assessment reform. This CoP is a space for initiatives to learn from each other, make connections with like-minded organizations, and collaborate on projects or topics of common interest. Meeting agendas are shaped by participants. If you lead an initiative, coalition, or organization working to improve research assessment and are interested in joining the group, please find more information here.
Status quo of research evaluations
Research evaluation practices can affect academic careers and reputations positively or negatively, with far-reaching implications for funding, awards, career advancement, and other prospects. Although the way research is evaluated plays a critical part in academic lives, traditional quantitative approaches to research assessment are increasingly recognized as inappropriate. The indicators presently in use rely too heavily on narrow, publication-based metrics that fail to capture the full range of research contributions and that limit equity and innovation. They do not account for factors such as the quality of the research, the rigor of the methodology, its openness, or its impact on society. Such metrics can disadvantage those working in less mainstream or interdisciplinary fields, or those who come from less-privileged demographic backgrounds. Relying solely on such indicators can result in an incomplete and potentially unfair assessment of a researcher’s contributions to their field.
With the goal of supporting more responsible research assessment processes, guidelines and recommendations such as the Declaration on Research Assessment (DORA), the Leiden Manifesto for Research Metrics, the Metric Tide, and individual university guidelines emerged to set standards for responsible assessment. In 2018, the International Network of Research Management Societies (INORMS), a collective of research management associations and societies worldwide, set out to build a structured framework to support the shift toward more responsible approaches to assessment. INORMS runs various initiatives, projects, toolkits, and guidelines to promote evaluation practices that foster fairness, openness, inclusivity, transparency, and innovation. Two of its key contributions are 1) the SCOPE Framework for Research Evaluation and 2) the More Than Our Rank (MTOR) initiative.
During the first quarterly DORA National and International Initiatives discussion group call of 2023, Elizabeth Gadd, Chair of INORMS Research Evaluation Group (REG), provided updates on the work of INORMS, MTOR, and the Coalition on Advancing Research Assessment (CoARA), where she serves as Vice-Chair.
SCOPE Framework: a structured approach for evaluations
In 2019, the first version of the SCOPE Framework was released by the INORMS Research Evaluation Working Group. This framework was developed to be used as a practical guide to successfully implement responsible research assessment principles and to facilitate the use of more thoughtful and appropriate metrics for evaluations in institutions and organizations.
In 2021, INORMS REG published an updated version of the SCOPE framework that included a five-step process to be followed by organizations during evaluations: Start with what is valued, consider Context, explore Options for measuring, Probe deeply, and Evaluate the evaluation. Each of these five steps has been elaborated on in the practical guide for easy adoption. The operating principles behind this five-stage process are:
- Evaluate only where necessary
- Evaluate with the evaluated
- Draw on evaluation expertise
To help research leaders and practitioners drive robust evaluation processes in their institutions, the working group has publicly shared several resources online. The guidebook also illustrates the process of change through multiple case studies to learn from. Use cases mentioned by Gadd in her presentation included the joint UK higher education funding bodies, which made deliberate efforts to redesign the Research Excellence Framework (REF), and Emerald Publishing, which is consciously ensuring more diversity on its editorial boards. Additionally, there are SCOPE workshops for institutional research administrative leaders to learn how to adopt the framework systemically.
Some of the strengths of the SCOPE framework are its holistic step-by-step approach, flexibility, and adaptability to different disciplinary and institutional contexts. The framework can be customized to reflect the specific goals and priorities of different stakeholders and can be used to evaluate research at various levels of the research evaluation food chain.
More Than Our Rank (MTOR) Initiative: an opportunity to recalibrate university rankings
Because evaluation processes can also profoundly impact the “reputation” and funding of academic organizations, INORMS launched the MTOR initiative, which is closely linked to the SCOPE framework and seeks to provide institutions with a means by which they can surface all their activities and achievements not captured by the global university rankings.
Gadd published “University rankings need a rethink” in 2020, which highlighted the key findings from INORMS REG’s work evaluating ranking agencies against community-designed criteria. They found that most “flagship” university rankings barely incorporated open access, equality, diversity, sustainability, or other society-focused agendas into their criteria for global rankings. This work sparked many discussions on how the current university ranking system may be inadequate and harmful because it does not “meet community’s expectations of responsibility and fairness”. A change was therefore needed to bring a sense of accountability to rankers and to let each university determine what matters most to it.
The INORMS REG believes that any institution, even a top-ranked one, has more to offer than current ranking systems can capture. This conviction drove the launch of the MTOR initiative in October 2022, which encourages institutions to declare, in narrative form, their unique missions, activities, contributions to society, teaching, and more, and to explain why they are more than their rank. Gadd emphasized that signatory institutions are not required to boycott rankings altogether.
The REG has also provided guidelines on its website for Higher Education Institutions (HEIs) on how to participate in the MTOR movement, and for individuals from the community on how to encourage their universities to take part. Loughborough University, Keele University, Izmir Institute of Technology, and Queensland University of Technology (QUT) are among the early adopters of MTOR. However, participants also discussed a major challenge to joining the MTOR movement: in the current system, institutions depend financially on their standing in the global rankings.
Finally, Gadd shared updates from CoARA, a network that brings together stakeholders in the global research community (research funders, universities, research centers, learned societies, and others) to enable systemic reform toward responsible and effective research assessment practices.
CoARA: a common direction for reforming research assessment
After an international, iterative drafting process, facilitated initially by the European University Association (EUA), Science Europe, and the European Commission, the Agreement on Reforming Research Assessment was published in July 2022. Organizations willing to publicly commit to improving their research assessment can sign the Agreement, which also makes them eligible to join the Coalition and take an active part in CoARA’s decision-making processes. The Agreement, which builds on the progress made by earlier responsible research assessment guidelines and principles (DORA, the Leiden Manifesto, the Hong Kong Principles, etc.), consists of four core commitments: 1) recognizing the diversity of contributions to research during assessments, 2) basing research assessment primarily on qualitative evaluation with peer review, supported by responsible use of quantitative indicators, 3) abandoning inappropriate uses of journal- and publication-based metrics, and 4) avoiding the use of rankings of research organizations in assessments. These four core commitments are accompanied by six supporting commitments related to building and sharing new knowledge, tools, and resources, and raising awareness within the community. Within five years of becoming a signatory, organizations must demonstrate the changes they have made to reform research assessment at their institutions.
As a newly established association, CoARA held its first General Assembly meeting on 1 December 2022, at which point the secretariat role was handed over to the European Science Foundation – Science Connect (ESF-SC). In March 2023, CoARA opened its first call for Working Groups and National Chapters, and it will announce further General Assembly meetings and other activities, such as webinars and conferences, to strengthen the network and initiate dialogue among CoARA stakeholders, relevant evaluation initiatives, and communities of practice. Gadd’s talk was followed by discussion of the working groups’ probable focus areas, including peer review, responsible metrics, and funding disparities.
United collaborative efforts from the research community, including individuals, universities, funders, initiatives, etc., are vital to push forward and evolve responsible research assessment on a systemic level.
Sudeepa Nandi is DORA’s Policy Associate