Institutional challenges and perspectives for responsible evaluation in Brazilian Higher Education: Projeto Métricas DORA partnership

A DORA Community Engagement Grants Report

In November 2021, DORA announced the pilot of a new program, Community Engagement Grants: Supporting Academic Assessment Reform, with the goal of building on the momentum of the declaration and providing resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the Institutional challenges and perspectives for responsible evaluation in Brazilian Higher Education: Projeto Métricas DORA partnership project are outlined below.

By Jacques Marcovitch, Justin Axel-Berg, Pedro Belasco, Dulce Silva, Elizabeth Balbachevsky, Luiz Nunes de Oliveira, Marisa Beppu, Nina Ranieri, Renato Pedrosa — Projeto Métricas/Fapesp (Brazil)

In recent years, responsible evaluation has become a topic of intense interest in the academic community. Brazil has around 1,200 individual signatories to DORA, roughly the same number as France, Spain and the United Kingdom. The country also has 391 institutional signatories, more than 150 ahead of the United Kingdom, which is in second place. Despite this, actual examples of responsible evaluation are few and far between: most evaluation remains heavily quantitative at course, departmental and individual level. The majority of institutional signatories are academic journals and scientific associations. Just two of the country’s large public universities are themselves signatories – the University of São Paulo (USP) and the State University of Campinas (Unicamp).

Brazilian higher education is at a crucial moment in its evolution, in which it finds itself called upon to justify the public investment placed in it. It must ensure that it is engaged in advancing the frontiers of knowledge, that this knowledge is spread as widely as possible, and that it finds answers to the multiple overlapping crises afflicting Brazilian and global society. There is increasing awareness that the tools used to measure scientific performance do not reflect the values and expectations that Brazilian academia and society hold for it: they are, in general, too quantitative, and too focused on the wrong things.

Our project began with an exploratory qualitative survey of individual signatories at these two universities to identify the perceived barriers to implementation of the recommendations of DORA. Of a possible 140 signatories, we received 37 responses. These responses were then collated according to the SPACE rubric and turned into a briefing comprising the key observations.

This briefing was then sent to a panel of twelve specialists and senior university leaders, all with significant experience and knowledge of responsible evaluation. The panel was composed of representatives from USP, Unicamp, Unesp, UFABC, Unifesp, UFES and UFF, who were invited to share their reflections and experiences in identifying the challenges and barriers to increasing the spread of responsible evaluation. A document highlighting the key themes identified was then produced.

To maximise the institutional reach of the initiative, a decision was made to hold the public event online, allowing representatives from institutions across Brazil to attend. One of the challenges we faced is that some institutions are heavily internationalised and already moderately advanced in discussions about responsible evaluation, while others attend predominantly to local priorities and are at a less developed stage. Because the interest in responsible evaluation in Brazil comes predominantly from institutions and individuals, and not from government or external requirements, the situation is highly heterogeneous. Therefore, care was taken to ensure that the recommendations can be adopted both by institutions with no experience of handling qualitative evidence alongside quantitative evidence and by those with more extensive experience.

The final public event was held online on August 19, 2022; a video recording can be found here. The event was opened by the vice rectors of three of the most important public universities in Brazil, and the three key priorities identified were presented by Paulo Nussenzveig (USP), Marisa Masumi Beppu (Unicamp) and Patrícia Gama (USP). The event had 214 total registrations, with around 150 in attendance, representing 93 different institutions and faculties from every region of the country. In evaluations of the event, participants were asked how they planned to apply what they had learned; plans were identified to introduce DORA into departmental evaluation, institutional regimes, hiring processes and federal funding agency committees.

Finally, these results were synthesised into a document intended to serve as a guide for university leaders to plan and implement more responsible evaluation practices. This document will serve as the steering document for Projeto Métricas’ activities in 2023 and can be found here. It will be hosted on the Projeto Métricas portal, and a printed edition will be produced for members of the Métricas community to distribute and use in institutional discussions.

From the report, three main priority areas were identified:

Awareness of responsible evaluation

Strategies are needed to raise general awareness, whether leading up to or immediately following adherence to DORA. Students should be made aware of the importance of course evaluation. Early career researchers should be made aware of the principles of responsible evaluation, as should those entering the university and more senior members of staff engaged in evaluation itself.

Given the diversity of areas of knowledge, institution types, career trajectories and socioeconomic factors present in Brazilian higher education, a wide variety of models for evaluation need to be established to ensure that this diversity and heterogeneity of mission, value and outcome can be respected.

Training and capacity building

Beyond a lack of knowledge of DORA or other such documents, there is a clear problem of insufficient experience or capacity on the part of evaluators and those being evaluated. Where evaluations with more qualitative or flexible components exist, the quality of responses and evaluations is often inadequate. The road to more responsible evaluation requires training programmes and further education to ensure that a culture of impact-driven, responsible evaluation takes hold.

Evaluators, even when presented with large volumes of qualitative information about impact, are likely to fall back on “citizen bibliometrics” and other easy measures that can justify decision making, even when these are inappropriate.

Without clear guidance and training on how to think about, write about and gather evidence for the impact of their work, researchers submitting their work for evaluation are likely to rely either on quantitative measures or on unsubstantiated statements. They require training from the beginning of their careers to plan research projects, and to execute and then write about them effectively.

Processes should then consider different levels of evaluation in order to select the appropriate instrument for measurement. While each level has specificities and peculiarities that must be considered to ensure that evaluation is appropriate, it is important that the interaction between levels is also considered, ensuring that the results measured at one level contribute to the stated goals of the others. In this sense, evaluation is a holistic activity that balances individual interests with institutional goals.

Groups of evaluators should be identified who carry institutional memory and experience of previous cycles, are able to carry out the present cycle, and are also engaged in planning and giving feedback for future cycles of evaluation. This group should assess the quality of the assessment against the stated ambition of the unit being assessed and compare the results with those of other processes in different areas of knowledge and at other institutions.

Evaluation needs to have meaning. This is achieved either by celebrating and valuing outstanding achievement, or by highlighting where performance did not reach its intended goal. The reasons and justification for this performance must be clearly explained and understood and must lead to clear recommendations for future cycles of evaluation.

Execution and appraisal of evaluation

To identify and evaluate what is meaningful, the ideal time cycle for evaluation must be identified, and processes planned and produced according to a timeline.

Proper planning of evaluation cycles also prevents repetition of evaluation exercises and needless duplication of processes. Given that evaluation exhaustion is a well-documented phenomenon in higher education, with staff required to fill in the same information multiple times for different purposes, minimising it increases acceptance of new processes.

The evolution of evaluation also requires careful planning of actions over the short, medium, and long term. Sudden and dramatic change will be difficult, if not impossible, to enact within universities, so a clear idea of long-term goals must be reinforced by short-term actions and priorities.

Objectives should be discussed and constantly revised for each successive cycle of evaluation. Because institutional objectives change over time according to internal and external factors, evaluation must also change over time to reflect shifting priorities. This review should be planned during an evaluation cycle, to be ready for the following one.

The next steps…

Having launched an initiative of national reach, and a consensus document on the challenges and possible solutions, we must now work on consolidating a network of professionals engaged in changing evaluation. Because of the high heterogeneity we identified, helping higher education institutions to establish and pilot models that are appropriate for them will enable Brazil to convert the growing demand for responsible evaluation of higher education into concrete results.

Suggested citation: Projeto Métricas (2022). Institutional challenges and perspectives for responsible evaluation in Brazilian Higher Education: Projeto Métricas DORA partnership summary of findings. University of São Paulo, Brazil [PDF]. Available at https://metricas.usp.br/institutional-challenges-and-perspectives-for-responsible-evaluation-in-brazilian-higher-education/

