The Colombian Responsible Metrics Project: Towards a Colombian Institutional, Methodological Instrument for Research Assessment

A DORA Community Engagement Grants Report

In November 2021, DORA announced that we were piloting a new program, Community Engagement Grants: Supporting Academic Assessment Reform, with the goal of building on the momentum of the declaration and providing resources to advance fair and responsible academic assessment. In 2022, the DORA Community Engagement Grants supported 10 project proposals. The results of the Colombian Responsible Metrics Project: Towards a Colombian Institutional, Methodological Instrument for Research Assessment are outlined below.

By César Pallares, Salim Chalela, María Alejandra Tejada, Elizabeth Bernal, César Rendón, Alida Acosta, Lorena Ruíz, Hernán Muñoz — Asociación Colombiana de Universidades, Asociación Colombiana de Editoriales Universitarias, Consorcio Colombia, CoLaV, Observatorio Colombiano de Ciencia y Tecnología, Red GCTI, COREMA (Colombia)

In March 2021, Colombia launched a new edition of its biennial assessment of research groups. As usual, this event generated reactions from the academic community, and a debate emerged around the suitability and use of this assessment exercise. This time, however, the reaction differed from previous years: the community was more organized, and the critiques and recommendations were more structural and intended to contribute to the country’s research system. Different academic networks joined efforts to raise the level of the debate and to include representation both from those who have studied the evaluation framework and from those responsible for supporting the researchers being evaluated.

One of those efforts was the responsible metrics initiative. Leading Colombian institutions, such as the Colombian Association of Universities (Ascun), the Colombian Association of University Publishers (Aseuc), the Colombian Observatory of Science and Technology (OCyT), the Colombian Association of Research Managers (COREMA), the Colombian Network of Management and Governance of Science and Technology (RedGCTI), the Collaboratory for Computational Social Sciences (CoLaV UdeA), and the Consorcio Colombia, worked together to facilitate and foster spaces to discuss the potential and limitations of responsible metrics for research evaluation in Colombia.

The coordination team, consisting of representatives of the institutions mentioned above, was responsible for gathering the insights and pursuing the two goals of our initiative: to propose a policy brief aimed at changing research assessment at the national level, and to develop a Colombian rubric that helps institutions design their self-assessment frameworks. To reach these goals, we defined two methodological steps. First, we organized seven international seminars in which experts shared their perspectives and experiences around research assessment, with the participation of INORMS, the Research on Research Institute, the Ingenio Institute, CWTS Leiden, DORA, FOLEC, and others. Second, with the contribution of universities, we organized two commissions, one to propose the policy brief and the other to develop the Colombian rubric.

Our first output was a consensus definition of 13 problems associated with the Colombian research evaluation system:

  1. Evaluation disconnected from the country’s reality
  2. Lack of knowledge of alternative ways of doing research evaluation
  3. Standardization of the measurement method
  4. Incentive schemes that generate inappropriate behavior
  5. Lack of articulation among the actors of the science, technology, and innovation (STI) system on research evaluation criteria
  6. Lack of funding for STI
  7. Economic interests that skew the evaluation and focus it on individuals
  8. Delegitimization of evaluation as a valid exercise to promote research
  9. Economic interests of external stakeholders driving the definition of evaluation models
  10. Quantitative metrics focused on journal indexing systems that lose sight of research quality
  11. Lack of open spaces to discuss evaluation models through consensus building
  12. Resistance to change among certain system actors, which reduces the possibility of exploring alternatives
  13. Lack of interoperability between the country’s existing information systems, which makes it challenging to generate alternative metrics and indicators

From these problems, the project found that institutions of higher education (IHEs) are mainly affected by three: lack of knowledge of alternative ways to assess research (2), quantitative metrics centered on journal indexing systems (10), and resistance to change (12). Focusing on these, we developed a strategy to design a tool that promotes assessment change in Colombian institutions.

The next step was to engage with international standards and rubrics. We selected three: 1) SCOPE from INORMS, 2) SPACE from DORA, and 3) FOLEC. We studied their steps, the recommendations they supply, the cases in which these rubrics were implemented, and the lessons learned. This allowed us to identify the common points we could use in our own exercise: the need to set up an assessment committee, to anticipate the possible indirect impacts the assessment might cause, and to evaluate the assessment strategies themselves.

Taking the insights from these existing frameworks as a starting point, we set out to develop a rubric tailored to the Colombian system and to research management practices in the country.

From those inputs, we developed a Colombian rubric to help institutions design their assessment exercises. Our rubric has five stages. In the Ideation stage, the University creates the steering committee for the assessment, following principles of diversity (gender, discipline, age, ethnicity, among others); the committee’s role is to define why the evaluation is necessary for the University. In the Design stage, the University lays out different options, using design thinking tools, to solve the institutional challenge that requires an assessment. The selected option is then tested in the Pilot stage to find unintended outcomes, identify groups that might be discriminated against by the review, and receive feedback on the process. In the Implementation stage, the evaluation procedure is carried out by the University, but it should include the possibility of being changed if the institution finds any significant problem; in this sense, the process for changing the evaluation should be clear to all stakeholders from the beginning. Finally, the Evaluation stage seeks to understand what did and did not work in the assessment, so the institution can learn for future assessment exercises.

Once the rubric was completed, we organized five focus groups with stakeholders and experts on research metrics. They gave us feedback to improve the tool and alerted us to shortcomings its implementation might entail. Lastly, six months later (July 2022), we ran a workshop where research vice presidents gathered to discuss responsible metrics. To examine our proposal, we organized the workshop around two parts:

First, a presentation of responsible metrics and rubrics: we presented the responsible metrics framework and the Colombian initiative, focusing on SCOPE, SPACE, FOLEC, and our own proposal.

Second, working groups around selected topics: we organized eight working groups, each responsible for analyzing a specific hypothetical scenario. Each group had to solve three challenges: 1) select the principles that should orient research evaluation, 2) define the conception of quality and the characteristics the University would desire in that scenario, and 3) construct the profiles of the members of the steering committee for the assessment scenario.

The working groups addressed the following topics:

  1. Research awards (César Pallares)
  2. Selecting which research projects should get a grant (Salim Chalela)
  3. Hiring new professors at the University (Hernán Muñoz)
  4. Promotions in the research career (Elizabeth Bernal)
  5. Giving incentives (financial or not) to increase research performance (María Alejandra Tejada)
  6. Selecting postdocs or Ph.D. holders to work in the institution (Alida Acosta)
  7. Defining the criteria to select research papers that could get their APC funded (César Rendón)
  8. Selecting research books to be published (Lorena Ruíz)

This workshop was a terrific opportunity to show research directors that new frameworks for assessing research are possible. The next step was to build a website for the initiative, making it easier for the scientific community to access information on responsible metrics in Spanish and to see the alternatives they can use to assess their research performance (https://www.metricasresponsables.co). We produced this website thanks to the support of DORA’s community grant.

In addition, we developed resources that help researchers understand responsible metrics and help institutions apply new frameworks to assess research. To that end, we produced infographic materials to disseminate the logic of responsible metrics and the results of our initiative.

We are pleased with the results of our work. Responsible metrics is now a known concept in the Colombian R&D system, and more actors are exploring it to change their evaluation practices. We understand that the work does not stop here, and our goals have evolved as we finish this project. First, we will update our resources and our website with new developments in our country (for example, some member institutions of this initiative are working on technical guidelines for research metrics) and at the international level (such as the new toolkits DORA has been working on), so that researchers can rely on the website as an up-to-date source of information. Second, as organizations, we hope to continue our efforts to promote and disseminate the use of responsible metrics in institutions so that the rubric keeps its momentum. Finally, we will contribute to the academic community by analyzing responsible metrics and the experiences gained through this initiative, thereby expanding the available knowledge about new ways to assess and measure research.

Haley Hazlett
Dr. Haley Hazlett has been DORA's Program Manager since 2021. She was a DORA Policy Intern before taking the role of Program Manager. She obtained her Ph.D. in Microbiology and Immunology in 2021 and is passionate about improving research culture for all researchers.
