Case Study

The Chinese Academy of Sciences

Interview conducted on March 3, 2025

The Chinese Academy of Sciences (CAS), established in 1949, is China’s largest research-performing organization, comprising a wide network of 106 research institutes, 3 universities and numerous commercial enterprises. CAS also supports national technological innovation and provides science and technology advice to the Chinese central government. CAS has consistently used research institute evaluations as a management tool, and its Research Evaluation Center has supported the reform of research assessment over the past decades. The CAS experience has been one of historical shifts in evaluation systems, from a focus on quantitative metrics to more qualitative, mission-oriented assessments.

This case study describes the Major R&D Outcome-Oriented Evaluation System implemented by the CAS in recent years, which emphasized impactful research and moved beyond purely quantitative metrics.

Who: Organization profile

Country: China
Category: Case Studies; Research Performing Organizations
Profile of institution: national academy; research institutes
Number of FTE researchers: ~70,000
Organization of research evaluation: faculty/department level; institutional/university level
Who is involved? academic leadership; policy staff; research professional staff; research support or management staff

What: What changed and the key elements of change

The Major R&D Outcome-Oriented Evaluation System (2011-2021/22) represented a move away from a strong reliance on simple quantitative indicators towards recognizing and rewarding significant research contributions. Its key elements included:

  • Focus on Major Research Outcomes: The primary emphasis was on the quality, impact, and significance of research outputs rather than just the quantity of publications or other easily quantifiable metrics. Institutes were expected to define and strive for "major breakthroughs" and contributions, each focusing on one unique positioning, three major innovative achievements in basic, strategic, and prospective science and technology, and five key potential directions (the ‘One-Three-Five’ plan). Six types of potential major research and development outcomes were identified, including solving major scientific problems, creating new research fields, achieving breakthroughs in key technologies, providing solutions, achieving remarkable socio-economic benefits, and providing significant and influential advice.
  • Five-Year Evaluation Cycle: The system operated on a five-year cycle, aligning with national five-year plans and the CAS's strategic planning.
  • Multi-Stage Assessment: The evaluation process involved several stages:
    • Expert Diagnostic Assessment (mid-term): International experts were invited to diagnose the institutes' status and to evaluate the quality and technical value of their main research areas to help improve internal management and clarify core advantages. This was a formative assessment aimed at guiding the institutes.
    • Overall Performance Evaluation (end-term): Domestic experts provided qualitative opinions on an institute's performance compared to its five-year targets. Quantitative indicators (funds, projects, staff, outcomes, patents, awards, international exchanges) were provided as references for these experts. The results of this evaluation were directly linked to the allocation of incentive resources, and superior major breakthroughs merited significant competitive financial rewards.
    • Monitoring of Key Performance Indicators (KPIs): Annual monitoring of KPIs was conducted to observe and track the institutes’ research performance.
  • Limited Role of Simple Quantitative Metrics: While quantitative data were collected and used as a reference for expert panels, the final evaluation relied heavily on expert judgment of the significance and impact of the major research outcomes.

Introduced in 2022, the new Mission-Oriented Evaluation System signifies a further evolution in the CAS’s approach to research evaluation. It retains the focus on “major research output”, indicating that the philosophy of prioritizing impactful research contributions remains central, while shifting towards more continuous monitoring and a broader consideration of factors.

Why: Motivation for change

Recognizing that a primary focus on quantitative metrics such as the number of publications, citations, and honorific titles had considerable adverse effects on the progress of science and technology in China, CAS adopted the Major R&D Outcome-Oriented Evaluation System around 2011-2012. This shift, aligned with the ‘Innovation 2020’ CAS strategy, aimed to prioritize high-quality and high-impact research outcomes that would lead to significant contributions in areas such as solving major scientific problems, achieving breakthroughs in key technologies, and providing solutions with remarkable social or economic benefits, rather than simply focusing on the quantity of publications. The focus for the new system was on serving society, the country and the economy, in addition to science itself.

In 2022, the Mission-Oriented Evaluation System was adopted, reflecting a desire for more continuous oversight and a broader understanding of institutional performance. The annual monitoring system allows for more frequent feedback and enables institutes to make adjustments more readily than under a five-year cycle. This can foster a culture of continuous improvement and greater responsiveness to evolving needs.

How: Processes and dynamics for developing, implementing and managing change

The evolution of research evaluation systems at the CAS, exemplified by the adoption of the Major R&D Outcome-Oriented System and now furthered by the Mission-Oriented System, reflects a dynamic and adaptive approach to research management. The initiation of CAS’s own evaluation system in 1993 was a direct response to the perceived inadequacy of, and misunderstandings arising from, external evaluations. Since then, new systems have been developed to better align with the CAS’s evolving mission and national needs.

The CAS Headquarters played a central role in proposing and authorizing the changes in evaluation systems, reflecting its management responsibilities. The development process involved learning from international experiences, including the assessment of Germany’s Max Planck Institutes and how the UK’s Research Excellence Framework (REF) defined impact. The development of each system included the definition of specific indicators and methodologies tailored to the objectives of that phase. For instance, the initial systems in the early 1990s relied heavily on quantitative indicators, while later systems incorporated peer review and expert assessments.

The results of evaluations have been linked to significant consequences, such as funding allocation and the appointment of institute directors. This creates a strong incentive for institutes to respond to the evaluation criteria and adapt their practices. The shift to influencing extra funding under the Mission-Oriented System represents a nuanced change in how evaluation results are used to manage performance.

The engagement with the central government regarding the reforms, and the adoption of the major R&D outcome philosophy at the national level, indicates a strategic effort to align CAS’s evaluation practices with broader national science and technology policy. The concept of focusing on major research outputs has had a broad impact, being recognized and adopted in other parts of the Chinese research system.

When: Timeline for development and implementation

The Major R&D Outcome-Oriented Evaluation System was developed and launched around 2011, following the announcement of the ‘Innovation 2020’ strategy and the ‘One-Three-Five’ plan. The system was in place for approximately ten years, encompassing two full five-year evaluation cycles (roughly 2011-2015 and 2016-2021/22).

Implementation of the Mission-Oriented Evaluation System began in 2022 and is still in its early stages, with the first couple of years of annual monitoring completed.

References

Cheng, J.-P., Li, X., & Xu, F. (2018). Science-evaluation reform on the road in China. National Science Review, 5(5), 605. https://doi.org/10.1093/nsr/nwy081

Xu, F., & Li, X. (2016). The changing role of metrics in research institute evaluations undertaken by the Chinese Academy of Sciences (CAS). Palgrave Communications, 2(1). https://doi.org/10.1057/palcomms.2016.78

Qiu, J. (2011). China sets 2020 vision for science. Nature, 470(7332), 15. https://doi.org/10.1038/470015a