Ideas for responsible research assessment in the Asia-Pacific region

Discussions about research integrity are prompting a reevaluation of research culture, including academic assessment. To understand the opportunities for and barriers to improving academic assessment in the Asia-Pacific region, DORA hosted its first webinar in collaboration with the Australasian Open Access Strategy Group (AOASG; now Open Access Australasia) on Thursday, July 2, 2020. Panelists included Michael Barber, Australian Academy of Science; Yukiko Gotoh, The University of Tokyo; Xiaoxuan Li, Chinese Academy of Sciences; Donna McRostie, University of Melbourne; and Justin Zobel, University of Melbourne.

At a comprehensive institution like the University of Melbourne, there are always conversations about performance assessment, says Zobel. The university decided to sign DORA earlier this year after a range of voices came forward suggesting that it reexamine some practices of concern. In response, the university created a working group that ultimately recommended adopting the principles of DORA and the Leiden Manifesto. Zobel was surprised by the level of support the proposal received: it was clear that many faculty and staff wanted peer review and peer assessment uniformly recognized alongside journal-based indicators as measures of success.

The situation in Japan is very different, says Gotoh. In 2016, Japan’s Cabinet Office, which reports to the prime minister, recommended that institutions consider DORA. Despite this endorsement, no Japanese universities or funders have signed the declaration. While researchers recognize the limitations of journal-based indicators, such indicators are often viewed as more objective, according to Gotoh. Without them, academic assessment can be biased by personal relationships, gender, and academic background. She sees new indicators as essential to moving forward with responsible assessment practices.

According to Zobel, signing DORA can send an important signal to the university community. Through outreach and advocacy, libraries play an important role in making sure that message is received. This is an ideal opportunity for librarians to apply their bibliometric expertise and facilitate discussions about responsible assessment indicators, says McRostie. People have different understandings of what it means to be a DORA signatory, so education and outreach are essential next steps.

The university’s signature can also be used as a tool. For example, when addressing academic assessment behaviors that are not compatible with the spirit of the declaration, Zobel points to the signing of DORA as evidence the behavior violates university values. Signing DORA has benefits for recruitment too. Zobel found that researchers recognize the university’s commitment. “They will say that it is great Melbourne is a university that will look at me and not just numbers about me.”

University rankings can directly and indirectly influence research culture and academic assessment. Zobel believes that rankings work when they follow quality. Melbourne identifies quality through a range of tools, including some metrics. But Zobel emphasizes that metrics should be used as pointers: they are valuable in helping to know where to look, but should not dictate where you look or what decision you make. Gaming rankings is not going to work for universities in the long term, Zobel believes. The focus should be on the enduring quality of work instead of numbers. To measure this, Zobel points to peer review and to listening to a breadth of voices to understand specific contributions. It also means being sensitive to the drivers that make an institution work well and identifying performance measures that encourage teamwork and a positive work environment.

The Chinese Academy of Sciences (CAS) is both the country’s highest academic institution for the natural sciences and its highest science and technology (S&T) advisory body, says Li. Like the National Academy of Sciences in the United States, the CAS has a body of academicians; like the International Max Planck Research School, it has its own research institutes.

The CAS plays two roles in setting research assessment policy and practice for institutions in China. First, the Academic Divisions of the Chinese Academy of Sciences (CASAD) launch initiatives in the science and technology community and put forward suggestions to the state and government departments for improving research assessment, says Li. Second, the CAS is reforming its own evaluation system.

Over the past 20 years, the CAS has transitioned from using quantitative indicators in research evaluation to focusing on major research outputs. It has found that this switch has had a positive but limited effect on researcher behavior, because researchers are still easily affected by external quantitative evaluations.

The CAS works with other institutions to promote research assessment reform, says Li. For example, the Chinese Science and Technology Evaluation Special Committee, affiliated with the CAS evaluation center, promotes good practice by organizing national seminars on research assessment, where participants learn from each other’s experiences. They also hold science and technology management practice seminars in collaboration with higher education institutions.

The CAS plans to continue serving as a national science and technology think tank by putting forward ideas to improve evaluation, such as discouraging evaluation practices that reward researchers with additional resources based on metrics.

Unlike the CAS with its dual roles, the Australian Academy of Science is an honor society, like the Royal Society in the United Kingdom. Because the Academy does not hire or promote researchers, its influence is largely as an exemplar in how it acknowledges the best and brightest in Australia. Barber says the Academy’s fellowship is not as gender diverse as it needs to be. To build a fellowship that is more representative of the Australian population, the Academy has developed a structured and explicit program of unconscious bias training that Sectional Committee members are required to attend. Barber says the Academy has received positive signals from committee chairs who feel members are being more reflective.

The University of Melbourne has campus-wide unconscious bias training. Some departments have also found success in separately shortlisting men and women for hiring. While there is no formalized training at the University of Tokyo, Gotoh agrees that acknowledging bias will lead to better assessment.

Barber believes shortlisting is a vital step in the selection of fellows, and in any faculty hiring process. DORA offers ideas to improve the triage phase of faculty hiring, including instilling standards and structure into the process through standardized narrative CV formats and assessment matrices. Selection committee chairs have the most power in the process, and Zobel challenges chairs to ensure assessment processes are open, fair, and consider all dimensions of a candidate. To do this, departments need to repeatedly test and evaluate their assessment practices.

The Australian Academy also wishes to recognize scholars from diverse scientific disciplines, including new and emerging fields. In doing so, Barber says, it is important to recognize how different fields view success. The Academy instructs selection committees to use peer review instead of metrics. Barber acknowledges the process is not perfect, but believes it provides a strong steer to selection committees to critically examine key publications and accomplishments. He believes that research assessment is a good topic for the Academy to address: because the Academy is not an academic employer, it occupies a more neutral space.
