Building community around institutional change

By Penny Weber — HuMetricsHSS

On June 1, 2022, the HuMetricsHSS Initiative partnered with the Declaration on Research Assessment (DORA) to bring together the HuMetricsHSS Community Fellows and the DORA Community Engagement Grant Awardees to share knowledge and strategies on their respective, but interconnected, areas of work. The meeting brought together 23 academics working on projects that aim to develop values-enacted frameworks or systems for evaluating scholarly work, to implement responsible research assessment practices, or both. After brief introductions, participants were split into breakout rooms, each containing a mix of HuMetricsHSS Community Fellows and DORA Community Engagement Awardees.

Although their research interests overlap considerably, the HuMetricsHSS Community Fellows are all based at institutions in the United States, while the DORA Community Engagement Awardees are based at institutions in Africa, Asia, Australia, and Latin America. Because systems of academic assessment vary widely among countries and global regions, including in how far they have adopted alternative, values-based, or responsible metrics, the combination of perspectives at this meeting was particularly fruitful for all participants.

In previous meetings of the HuMetricsHSS Community Fellows, conversation had centered on the opportunities and challenges each Fellow has faced at their institution while working to transform how responsible, equitable, and expansive evaluation is understood and practiced. Drawing on those conversations, discussion in the breakout rooms for this joint meeting focused on questions of trust: how to build trust in a specific community and context, how to maintain that trust once it’s built, and what one might need (in terms of time, resources, networks, etc.) to do both.

One major theme that emerged across the groups was transparency and the need for a shared vocabulary of evaluation. Many terms used in discussions of alternative or values-based assessment can mean different things to different communities. One example discussed was how using “qualitative” indicators without clearly defined evaluation guidelines and criteria can introduce the very “subjective” biases that evaluators are attempting to eliminate. Explicitly articulated, transparent evaluation criteria are therefore critical for defining values and expectations among organizations, evaluators, and applicants. Making evaluation criteria clear can help foster trust between parties and address unintended biases. A shared vocabulary, along with shared and transparent guidelines for evaluation, must be in place before trust can be built. Transparency of evaluation practices is also necessary for promoting trust within stakeholder communities outside of academia (e.g., patients and local communities).

Another major theme, related to transparency, was the importance of mentorship and guidelines for early-career scholars, as well as for those stepping into positions for the first time, such as department chairs. Current systems of evaluation tend to treat mentorship as tertiary service work, but mentorship is often where interpersonal trust is built, and interpersonal trust is in turn necessary to establish trust in institutions and systems. By simultaneously modeling and rewarding excellent mentorship, those in the academy can build a new generation of scholars, and eventual evaluators, with increased trust in their colleagues, themselves, and their institutions. This model also addresses a need stressed in the discussion: to rework evaluation on multiple levels at once. Change can happen neither solely from the top down nor solely from the bottom up; faculty and staff are conditioned to resist top-down “dictatorial” change, yet often feel they have little power themselves to enact it. Effective strategies for change must include offering resources and support to shift norms and guidelines at every level.

Finally, discussion also focused on the need for stable infrastructure to support new forms of assessment, designed to maintain momentum and foster trust within the academic community. Ideally such infrastructure would be formal (an office of responsible metrics, for example, or an institution paying faculty for their time as champions of responsible evaluation), but formal infrastructure requires funding, campus support, and other resources that the academics in the room may not have. In the absence of those resources, informal but fruitful collaborations and communities of practice can allow these champions to feel less alone in their work, provide perspective, and grant some measure of authority, or at least a scholarly community to point to when facing pushback.

We hope, going forward, to foster such scholarly communities of practice and to provide a space for champions, like the HuMetricsHSS Community Fellows and the DORA Community Engagement Awardees, to collaborate and learn from like-minded colleagues. To learn more about their work, please check out the HuMetricsHSS Community Fellows’ projects and the DORA Community Engagement Awardees’ projects.


This post originally appeared on the HuMetricsHSS page and has been reposted to the DORA website with permission.

Haley Hazlett, DORA’s Program Manager, provided input on this piece as an editor.

Haley Hazlett
Dr. Haley Hazlett has been DORA's Program Manager since 2021. She was a DORA Policy Intern before taking the role of Program Manager. She obtained her Ph.D. in Microbiology and Immunology in 2021 and is passionate about improving research culture for all researchers.
