Gathering input for an online dashboard highlighting good practices in research assessment

Introduction to Project TARA

As institutions experiment with and refine academic assessment policies and practices, there is a need for knowledge sharing and tools to support culture change. On September 9, 2021, we held a community call to gather early-stage input for a new resource: an interactive online dashboard to identify, track, and display good practices for academic career assessment. The dashboard is one of the key outputs of Tools to Advance Research Assessment (TARA), a DORA project that aims to facilitate the development of new policies and practices for academic career assessment. TARA is sponsored by Arcadia, a charitable foundation that works to protect nature, preserve cultural heritage, and promote open access to knowledge.

Overview of the dashboard

It comes as no surprise that academic assessment reform is complex. Institutions are at different stages of readiness for reform and have implemented new practices in a variety of academic disciplines, career stages, and evaluation processes. The dashboard aims to capture this progress and provide a counter-mapping to common proxy measures of success (e.g., Journal Impact Factor (JIF), H-index, and university rankings). Currently, we envision that the general uses of the dashboard will include:

  • Tracking policies: Collecting academic institutional standards for hiring, promotion, and tenure.
  • Capturing new and innovative policies: Providing a way to share new assessment policies and practices.
  • Visualizing content: Displaying source material in ways that reveal patterns and trends in assessment reform.

Because the dashboard will highlight positive trends and examples in academic career assessment, it is important to define what constitutes good practice. One idea comes from the 2020 working paper from the Research on Research Institute (RoRI), in which the authors define responsible research assessment as “approaches to assessment which incentivize, reflect and reward the plural characteristics of high-quality research, in support of diverse and inclusive research cultures.”

Desired use of the dashboard: who will use it and how?

We envision a number of situations where the dashboard can support research assessment efforts. For example, a postdoc entering the academic job market may use the dashboard to identify assessment norms and emerging trends, whereas a department chair may find the dashboard helpful for finding examples of how contributions to research integrity and open science can be evaluated in an upcoming faculty search.

Throughout the meeting, we heard from attendees that the dashboard should be useful to as wide a range of individuals as possible. Specifically, it should include source material that is relevant to users at all stages of professional development and in all geographic locations. It is especially important to account for regional context, given that different countries may have national policies that affect the extent of institutional reform.

These points led to a deeper discussion of the need to consider user experience. For example, we heard that it is important for users to be able to apply filters when searching for different types of source material on the dashboard, because filters allow users to identify material that is relevant to their unique context (e.g., stage of professional development or global region).
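As a purely illustrative sketch (not a description of DORA’s actual implementation), the snippet below shows how dashboard records tagged with metadata such as career stage and region could be filtered; all field names, values, and the filtering function are assumptions made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class SourceMaterial:
    """A hypothetical dashboard record; field names are illustrative assumptions."""
    title: str
    material_type: str                                     # e.g., "policy", "framework", "tool"
    career_stages: set[str] = field(default_factory=set)   # e.g., {"postdoc", "faculty"}
    regions: set[str] = field(default_factory=set)         # e.g., {"Europe", "Latin America"}

def filter_materials(records, career_stage=None, region=None, material_type=None):
    """Return records matching every supplied filter (None means 'any')."""
    results = []
    for r in records:
        if career_stage and career_stage not in r.career_stages:
            continue
        if region and region not in r.regions:
            continue
        if material_type and r.material_type != material_type:
            continue
        results.append(r)
    return results

# Example: a postdoc in Europe looking for relevant hiring policies.
records = [
    SourceMaterial("Example hiring policy", "policy", {"postdoc"}, {"Europe"}),
    SourceMaterial("Example assessment framework", "framework", {"faculty"}, {"North America"}),
]
print([r.title for r in filter_materials(records, career_stage="postdoc", region="Europe")])
```

In practice, the facets exposed to users would depend on the metadata available in the source material itself, which is one reason consistent tagging of dashboard entries came up in the discussion of user experience.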

Dashboard functionality: what types of source material can be included in the dashboard?

The next phase in dashboard development is identifying source material. We learned about important considerations to keep in mind when deciding what types of material to include in the dashboard. Moving away from a narrow set of evaluation criteria is essential for holistic research assessment, but we also heard that new evaluation criteria need to be considered carefully to avoid capturing biased or bad practices. This is an instance where applying a shared definition of good practice in academic assessment can be useful in selecting source material. Another idea is to include a wide variety of materials in the dashboard, such as responsible research assessment frameworks, policies, and tools. Here we heard that unintended blind spots may influence the selection of source material. To this point, non-traditional work (e.g., teaching, outreach, diversity and equity work, community engagement) is often suppressed by faculty in their tenure portfolios out of fear of penalization, which can lead to perceptions during evaluation that are skewed toward institutional norms.

Another challenge is the lack of clarity around what the term “impact” means in academic assessment. Impact is difficult to define and is often conflated with citation metrics. In addition to the dashboard, we aim to develop resources, as part of Project TARA’s associated toolkit, that articulate how impact is captured and considered. A central theme of the conversation was the need to clearly articulate the meaning of “impact” in academic evaluation criteria. Participants suggested the dashboard would benefit from including holistic evaluation practices that recognize and reward a variety of researcher contributions. This could include policies that recognize preprints, news coverage, white papers, teaching, mentorship, and research that addresses major national, international, or societal issues.

We learned from participants that users may also be interested in understanding how universities account for impact that is specifically local in nature (e.g., contributing to governmental policy formation and local engagement). Here it was reiterated that an important mechanism for capturing impact outside of academia is the use of narrative material (e.g., governmental policies or briefs, articles on local engagement activities and locally focused research, reports on quality of engagement drafted by patient advocacy groups). While outputs like these are useful for capturing important qualitative information about a researcher’s impact, tracking and generating them can be challenging for universities. Indeed, a similar challenge for the dashboard is determining how to capture the rich qualitative information stored in research assessment policies and practices. To gather a more holistic picture of researchers for assessment, institutions must recognize and reward a broader range of academic outputs. Doing so would, by extension, provide the dashboard with more robust source material on assessment practices.

Next steps

The input gathered during the community call is being used to refine DORA’s vision and thinking for the dashboard. The next phase of dashboard development is identifying source material and determining data structure and visualization. DORA is organizing a November workshop to examine what makes “good” practice in research assessment and to surface source material for the dashboard.

Haley Hazlett is DORA’s Program Manager.
