Research assessment as a human-centered design problem

By Ruth Schmidt (Institute of Design, IIT)

The term “design” often conjures images of tangible stuff, such as iPhones, interiors, fashion, or kitchenware. The discipline of “human-centered design,” however, typically focuses on more intangible problems related to how people live and work, using a design lens and tools to develop new or better experiences, services, and even entire systems.

Many of these intangible problems are “ABC” challenges: ambiguous, behavioral, and rooted in complex systems. Viewed through this lens, one can make a compelling case that research assessment is, in fact, a human-centered design issue:

It’s ambiguous — The journal impact factor (JIF) and citation counts are intended to serve as proxy measures for research quality, but the attributes of what good looks like are diverse and sometimes indirect. From one perspective, adherence to sound research methodology may signal quality; from another, it may be the significance of results; from still another, it may be the novelty of looking at a known problem in a new way. Quality is also a moving target: true impact may take years or even decades to manifest, and may be hidden in plain sight. The list of Nobel Prize winners whose seminal work was initially rejected by journals (often multiple times) is lengthy.

It’s behavioral — The research process is full of metrics, rewards, and incentives, and what’s measured and rewarded has an outsized impact on our choices and actions. Offering compensation for high-JIF publications or forming citation circles explicitly taps into these incentives, but more tacit knowledge about what counts toward tenure and promotion decisions is also highly likely to sway our actions or constrain the options we even consider pursuing. This can be especially pernicious when biases reinforce or amplify inequities that already exist within systems, or when prioritizing scholarly metrics limits research’s ability to productively contribute to the public good.

It’s a complex system — Research and research assessment—what gets funded, who collaborates with whom, and how value is generated and captured—play out in an international, interconnected, and tangled web of entities and individuals, each with their own interests and expectations. This complex system may strive for agnostic objectivity, but too often it acts as a conduit that channels existing forces and assets. When control is unevenly distributed, the tendency to “feed the beast” may emerge in the form of assessment mechanisms that reward established players at the expense of more junior members.

*

As a former design strategist, I have seen firsthand how human-centered design strategies help organizations become more innovative by directly addressing the reality that innovation activities tend to conflict with common organizational conditions that support and reward the status quo, such as prioritizing easily captured, quantifiable, short-term metrics like ROI. The challenge of rethinking research assessment is, in some significant ways, strikingly similar, suggesting that human-centered design strategies such as framing, mapping system flows, and designing futures might help us gain purchase on this challenge as well:

  1. Framing forces specificity and provides a North Star set of principles to align around by concretely articulating three complementary perspectives:

Institutional framing focuses on top-down systems or organizational-level goals, often aiming for well-established and quantified metrics. JIF and citation scores, in fact, are exemplars—if also cautionary tales—of this type of frame. As we have seen, relying exclusively on institutional frames can be problematic when metrics are used as a proxy for quality at the expense of more meaningful measures of value, or when adherence to institutional-level goals neglects the important nuances and needs of those on the ground.

User framing centers the problem to be solved on the latent needs of the people who participate in a system. In contrast with more pragmatic institutional frames, user framing aims to surface more oblique thinking about what we’re even solving for, and is typically informed by insights derived from ethnographic-style engagements with users. In a research assessment context, this might mean more qualitative and flexible ways to gauge research quality, or prioritizing research that positively impacts the public good, encouraging researchers to tackle personally compelling and societally meaningful problems rather than chasing citations.

Behavioral framing situates us in the specific behaviors we want to shift or cultivate, with the goal of defining conditions that account for the many cognitive biases affecting human judgment, decision-making, and action. In the case of research assessment, overcoming challenges such as quantification fallacy, social proof, or anchoring requires applying a behavioral lens to a variety of participants—researchers, assessors, funders—to understand which contextual conditions contribute to current behaviors and to suggest behaviorally informed solutions.

  2. Flows: Where framing forces us to define intent, mapping flows and exchanges of capital between entities can help us understand more concretely how those systems function. Researchers, universities, funding bodies, and journals exchange both tangible and intangible assets such as money, intellectual property, prestige, and faculty appointments, yet each entity controls, or has visibility into, only a finite portion of the overall network. Taking a systemic approach and tracing flows between entities can indicate where value is created but not captured, for example, and can surface critical leverage points where small adjustments have the potential to yield outsized benefits.
  3. Futures: In the same way that organizations trying to innovate must shift from short-term metrics to longer-arc measures of progress, taking a more longitudinal view can inform new ways of evaluating and rewarding research activities and outcomes. Twenty years ago, social media in its current form didn’t exist, let alone as a factor in research assessment or impact; ignoring the role of open access and non-academic channels of dissemination today risks being willfully naive. Applying a futures lens also mitigates the potential for unintended consequences, in which well-meaning solutions accidentally create perverse incentives or repercussions beyond their intended scope. Design’s “futuring” tools can help us actively cultivate forward-facing perspectives on technologies and emergent social norms, ensuring that solutions have meaningful staying power and relevance.

*

Change often requires a leap of faith in addition to an investment of effort, and it is difficult enough within individual organizations, where performance reviews, hierarchy, and social norms can be used to incentivize behavior or set the conditions for it. It’s far more challenging in a loose network or community of practice like academic science, and will require a multi-faceted approach. Human-centered design is well positioned to supplement the ongoing work of sharing best practices and specific, successful examples of new research assessment strategies: it contributes a deep understanding of what matters to individuals and entities, and a perspective on realigning incentives, social norms, and points of leverage so that we might redefine and reward what’s valued in the future.
