Resource
Published — 2021

Inoculating Law Schools Against Bad Metrics

Journal articles | For: Research institutes

In their working paper, authors Kimberlee Gai Weatherall and Rebecca Giblin review the use of metrics in university and research policy, with a focus on Australia. The metrics reviewed include grant income, journal bibliometrics, and citations by authors and courts. The review aims to create awareness among legal scholars “of the full range of ways in which metrics are calculated, and how they are used in universities and in research policy.” The authors add: “What is more, despite a large and growing research policy literature and perhaps an instinct that metrics are inherently flawed as a means to recognize research ‘performance’, few researchers are aware of the full scope of known and proven weaknesses and biases in research metrics.”

The authors focus on two areas of research assessment: research impact, and the cluster of concepts variously described as mentorship, supervision, and leadership. They seek to reframe the discussion around the assessment of legal scholarship by asking different questions: not “what impact have you had?”, but “what have you done about your discovery?”; not “how have you been a leader?”, but “how have you enabled the research of others?”.

The authors also present concrete suggestions for how legal scholars and faculties can build a values-based environment for research assessment:

  1. Awareness-raising and collective action
    “raising understanding across the discipline of the problems we have highlighted around research assessment is, we think, the first step. The first time that many academics (including us) engage seriously with how their research will be assessed, or how they can tell a story that clearly articulates what research and translation they are doing, and how it all fits together, is when they apply for promotion or a grant. Not only is this often too late – retro-fitted narratives are never as convincing – but it also makes it more likely that the only kind of evidence people will reach for is whatever numbers they’ve been told about (and seen cited in institutional precedents). We freely admit that, even though we are both experienced researchers and research assessors, and one of us has even been an Associate Dean Research, we didn’t appreciate the full extent of the use of metrics, or the empirical work demonstrating the many problems with proxies, until we started systematically investigating them for this chapter. We strongly suspect we are not alone.”
  2. Social accountability
    “Something that has been shown to work, however, is social accountability. This tactic ‘plays on our need to look good in the eyes of those around us.’ Experiments have shown that, when people discover their decisions may be reviewed by people whose opinions they care about, it helps them make better judgments (whether it’s about the grade their students should receive, or raises for workers) and bias all but disappears. This leads us to wonder whether there may be a role for assessment of assessors.”
  3. Systems adjustment and alignment
    “What we are advocating however is that the systems and the various reporting obligations should be broadly consistent, and ask the same kinds of questions, so that researchers can at least re-use and tweak material rather than rewrite from scratch on a regular basis, and so it’s clearer to emerging researchers what’s expected of them from the get-go.”

Weatherall KG and Giblin R (2021) Inoculating Law Schools Against Bad Metrics. http://dx.doi.org/10.2139/ssrn.3772437