On Wednesday, February 12, 2020, DORA hosted a community interview with Chris Pickett, the Director of Rescuing Biomedical Research (RBR), a non-profit initiative dedicated to addressing systemic flaws in the United States (US) biomedical research system. In the interview, Pickett discussed hypercompetition, training grants, and why faculty search committees should look beyond funding records when selecting candidates for a job.
The Problem with Hypercompetition
Many of the systemic flaws embedded in today’s biomedical research system stem from hypercompetition. Competition, Pickett points out, was incorporated into the modern research system as a way to elicit the best science and to promote the best scientists. As scientists compete for grant money, jobs, and publishing opportunities, the system can—in principle—select and promote the best research and researchers.
Over the last 15 years, however, a culture of hypercompetition has developed due to several simultaneous and opposing trends. The number of people earning a PhD in the natural sciences each year increased by 66% worldwide (by 44% in the US) from 2005 to 2017; meanwhile, the availability of research funding has failed to keep up with demand. For example, from 2002 to 2014, success rates for research grants from the National Institutes of Health (NIH) and the National Science Foundation (NSF), both major US funders, dropped by 39% and 24%, respectively. In addition, most postdocs still aspire to a tenure-track faculty position, even though the number of such positions is declining.
These trends have led to a climate of hypercompetition, harming both the progress of research and the scientists conducting it, Pickett emphasizes. Instead of improving the system, hypercompetition creates additional burdens. For example, hypercompetition for grants forces researchers to dedicate large amounts of time and effort to securing funding, restricting the time available for conducting research. This ultimately undermines the progress of biomedical research and the clinical innovations that research enables.
Preferential Hiring
Hypercompetition also affects the diversity of the biomedical workforce, because it creates an environment that permits biased decision-making. When hiring committees are overwhelmed by high numbers of applicants, they have less time to conduct a careful evaluation of each candidate, and cognitive biases can influence their decisions. These include halo effects (assuming that one accolade is indicative of others), availability heuristics (giving more weight to information that is easily recalled, such as an applicant having earned a specific grant), and ingroup favoritism (preferring people similar to ourselves).
The Success Triangle
One can look at the three elements needed for career success in science—employment, funding and publications—as three corners of a triangle, according to Pickett. Once a researcher has achieved success in one corner, success in another corner often follows as a result. For example, grant money enables scientists to undertake new research projects, and projects lead to publications. On the flip side, it’s hard to conduct research and produce publications without grant funding. Yet employers and funders require publications as an indication of scientific talent, and without enough of them to demonstrate one’s talent and potential, it is very difficult to make progress in one’s career.
Unfair Advantage?
Pickett found that it is becoming increasingly important for researchers to secure funding before obtaining a faculty position. Specifically, the ‘K99’ grant from the NIH is viewed by many US-based postdocs as a critical step toward attaining a faculty position. Holding a ‘K99’ grant affords a researcher the advantage of being able to self-fund their transition from a postdoc position to an independent role as a principal investigator, at which point the grant converts to an ‘R00’. Essentially, the researcher is then able to fund their own work as they establish their lab—an advantage for the hiring institution. While this allows for career stability and mobility, it also creates a divide in the financial attractiveness of applicants: those without such a grant present a less compelling profile to a potential employer. The long-term implications of this dynamic can be explained in part by the ‘Matthew Effect’, wherein researchers who have already received recognition for excellent work continue to accumulate accolades at a higher rate than their equally accomplished peers. There are other problems with using training awards as a proxy measure of researcher quality, too, as doing so leaves room for unintended biases such as halo effects and availability heuristics.
To understand how the influence of training awards on career progression may have changed over time, Pickett turned to NIH RePORTER, a large public database of funded grants. Specifically, he wanted to understand how receiving an early-career training award from the NIH might affect a researcher’s chance of receiving funding at the point of establishing independence. He examined data from 2000 to 2017 on the allocation of grants at the training and newly independent investigator stages, and found an increasing association between receipt of a training award and later receipt of an early-independence award. At the same time, when looking at funding allocated to independent investigators, he observed a steeper decline in success among applicants who had never received an NIH training award than among those who had.
The Problem with Preferring Grantees
One possible explanation for the observed trends in the hiring of K99/R00 grantees, according to Pickett, is that some universities may preferentially hire researchers who have a grant in hand. This could be problematic: basing a hiring decision partly on a grant award overlooks the importance of other skills, including teaching, mentorship, and service. According to Pickett, when hiring, we have the chance to choose our future colleagues and thereby bring valuable talents to a department. But to do that, we need to understand all aspects of applicants’ portfolios, including their soft skills.
Good Practices Are the Way Forward
As we often say at DORA, it comes down to the need for a holistic assessment process when making hiring decisions. We know there’s no silver bullet to cure the ills of the biomedical research system, nor those of the wider research system. But to start chipping away at these systemic issues, the best tools we have at hand are the examples of changes that organizations around the world have already made. We’ve documented many such examples in our collection of good practices. While these practices may not be one-size-fits-all solutions, they shine a light on different approaches the research community can take to improve research assessment.
While top-down approaches can be useful, Pickett points out, hiring decisions are ultimately made at the department level, not at the institutional level. Institutional rules are necessarily ‘one size fits all,’ but how those rules are implemented must vary with the needs of each department. Pickett emphasizes that “at the level of departments, that is where the work is going to be done.” He encourages people at all levels to raise their voices and change their departments from within. For example, by asking senior department members how decisions are made, postdocs can raise awareness of ingrained biases and influence the attitudes of others. But the onus for changing the status quo shouldn’t rest only on younger people—anyone with their eyes open to what’s going on should ask questions, start discussions, and encourage others to see things through a different lens.
Helen Sitar is a Science Policy Programme Officer at EMBO and DORA’s Community Coordinator