The COVID-19 pandemic has changed many facets of the research ecosystem, from interruptions in research and career advancement to shifts in research dissemination and evaluation. It has also revealed and accentuated flaws in academic recognition and reward systems: traditional hiring, promotion, and tenure practices that prioritize publication quantity and journal metrics tend not to reward the activities the pandemic has shown to be critical, such as transparency, reproducibility, and data sharing.
Interest in COVID-19 data from the public, policy makers, and fellow researchers alike has spurred conversations about responsible use and increased recognition of data-sharing efforts. Resources have been established to track COVID-19-related retractions, to facilitate rapid review, and to study the responsible use of preprints in light of increased public attention. How these efforts will change academic and funder reward systems is not yet known. To help answer this question, DORA and the Hong Kong Principles for Assessing Researchers held a joint webinar on May 13, 2021, to better understand whether the pandemic might change how data sharing is recognized and rewarded when assessing researchers.
In this webinar, a panel of experts explored the outcomes of making data available for re-use and openly sharing research products. The panelists included Olavo Amaral of the Instituto de Bioquímica Médica Leopoldo de Meis in the Universidade Federal do Rio de Janeiro, Brazil; David Carr of the Wellcome Trust, United Kingdom; Daniella Lowenberg of Make Data Count, United States; and Rhoda Wanyenze of Makerere University School of Public Health, Uganda. The panel was moderated by Anna Hatch, DORA’s Program Director, and David Moher of the Hong Kong Principles for Assessing Researchers.
What data-sharing challenges have been highlighted by the COVID-19 pandemic?
As an institutional leader, Wanyenze pointed out several challenges highlighted by the COVID-19 pandemic. For example, some authors state that data are available upon request but then do not respond to requests for access. Additionally, available data are often aggregated, making reanalysis more difficult. Requiring data to be uploaded to a repository in an appropriate format can enable researchers to build on the work of others more efficiently.
From a funder's perspective, Carr discussed how the pandemic has focused attention on the importance of openness and the rapid sharing of findings via preprints, noting broad support from both publishers and other funders. Despite the increase in preprints, Carr said it is difficult to predict whether these changes will be sustained beyond the pandemic. Wellcome is working with recipients of COVID-19 research funding to put its commitment to responsible data sharing into practice. One challenge has been that some researchers who received COVID-19 funding lack knowledge of best practices and resources, so Wellcome is developing guidance on good data stewardship and the responsible rapid sharing of findings.
Should data sharing be included in research assessment and, if so, how?
Lowenberg discussed the work at Make Data Count toward developing metrics to analyze the reach and impact of data sharing. She believes incentives are critical to improving data sharing. A key issue is the misalignment of reward systems and research practices: the actions that advance research (e.g., transparency, data sharing, development of data-sharing infrastructure) do not necessarily lead to career progression under traditional academic assessment systems.
Amaral provided a researcher's perspective. He began by highlighting the multiple ways in which data can be made accessible, and thus the multiple types of data sharing that could be rewarded in academic assessment processes: raw data used to generate a manuscript; pure data not associated with a scientific paper, which may be shared in online repositories managed by institutions or outside organizations; and structured data uploaded into a data repository that enables public access (e.g., the Johns Hopkins COVID-19 map). Creating the infrastructure to share structured data with the public is time-consuming and difficult work that holds great societal value but is not necessarily rewarded.
As an institutional leader, Wanyenze agreed that data sharing should be included in research assessment, for example by building recognition of data sharing into faculty promotion policies. To this end, institutions must be incentivized to build the capacity to track and quantify data sharing. One such incentive is that students and staff benefit from open sharing of data on an institutional data-sharing platform. However, this assumes the institution has the budget and expertise to build and manage such a platform; although increased access to data for students and institutional researchers may be an appealing prospect, the cost of such an endeavor may be prohibitive. Wanyenze noted that this is an important consideration in the global south and for all institutions that lack capacity. Institutions without the infrastructure and budget for data management must rely on those that have them, and the resulting loss of data autonomy may lead to inequities in access. She suggested that research funders could implement policies and funding that incentivize data sharing across departments and institutions.
Has the pandemic changed how data sharing is valued and recognized in research assessment?
Giving a funder’s perspective, Carr said that one direct impact of the pandemic on how data sharing is valued and recognized at Wellcome has been the focus on data availability and other research outputs for COVID-19-related funding: applicants are now required to include an output management plan. Although the pandemic has increased the visibility and recognition of data sharing in new funding applications and awards, the extent to which a track record of previous data sharing will be recognized is not yet known. Carr suggested that broader moves toward a holistic approach to research assessment (e.g., recognizing a wider range of research outputs) reinforce these pandemic-related changes.
Lowenberg pointed out that the pandemic has increased public awareness of the importance of data sharing, and suggested that public recognition of its value will continue to grow. However, the gap between acknowledging that data sharing is important and being willing to invest in the infrastructure it requires will slow progress. Lowenberg also cautioned that “value” means different things in different contexts, and that open sharing of data does not always translate to equitable sharing of data. For example, many open access journals charge publication fees, which can be a barrier for researchers and institutions that cannot afford to pay. She stressed that we cannot build incentive structures that recognize only a privileged subset of data-sharing practices.
What is the largest obstacle preventing organizations from taking action on data sharing?
The obstacle the panelists highlighted most was the lack of examples demonstrating the benefits of change. At the institutional level, Amaral pointed out that uptake of data-sharing policies is highly heterogeneous across institutions, research departments, funding agencies, and publishers. Some universities have championed the recognition and support of data sharing through the creation of infrastructure and recognition systems. Amaral suggested that this heterogeneity can be put to good use by highlighting early adopters of these policies as examples. Documenting such examples can help institutions move from intention to action. As a start, case studies collected by DORA, the European University Association, and SPARC Europe examine institutional change efforts to improve academic assessment, and the Hong Kong Principles are also curating examples of good assessment practices. Even so, more specific examples are needed to fully understand the benefits of data sharing and to encourage other institutions to take action.
Haley Hazlett is DORA’s Policy Intern.