The precise path taken to implementing DORA will depend on the history and organisational idiosyncrasies of each institution. Nevertheless, in most cases a sensible first move is to establish an internal working group or committee to consider how best to infuse the spirit of the declaration throughout the institution. This is the approach we took at my university, Imperial College London, after we signed DORA in January 2017.
When scientists publish a journal article, they are doing more than just disseminating their work: they’re attaching it to a journal title that will, rightly or wrongly, telegraph signals about its quality. Preprints help to unbundle communication from the other functions of journal publishing, and they allow evaluators—funders, hiring committees, and potential mentors—to read a candidate’s most recent work.
Unfortunately, science is rife with examples where research assessment diminishes diversity. Hiring, promotion, and grant decisions are made with incomplete information that is also poorly predictive of success—the perfect conditions for bias to emerge.
Scientific societies have a key role to play in changing and improving the assessment of researchers. Many are respected publishers of quality content, and their journals are recognized as such without reliance on journal impact factors. Societies also help shape the scientific culture of their disciplines, including norms around ethics, authorship, and outreach, through discussions at meetings and in career workshops.
There is widespread recognition that the research culture in academia requires reform. Hypercompetitive vying for grant funding, prestigious publications, and job opportunities fosters a toxic environment and distracts from the core value of the scientific community: a principled search for increasingly accurate explanations of how the world works.
Having committed to hiring and developing early-career scientists, departments and institutions are best served by making the tenure process as transparent and consistent as possible to ensure their success. One mechanism to accomplish this is to allow untenured faculty to discuss and vote on the tenure files of more senior faculty members.
Faculty often cite concerns about promotion and tenure evaluations as important factors limiting their adoption of open access, open data, and other open scholarship practices. We began the review, promotion, and tenure (RPT) project in 2016 in an effort to better understand how faculty are being evaluated and where there might be opportunities for reform. We collected over 800 documents governing RPT processes from a representative sample of 129 universities in the U.S. and Canada.
Universities cannot achieve their missions and visions if their stated values are out of line with research assessment policies and practices. Although most university mission statements specify research, teaching, and public service as their central commitments, contributions to research are often valued at the expense of teaching and public service. How serious is this misalignment and what can be done about it?
Scientific societies have a vested interest in research assessment as standard bearers for their profession. We represent members at all career stages and in many different career paths. Societies have multiple roles in the assessment infrastructure: they publish scientific journals; they host large and small meetings; they provide professional development training; they give recognition through awards and fellowships; and they set standards for the profession. Collectively, this gives societies a variety of leverage points to effect change.
The conundrum is easy to understand: conventional teaching assessments rely heavily on student feedback, which, whether delivered as metrics or narrative comments, is often fraught with bias. Teaching is even harder to assess when it takes place in "engaged" settings outside the classroom (e.g., in medical schools, in association with patient care).