Having committed to the hiring and development of early career scientists, it is in the best interest of departments and institutions to make the tenure process as transparent and consistent as possible to ensure success. One mechanism to accomplish this is to allow untenured faculty to discuss and vote on the tenure files of more senior faculty members.
Faculty often cite concerns about promotion and tenure evaluations as important factors limiting their adoption of open access, open data, and other open scholarship practices. We began the review, promotion, and tenure (RPT) project in 2016 in an effort to better understand how faculty are being evaluated and where there might be opportunities for reform. We collected over 800 documents governing RPT processes from a representative sample of 129 universities in the U.S. and Canada.
Universities cannot achieve their missions and visions if their stated values are out of line with research assessment policies and practices. Although most university mission statements specify research, teaching, and public service as their central commitments, contributions to research are often valued at the expense of teaching and public service. How serious is this misalignment and what can be done about it?
Scientific societies have a vested interest in research assessment as standard-bearers for their profession. We represent members at all career stages and in many different career paths. Societies have multiple roles in the assessment infrastructure. They publish scientific journals; they host large and small meetings; they provide professional development training; they give recognition through awards and fellowships; and they set standards for the profession. Collectively, this gives societies a variety of leverage points to effect change.
The conundrum is easy to understand: Conventional teaching assessments rely heavily on student feedback, which, whether through metrics or narrative comments, is often fraught with bias. It is even more difficult to assess teaching when done in “engaged” settings, not in the classroom (e.g., for medical schools, in association with patient care).
Research assessment can be extremely useful as a tool to evaluate the quality and the impact of the activities needed to achieve the Sustainable Development Goals (SDGs). The SDGs, adopted in 2015 by the General Assembly of the United Nations, envision a transformed world by the year 2030.
Current discourse on research assessment places high emphasis on “impact.” However, there are many different concepts of impact, and many different concepts of how research achieves impact. The resulting ambiguity and confusion confound efforts to improve research assessment. To reliably assess research, we need clarity about what it is we want to assess.
Human-centered design is well-positioned to supplement the ongoing activity of sharing best practices and specific, successful examples of new research assessment strategies. It contributes a deep understanding of what matters to individuals and entities, a perspective on realigning incentives and social norms, and points of leverage where we might redefine and reward what’s valued in the future.
Much of the emphasis of DORA’s initiatives has revolved around appropriate metrics and assessments. Equally important is designing mechanisms that employ those assessments at decision-making steps.
On most campuses, the coalition (faculty, research officers, administrators, librarians, and department chairs) needed to move the needle on research assessment reform has yet to come together. Librarians have every reason to take an active role in helping make this happen.