DORA stands for the San Francisco Declaration on Research Assessment. It originated in 2012 from a group of journal editors and publishers who recognized a pressing need to improve the ways research output is evaluated. The Declaration evolved into a global initiative established to advance practical and robust approaches to research and researcher assessment across all scholarly disciplines and worldwide. Read the Declaration, or visit our About page to learn more about DORA as an organization.
Research assessment is the process used to review research and researchers’ contributions, evaluating their quality, value, and relevance for decisions like hiring, promotion, and funding. DORA advocates for reform because the current system often relies on a narrow, simplistic set of evaluative metrics (indicators) that do not satisfactorily capture the quality, utility, integrity, and diversity of research. This misuse of metrics distorts incentives and creates unsustainable pressures on researchers. If you’d like to dive deeper into this, we recommend taking a look at our Introductory Course.
No, DORA does not advocate for eliminating all quantitative data. Rather, qualitative assessment should be supported by the responsible use of quantitative indicators, not led by them. DORA and other research assessment initiatives call for quantitative indicators to be used transparently, contextually, and fairly. Inappropriate journal- and publication-based metrics should be avoided, as should the use of organization or department rankings or league tables. This approach has been called a movement towards “Responsible Research Assessment” (RRA).
We have produced guidance on the use of several indicators (sometimes called metrics) used in research assessment: the Journal Impact Factor and other measurements of journals, citation counts, h-index, field-normalized citation indicators, and altmetrics. Five principles guide the use of these metrics: be clear, be transparent, be specific, be contextual, and be fair.
Responsible Research Assessment (RRA) is an umbrella term for approaches to evaluation that incentivize, reflect, and reward the diverse and high-quality characteristics of research. The goal of RRA is to encourage assessment methods that take a holistic view of researchers and of the processes, outputs, outcomes, and societal impacts of research. RRA emphasizes transparent, open, and fair criteria. You can read more about it in this 2022 working paper.
RRA encourages organizations to recognize and reward diverse scholarly activities, beyond traditional peer-reviewed research articles. These valued contributions include, but are not limited to:
- Diverse research outputs like datasets, software, code, and protocols.
- Diverse contributions to the research community (e.g., peer review, mentoring, team science).
- Engagement activities, such as those that influence policy and practice or societal interactions.
- Activities supporting research integrity and reproducibility (e.g., transparent sharing of methods/results).
Revising assessment procedures must be a shared responsibility and requires a systems approach. Change must be collaborative, involving:
- Researchers, when serving on committees, talking about their achievements, supervising other researchers, and reviewing proposals and applicants;
- Research Performing Organizations (RPOs), such as universities and research centers, when defining policies and procedures for hiring, promotion, tenure, prizes, and awards, evaluating departments, and communicating about research;
- Research Funding Organizations (RFOs), when establishing funding programmes, making funding decisions, setting grant conditions, and monitoring and evaluating their activities;
- Learned societies and policymakers, when defining quality and excellence in their respective knowledge and innovation disciplines and systems.
RRA is deeply interconnected with, and supports, broader efforts to improve academic culture. For example, RRA aligns with Open Science by valuing diverse outputs (like datasets and code), fostering transparent practices, and recognizing the time and effort required for openness. RRA also upholds research integrity by focusing on rigor, ethics, and reproducibility in designing and performing research, rewarding transparent reporting of methods and results (including null results), and considering the Hong Kong Principles for assessing researchers.
The Journal Impact Factor (JIF) is an indicator defined as the average number of citations received in a given year by articles a journal published in the two preceding years. DORA recommends eliminating the use of journal-based metrics, such as the JIF, in funding, appointment, and promotion considerations. The JIF is fundamentally a measure of the journal, not of the quality of the scientific content of an individual research article or a scientist’s contribution. You can review a few articles about it:
DORA and the RRA movement encourage a focus on the intrinsic merit of the work, rather than reliance on the journal title or JIF. Assessment should focus primarily on qualitative evaluation, where peer review and expert judgment are central. This qualitative approach can be complemented by the responsible use of quantitative indicators. Institutions should strive to recognize and reward a broad range of contributions and outputs, moving away from a narrow focus on quantity to valuing quality.
DORA provides a range of resources, tools and frameworks, and supports the scholarly community through activities focused on community engagement, shared learning, resource development, partnership, advising, and convening. It also supports communities with awareness raising of RRA, including in multiple languages and focussed on different regional contexts.
- Course: DORA’s self-paced introductory course on Responsible Research Assessment offers foundational knowledge, practical tools, and real-world examples to help individuals across the research ecosystem understand and apply RRA principles.
- Resources: DORA creates and curates collections of good practices, tools, and frameworks accessible through our Resource Library. We also maintain a repository of Case Studies highlighting how various organizations are implementing change. Finally, we developed Reformscape, a catalogue of criteria and standards academic institutions use for hiring, review, promotion, and tenure.
- Blog: The DORA Blog offers timely insights and reflections on research assessment reform and related movements such as open science and reforming scholarly publishing. It features updates from the community, practical examples of policy change, and news and announcements for those navigating their commitment to DORA principles.
- Communities: DORA fosters communities of practice, such as the Funder Discussion Groups, to increase communication and knowledge-sharing about new policies and practices. It also convenes key parties within focussed regional/national contexts to raise awareness of RRA and support the exploration of how to accelerate progress towards RRA within that community.
DORA welcomes signatures from both individuals and organizations. Organizational signatories may include scholarly societies, publishers, institutions, funders, and metrics providers – any entity involved in research assessment. Learn more and sign here.
Signing DORA is a public commitment. As of January 1, 2025, organizations signing the Declaration must submit a link to their public statement outlining their commitment to implementing research assessment reform in order to be approved. Signing should be viewed as only a potential first step; organizations must then develop concrete plans to improve their assessment policies and practices. Learn more here.
Yes, absolutely. DORA is not a single organization working in isolation; it is a core part of a much larger, global movement striving to modernize the research system. Several major organizations and consortia work specifically to coordinate and accelerate research assessment reform internationally:
- Latin American Forum on Research Assessment (FOLEC-CLACSO): Since 2019, FOLEC-CLACSO has promoted regional guidelines and action to strengthen and develop research assessment in Latin America, advocating for open and public knowledge models.
- Coalition for Advancing Research Assessment (CoARA): Established in 2022, CoARA asks signatories to commit resources to improving research evaluation and to develop Action Plans that are then shared publicly. As of early 2025, the Coalition had over 800 signatories to its underpinning agreement.
- Global Research Council (GRC) Working Group on Responsible Research Assessment: This association of national public research funding organizations runs an RRA initiative that guides participants worldwide in implementing RRA principles in their own practices.
Several countries also have national programs uniting funders and institutions, such as The Dutch Recognition & Rewards Programme. You can read more about this in the Global Young Academy report The Future of Research Evaluation.