Introduction

On 18 February 2020, an Opinion document (henceforth the Opinion) jointly released by the Ministry of Education and the Ministry of Science and Technology of China triggered widespread debate among Chinese scientists (Ministry of Education and Ministry of Science and Technology, 2020; Mallapaty, 2020). In the Opinion, the two ministries that steer the direction of scientific research and education in China officially discouraged the long-accepted practice of using the Science Citation Index (SCI) as the criterion for research assessment. The Opinion mandates that SCI-related indicators (such as the number of publications in SCI-indexed journals, the impact factors of those journals, and the number of citations of published works) should not be treated as direct evidence of scientific merit, that benchmarking activities such as ranking universities and departments need to be reduced, and that peer-review needs to be expanded and strengthened to give more voice to expert opinion rather than metrics (Ministry of Education and Ministry of Science and Technology, 2020; Mallapaty, 2020). The Opinion is the culmination of a series of official notices and instructions issued in recent years aimed at reforming and optimizing research evaluation in China (Chinese Communist Party Central Committee and State Council, 2018). It endeavors, foremost, to put a brake on the obsession with SCI papers and impact factors in current practice (Fig. 1).

Fig. 1: China’s presence in SCI journals has been steadily increasing, 2010–2019.

The yellow line indicates the percentage of contributions from China among SCI-indexed research outputs. The blue bars indicate the annual numbers of SCI-indexed papers from China.

The dilemmas of SCI-based research assessment in China

To the best of our knowledge, the Chinese government has never officially imposed the use of SCI in research assessment. Nonetheless, since its introduction to China in the 1980s, SCI has been used as an authoritative framework for evaluating research output and has played a vital role in deciding scientists’ chances of promotion and their access to funding, accolades and even cash rewards from their employers. The reasons why universities, funding agencies and scientific communities in China rely on SCI are mainly threefold: (a) to encourage Chinese scientists to produce work that meets international standards of excellence; (b) to prompt Chinese scientists to keep up to date with the latest scientific frontiers and cutting-edge research areas; and (c) to institute an objective and quantifiable set of criteria in evaluation practices. These objectives have been achieved to a certain extent, but severe side effects do exist. The first set of problems relates to the conduct of research per se, while the second, deeper set extends into a wider social controversy around the politics of knowledge production, i.e., concerns over which system of research assessment is best suited to social development and wellbeing in China.

SCI is not without shortcomings: a journal’s impact factor does not necessarily reflect its scientific value, for high-quality papers do not always appear in high-impact journals. High-quality research may also not immediately attract many citations, and impact factors and citation patterns vary greatly across disciplines. The same is true of a variety of other citation metrics (Van Noorden, 2010; Larsen and von Ins, 2010; Harzing, 2011). In China, an exclusive focus on impact factors and citation counts has in many ways deflected funding agencies’ attention from real breakthroughs and innovations. The emphasis on the number of publications also leads some researchers to pursue quantity at the expense of quality, resulting in large numbers of papers of mediocre scientific value (Sahel, 2011). Assessment based on SCI also marginalizes high-quality research published in non-SCI journals, including home-grown Chinese-language journals. The Opinion thus requires that neither the number of publications in SCI journals nor the journals in which papers are published be used as direct indicators of the quality of research.

Metrics may also incentivize (mis)conduct that diverges from standards of academic integrity, honesty and rigor (Tang, 2019). Forms of misconduct are not restricted to plagiarism and fraud in papers. For example, in 2017 Tumor Biology, published by Springer, retracted 107 papers, all by Chinese authors, because the authors had provided made-up contact information for potential reviewers and the review processes had been manipulated by third-party agencies that profit from “faking” peer review (Chen, 2017). In addition, certain journals have in recent years witnessed a concentration of works by authors from China. We suspect that this is due to closely knit networks of editors, reviewers and authors, which results in superficial peer-review, easy acceptance, and deliberate self-citation within the same journals to boost impact factors (see Guglielmi, 2019 for similar patterns of behavior among Italian scientists).

These points refer to concerns over the quality of research and the conduct of researchers, and they are corroborated by theoretical discussions of citation metrics. However, the bibliometrics literature offers little insight into a deeper reason that has driven the Chinese government to adopt a more cautious stance towards SCI. We surmise that it is related to concerns over the social relevance of science, a notion that scientific communities around the world are grappling with (Frodeman and Holbrook, 2011). The promotion of SCI, which started in the 1980s, ushered in a new system of research areas, questions and paradigms, one esteemed as international and global. SCI also values knowledge as crystallized in written form but does not directly measure the social wellbeing and benefits that knowledge generates. In parallel, an older and more local paradigm of research in China emphasized “knowledge in practice”, namely that the priority of research was to address the needs of social wellbeing and development. This is not to say that the two paradigms had no overlap, or that SCI allows no space for research that tackles urgent needs in China. However, discrepancies and conflicts are manifested in a number of ways.

For example, the emphasis on SCI publications may encourage scientists to spend most of their time writing while discouraging them from applying research findings to real-world problems. Research that stimulates new technologies, new products or good policies in the Chinese context may not belong to a cutting-edge area in the SCI context and may therefore struggle to make it into the pages of SCI journals. As a result, scientists who focus on the social value of research may not be properly rewarded by the system. Moreover, some fields of knowledge, such as traditional Chinese medicine, face disproportionate difficulties in fitting into international systems of knowledge. Finally, given that SCI papers are predominantly published in English and access is conditional on costly subscriptions, members of Chinese society face particular difficulties in accessing research findings in SCI journals, even when the research is funded by Chinese taxpayers (setting aside legally gray channels such as Sci-Hub and Library Genesis). This may severely constrain the ability of SCI papers to stimulate entrepreneurship, innovation and public policy in China.

As a consequence, the Opinion prescribes that when applied research and technological innovation are assessed, research publications should not serve as the sole basis of evaluation but should be considered alongside the contribution of the research to technical solutions, new products and new technologies. The timing of the Opinion appears to echo another official instruction issued by the Ministry of Science and Technology, which orders that research related to the recent outbreak of the novel coronavirus disease (COVID-19) should be directed primarily towards containing the virus and defeating the disease, rather than towards publishing papers (Ministry of Science and Technology, 2020a). The latter instruction was an immediate response to a flood of criticism on the Internet accusing some scientists of being more interested in publishing papers than in releasing knowledge to the public to better combat the virus and control the disease (Footnote 1).

Ways ahead and beyond SCI

Chinese scientists have welcomed the Opinion and commended it as timely and necessary, as expressed on social media, but they also feel that the importance of SCI should not be neglected or underplayed by universities and funding agencies. Although SCI is internally uneven, it still provides a broadly reliable reflection of the best-quality knowledge and research across the world. In fact, SCI is likely to be fairer, more transparent and more accurate than a home-made assessment method (Harzing, 2011). This is especially so for scientists who are early in their careers, less well resourced or without a well-established pedigree. It is thus vital for Chinese scientists to continue to participate in international publication and collaboration and to keep informed about cutting-edge research areas globally, although a shift in emphasis from quantity to quality is, in our view, a wise move.

Meanwhile, as a recent editorial in Nature suggests, the Chinese government hopes that non-SCI-centered research assessment will help expand the domestic publishing industry, within which many journals would be published in English to foster dissemination (Ministry of Science and Technology, 2020b). It appears that the government would like to oversee the growth of an alternative system of scientific knowledge, one that is more accessible and relevant to China, and in which Chinese scientists have more power to define hot zones of research. The number of papers published in these journals is likely to increase following the state’s mandate, but there is still a long way to go before the government’s ambition is realized. Domestic journals may simply give rise to a new system of metrics, and they will not necessarily be relevant to the needs of society if specific criteria are not in place to recognize social relevance as a key component of scientific merit. Challenges also lie ahead as to which standards and norms of editorial processes, peer-review and quality control are to be implemented, so that renewed interest in the domestic publishing market does not lead to compromised levels of research excellence in China.

Given that the government’s new stance towards SCI has created more questions than it has solved, we conclude this commentary by proposing several recommendations on how to create a less dogmatic and more flexible environment of research assessment in China, one that avoids the pitfalls mentioned earlier without isolating Chinese scientists from international standards of excellence and quality. First, we recommend the establishment of a diversified and adaptable framework of research assessment in China, one that combines metrics with qualitative, expert-opinion-based assessment of the quality and innovativeness of research. Even the use of SCI metrics can be made more flexible. For example, SCI is more helpful when comparing scientists within the same discipline than across different ones (note the need for metrics, SCI included, to be normalized across fields; see ref. 4), and funding agencies can allow a time window for citations to accrue instead of assessing freshly published papers. The system must also give more weight to contributions to social wellbeing, which can be measured in various ways, such as the relevance of research to the economic, social and technological needs of society and reports on the direct application of scientific knowledge. In fact, China’s dilemma over SCI is by no means unique, and there is plenty of international experience that the Chinese government can consult. For example, the UK’s Research Assessment Exercise (RAE) and its successor, the Research Excellence Framework (REF), have moved towards greater emphasis on expert review and social impact. The Chinese government also needs to make policies ensuring that research funded by Chinese taxpayers is made accessible to the public through an open-access mandate, as is widely practiced in European and North American countries.
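To make the point about field normalization concrete, the following is a minimal sketch in Python of a simple field-normalized citation score: each paper’s citation count is divided by the average citation count of papers from the same field and publication year. The paper records and field labels are entirely hypothetical, and real normalization schemes used in bibliometrics (such as the mean normalized citation score) involve further refinements, so this is only an illustration of the principle rather than any official method.

from collections import defaultdict

# Hypothetical paper records: (paper_id, field, year, citation_count).
# The numbers are invented purely to illustrate the normalization principle.
papers = [
    ("p1", "cell biology", 2018, 40),
    ("p2", "cell biology", 2018, 10),
    ("p3", "mathematics",  2018, 6),
    ("p4", "mathematics",  2018, 2),
]

# Compute the average citation count for each (field, year) as the baseline.
totals = defaultdict(lambda: [0, 0])  # (field, year) -> [citation sum, paper count]
for _, field, year, cites in papers:
    totals[(field, year)][0] += cites
    totals[(field, year)][1] += 1
baselines = {key: total / count for key, (total, count) in totals.items()}

# A score of 1.0 means "cited as often as the average paper in its field and year".
for pid, field, year, cites in papers:
    score = cites / baselines[(field, year)]
    print(f"{pid} ({field}): raw citations = {cites}, normalized score = {score:.2f}")

Under this toy normalization, a mathematics paper with 6 citations and a cell-biology paper with 40 citations receive comparable scores, which is the kind of cross-field adjustment that the recommendation above has in mind.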

Second, the government needs to give more space to scientists to evaluate their peers instead of imposing standardized and rigid guidelines. As of February 2020, the Ministry of Science and Technology had already issued another notice specifying three types of “high-quality papers”: papers published in domestic journals with an international reputation; papers published in top or important international journals; and papers included in top international conferences. It also prescribes that research output with applied value can be given an extra weight of 10–50% in assessment (14). These guidelines raise more questions than they answer: How is the international reputation of a domestic journal to be defined? Which international journals count as top or important? Which conferences are categorized as top conferences? Are papers published in such journals or venues all of high quality? And how is the 10–50% extra weight for applied research justified? Just like the dogmatic use of SCI-related metrics, state guidelines are most likely to result in rigid definitions and criteria and, worse still, in their dogmatic application in universities and state-led funding bodies. The government should leave it to scientists to decide what constitutes quality, innovation and social impact in their respective disciplinary contexts, and which kinds of indicators and criteria (including but not restricted to SCI-related ones) they would like to draw on.

Finally, given the great emphasis that the new policies place on peer-review and evaluation, an immense challenge lies before the Chinese government and scientists: how to maintain the quality and integrity of peer-review. In congruence with international practice, funding agencies and journals in China do require reviewers to report conflicts of interest and personal relationships. However, whether this requirement has been strictly enforced remains an open question. Moreover, the social and cultural order in China is built on intersecting guanxi networks, and peer-review may be co-opted by these networks and degenerate into rent-seeking, compromising the objectivity and impartiality expected of it. We suggest that universities and funding agencies in China treat misconduct in peer-review as a specific form of scientific misconduct. Wrongdoing such as violations of anonymity, acceptance of advantages offered by a reviewee, and failure to report conflicts of interest needs to be properly reported, investigated and penalized. Peer-review in China also needs to involve international reviewers in addition to domestic ones, both to reduce the influence of personal favors on review processes and to help ensure that scientists within and outside China engage in dialog over shared scientific concerns and research questions.