- 1R&C Research, Bovezzo, Italy
- 2Fondazione Achille Sclavo Onlus, Siena, Italy
- 3Department of Surgical, Oncological and Gastroenterological Sciences (DiSCOG), Padova University, Padova, Italy
- 4Istituto Superiore di Sanità, Rome, Italy
- 5Clinical Pharmacy Unit, School of Pharmacy, Jimma University, Jimma, Ethiopia
- 6Delle Medical Center, Rome, Italy
- 7Polyclinic University Hospital, Palermo, Italy
Introduction
In December 2021, one of the authors of the present paper (AR) took part in the peer review of the paper “Safety and immunogenicity of an inactivated virus particle vaccine for SARS-CoV-2, BIV1-CovIran: findings from double-blind, randomized, placebo-controlled, phase I and II clinical trials among healthy adults” for BMJ Open [1, 2]. The manuscript described the phase I and II clinical trials of the COVID-19 vaccine BIV1-CovIran, developed by the Shifa Pharmed Industrial Group. The article was accepted for publication in March 2022 after three review rounds involving a total of six reviewers. In May 2022, AR received an email from Yeganeh Torbati, a Washington Post reporter who was investigating the development of BIV1-CovIran. Torbati asked AR for a general opinion on the data presented in the article. AR replied that no serious anomalies had emerged, although he specified that the peer review process had been too superficial to guarantee complete integrity. Subsequently, in an article published in the Washington Post in August 2022, Torbati disclosed serious misconduct [3]. In support of her claims, an official correction was published in BMJ Open in November 2022, in which the authors were forced to admit various conflicts of interest and the occurrence of vaccine-related adverse effects [1]. The relevant fact is that six peer reviewers and one editor failed to uncover this hidden scenario. This is not intended to blame the journal or the reviewers, but to point out that scientific publishing is currently vulnerable to ethical violations. Although financial relationships can markedly bias biomedical research, this aspect receives only marginal attention [4, 5]. In this regard, this letter proposes a set of practices to counteract some major integrity problems.
What Can Authors Do?
A1. Authors should facilitate research reproducibility to boost peer-review speed and accuracy. This includes 1) sharing code, calculations, and data (both raw and processed), and 2) providing a step-by-step description of the ideas that led to the realization of the project, the implementation of the methods, and the procedures used to assess the tests’ assumptions.
A2. Authors should adopt frameworks for enhancing quality in preclinical data since this can significantly increase transparency and trust in results and allow errors to be prevented rather than detected too late [6].
A3. Regarding clinical trials, authors should publicly share audit/monitoring documents, information about the contract research organization (CRO) that monitored the study, and the data submitted to regulatory agencies (at least for clinical testing, which does not appear to threaten intellectual property or industrial secrets).
A4. Authors should release a preprint version so as to allow the scientific community to review the results independently and rapidly.
What Can Academic Journals Do?
J1. Journal editors should evaluate a paper’s health sensitivity and decide whether it addresses a high-sensitivity topic. All research involving novel drugs, vaccines, and therapeutic strategies should be considered high-sensitivity.
J2. For high-sensitivity topics, journals should make compliance with A1–A4 mandatory. Any draft that has passed peer review should be released at the moment of approval, explicitly labeled as an unedited peer-reviewed version. Reviewers’ reports and authors’ responses should always be published and made easily citable (e.g., via DOI). Reviewers’ names and affiliations should also be published unless they express reasonable fears for their safety; this should help reduce the problem of coercive citations [7]. Finally, journals should allow authors to publicly share editorial rejection decisions, including the reviewers’ reports.
J3. For high-sensitivity topics, journal editors should stratify the peer review to ensure the validity of the key elements. Alongside a general assessment, each methodological aspect (e.g., design, population, data collection, statistical analysis, and results) should be carefully evaluated by one or more independent subject-matter experts, especially when dealing with high-complexity data or multidisciplinary approaches. Regarding clinical trials, editors should involve experts to evaluate pharmacological and public health aspects (e.g., adverse reaction reports). Journals should also include a specific mandatory section in which reviewers declare the limitations of their review (e.g., “I’m not an expert on Bayesian methods”), so that editors and readers have a clear understanding of what the reviewers assessed. Finally, double-blind review should be required to reduce authorship bias [8].
J4. For high-sensitivity topics, journal editors should create a dedicated section staffed by two or more journalists experienced in detecting ethical violations. Such supervision should extend not only to the authors but also to the reviewers, who could deliberately influence the publication process. The inquiry must concern only researchers’ professional relationships and activities, without touching the private sphere, so as to safeguard their privacy. The journal should ask the journalists to sign a non-disclosure agreement covering the data they find and to guarantee the quality of the investigation.
J5. For high-sensitivity topics, journals should require that the data be suitably standardized to allow decentralized analysis through automated tools, software, or artificial intelligence algorithms [6, 9]. Specific guidelines should be provided to help authors comply with point A1, and the means for decentralized analysis should be provided to reviewers. Should regulatory agencies choose a unique international standard, journals would have to adhere to it.
J6. For high-sensitivity topics, journals should pay peer reviewers and editors. Paying peer reviewers (a sustainable practice, as shown by the editorial policies of various journals) would foster excellence through an economic reward proportional to the reviewer’s skill (a competition mechanism). One of the main obstacles to publication, namely the difficulty of finding available reviewers, would quickly be overcome. Scientists could take on this role on a permanent, ongoing basis, with the benefits of genuine professional employment. Paid work would also increase the actual accountability of peer reviewers and editors.
What Can Abstracting Service Groups Do?
I1. Abstracting services should introduce tiered indexing, granting the top rank only to academic journals that meet J1–J6. Since indexing in recognized databases is a source of prestige (so much so that, in most cases, journals devote a dedicated section of their websites to it), this would drive health journals to adjust to the new standards. It would also help the public identify the most authoritative and reliable journals. Similar initiatives are already underway [10].
What Can Regulatory Agencies, Funders, and Institutions Do?
R1. Funders and regulatory agencies should require authors’ compliance with points A1–A3.
R2. Regulatory agencies should agree on a unique international data standardization (see point J5) so as to strengthen and accelerate scrutiny by the whole scientific community.
R3. Institutions and employers should actively encourage and support scientific refereeing. Moreover, funders should be willing to provide additional funding to properly implement points J3, J4, and J6.
In conclusion, we ask the scientific community to take a clear position and make itself heard with a stentorian voice to protect public health from ethical misconduct. This renewal would benefit not only research itself but also the public image of the whole scientific world, thanks to a novel, more transparent, efficient, and effective procedure of academic publication. We are aware that these guidelines are tailored to the medical field and that some of our requests may not be applicable, or not stringent enough, in other contexts. Therefore, if needed, specific recommendations should be added or relaxed based on the research field.
Author Contributions
AR conceived the paper, fixed the driving ideas of the novel review system, formed the research team, and drafted all the manuscript versions. RG provided relevant literature and stressed the need to 1) divide the review into modular steps of quality control, 2) standardize data (based on Amaral’s work) [6], 3) adopt automatic data analysis tools (based on Amaral’s work) [6], and 4) verify audit/monitoring documents and the CROs involved in the studies. She also actively participated in drafting the manuscript and selecting the relevant information. AV contributed to examining the feasibility and sustainability of the review method in light of the scientific community’s available resources and the epistemological scenario of the current scientific publication system. In this regard, he also emphasized the need to make the concept of high-sensitivity topics exclusive to new therapeutic strategies. EM introduced and discussed the problem of high-complexity data and multidisciplinary approaches, and underlined the ethical need to determine the room for maneuver of fraud-busting journalists. He also examined the question of citation manipulation. Finally, he suggested that institutions actively encourage refereeing. BT proposed the participation of highly specialized public health and safety experts in the clinical data review process and helped define the role of fraud-busting journalists and draw the conclusion section. PM and VA contributed to the discussion and critical assessment of the general validity of the proposed rules. VA also suggested increasing the transparency and credibility of the peer review process by disclosing the names and affiliations of the reviewers. All authors contributed to the article and approved the submitted version.
Conflict of Interest
The authors declare that they do not have any conflicts of interest.
Acknowledgments
We thank Yeganeh Torbati for her help, her commitment, and her dedication to the search for truth. Special thanks from AR. We also sincerely thank Olavo Amaral for his invaluable indications and suggestions.
References
1. Mohraz, M, Salehi, M, Tabarsi, P, Abbasi-Kangevari, M, Ghamari, SH, Ghasemi, E, et al. Safety and Immunogenicity of an Inactivated Virus Particle Vaccine for SARS-CoV-2, BIV1-CovIran: Findings from Double-Blind, Randomised, Placebo-Controlled, Phase I and II Clinical Trials Among Healthy Adults. BMJ Open (2022) 12(4):e056872. Erratum in: BMJ Open (2022) 12(11):e056872corr1. PMID: 35396297; PMCID: PMC8995575. doi:10.1136/bmjopen-2021-056872
2. Peer Review History of “Safety and Immunogenicity of a Novel Inactivated Virus Particle Vaccine for SARS-CoV-2, BIV1-CovIran: Findings from Double-Blind, Randomised, Placebo-Controlled, Phase I and II Clinical Trials Among Healthy Adults” (2022). Available at: https://bmjopen.bmj.com/content/bmjopen/12/4/e056872.reviewer-comments.pdf (accessed Nov 13, 2022).
3. Torbati, Y. Amid Covid Surge, Iran Cut Corners to Approve Yet-Unproven Vaccine (2022). Available at: https://www.washingtonpost.com/world/2022/08/20/iran-covid-vaccine-approval/ (accessed Nov 13, 2022).
4. Bekelman, JE, Li, Y, and Gross, CP. Scope and Impact of Financial Conflicts of Interest in Biomedical Research: a Systematic Review. JAMA (2003) 289(4):454–65. PMID: 12533125. doi:10.1001/jama.289.4.454
5. Greenland, S. Accounting for Uncertainty about Investigator Bias: Disclosure Is Informative. J Epidemiol Community Health (2009) 63(8):593–8. PMID: 19596837. doi:10.1136/jech.2008.084913
6. Amaral, OB. To Fix Peer Review, Break it into Stages. Nature (2022) 611(7937):637. PMID: 36418447. doi:10.1038/d41586-022-03791-5
7. Singh Chawla, D. Elsevier Investigates Hundreds of Peer Reviewers for Manipulating Citations. Nature (2019) 573(7773):174. PMID: 31506633. doi:10.1038/d41586-019-02639-9
8. Jones, N. Authors’ Names Have ‘Astonishing’ Influence on Peer Reviewers. Nature (2022). [Epub ahead of print]. doi:10.1038/d41586-022-03256-9
9. Mrowinski, MJ, Fronczak, P, Fronczak, A, Ausloos, M, and Nedic, O. Artificial Intelligence in Peer Review: How Can Evolutionary Computation Support Journal Editors? PLoS One (2017) 12(9):e0184711. PMID: 28931033; PMCID: PMC5607159. doi:10.1371/journal.pone.0184711
10. Top Factor (2022). Available at: https://topfactor.org/ (accessed Dec 5, 2022).
Keywords: public health, ethics education in medicine and public health, peer review crisis, BIV1-CovIran, misconduct
Citation: Rovetta A, Garavaglia R, Vitale A, Meccia E, Terefe Tesfaye B, Mezzana P and Accurso V (2023) An Improved Peer-Review System to Compensate for Scientific Misconduct in Health-Sensitive Topics. Public Health Rev 44:1605601. doi: 10.3389/phrs.2023.1605601
Received: 18 November 2022; Accepted: 19 May 2023;
Published: 02 June 2023.
Edited by:
Sarah Mantwill, University of Lucerne, Switzerland
Copyright © 2023 Rovetta, Garavaglia, Vitale, Meccia, Terefe Tesfaye, Mezzana and Accurso. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
PHR is edited by the Swiss School of Public Health (SSPH+) in a partnership with the Association of Schools of Public Health of the European Region (ASPHER)
*Correspondence: Alessandro Rovetta, rovetta.mresearch@gmail.com