Editors of scientific journals are arbiters of research rigor and quality. Accordingly, we sought input from editors, starting with journals that originate from learned scientific societies.
This personal perspective about AI was received from Seamus Martin, Editor in Chief of the FEBS Journal (FEBS is the Federation of European Biochemical Societies):
"From a scientific editor's perspective, we are carefully monitoring developments in this space for their potential impacts on data integrity, manuscript originality, confidentiality, and other aspects of the scientific publication process. A serious threat to the integrity of the scientific literature is the proliferation of 'paper mills' that will undoubtedly make use of LLMs and AI platforms to generate bogus scientific papers for financial gain.
I believe that this is potentially very dangerous technology that needs to be carefully regulated and monitored and kept in the public domain, insofar as possible. I think we need to be very careful about how AI is trained, deployed, and regulated and it is imperative that a serious discussion is had, at all levels of society, concerning how AI may impact human creativity, education, research, politics, communication, and other important spheres.
We should adopt a consensus AI model to train and use that is as open and transparent as possible. Such a platform should also create free tools that enable editors, educators, grant funding agencies and others to detect the use of AI in generating text and data using this platform. Outputs from AI platforms should also be quality-checked for validity and lack of bias.
We as scientists should preferentially interact with accredited AI platforms, which we agree upon, as we do with accredited scientific journals at present. I believe that it is imperative we know how AI systems generate their outputs, the source of the datasets they use, and that there are monitoring systems in place to ensure that such AI tools will not be used for destructive or divisive purposes, whether knowingly or not."
This comment was received from Alex Toker, Editor in Chief of the Journal of Biological Chemistry (published by the American Society for Biochemistry and Molecular Biology):
"At the JBC we have of course been thinking how to deal with the rise of AI in science, research and publishing. There are no easy solutions, and there is much to be gained and also be concerned with over the advance of generative AI. We recently opened a JBC editorial on the subject:
https://www.jbc.org/article/S0021-9258(23)02036-7/fulltext
"
This comment was received from Valda Vinson, Executive Editor of Science and of the Science family of journals published by the American Association for the Advancement of Science (AAAS). We informed her about the ongoing discussion and asked about the implications of publishing AI papers from private companies, which might not always share all the details of their methods. We also asked whether the scientific community should encourage nonprofit alternatives and transparency in AI efforts so that fundamental scientific advances can be replicated independently. She pointed out that the academic and nonprofit scientific sector can play an important role and can help define policy standards:
"Thank you for sharing this. We are navigating the challenges associated with developments in AI, with a goal of transparency and reproducibility. Any exceptions are carefully considered, including weighing the safety implications, the importance of the results to the scientific community and the extent to which any limit on transparency will impede reproducibility and building on the research. As this field evolves, we will also refine our procedures for considering exceptions. We are strongly supportive of AI research in the academic and non-profit sector and our policies will co-evolve with community standards."