The artificial intelligence (AI) chatbot ChatGPT, which has taken the world by storm, has made its formal debut in the scientific literature, racking up at least four authorship credits on preprints and published articles.
Journal editors, researchers and publishers are now debating whether it is appropriate to credit the bot as an author, and the place of such AI tools in the published literature more broadly. Publishers are racing to draw up policies for the chatbot, which was released as a free-to-use tool in November by the San Francisco, California-based software firm OpenAI.
Publishers and preprint servers contacted by Nature’s news team agree that ChatGPT and other AIs do not meet the criteria for a research author, because they cannot be held accountable for the content and integrity of scientific papers. However, some publishers say that acknowledging an AI’s contribution to a paper’s writing in sections other than the author list is acceptable. (Nature’s news team is editorially independent of its journal team and of its publisher, Springer Nature.)
In one instance, an editor told Nature that ChatGPT had been listed as a co-author in error, and that the journal would correct the mistake.
ChatGPT is one of 12 authors on a preprint about using the tool for medical education, posted on the medical repository medRxiv in December last year.
The team behind the repository and its sister site, bioRxiv, is discussing whether it is appropriate to use and credit AI tools such as ChatGPT when writing papers, says co-founder Richard Sever, assistant director of Cold Spring Harbor Laboratory Press in New York. Conventions might change, he adds.
The formal authorship of an academic paper must be distinguished from the broader notion of an author as the writer of a document, Sever says. Only people should be listed, he argues, because authors take on legal responsibility for their work. Of course, people may try to sneak it in, as has already happened at medRxiv, much as people have in the past listed pets and fictional characters as authors on journal articles; but that is a matter of checking rather than of policy, he says. (Victor Tseng, the preprint’s co-author and medical director of Ansible Health in Mountain View, California, did not respond to a request for comment.)
An editorial in the journal Nurse Education in Practice this month credits the AI as a co-author, alongside Siobhan O’Connor, a health-technology researcher at the University of Manchester, UK. Roger Watson, the journal’s editor-in-chief, says the credit slipped through in error and will soon be corrected. “It was an oversight on my side,” he says, because editorials go through a different management system from research papers.
ChatGPT was also listed as a co-author on a perspective article in the journal Oncoscience last month, says Alex Zhavoronkov, chief executive of Insilico Medicine, an AI-powered drug-discovery company based in Hong Kong. He says his company has published more than 80 papers produced with generative AI tools, and that his team has expertise in the field. The latest article discusses the pros and cons of taking the drug rapamycin, in the context of a philosophical argument known as Pascal’s wager. ChatGPT wrote a much better article than previous generations of generative AI tools had, Zhavoronkov says.
He says he asked the editor of Oncoscience to peer review the manuscript. The journal did not respond to Nature’s request for comment.
A fourth article, co-written by an earlier chatbot called GPT-3 and posted on the French preprint server HAL in June 2022, will soon be published in a peer-reviewed journal, says co-author Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden. One journal rejected the paper after review, she says, but a second accepted it, with GPT-3 listed as an author, after she rewrote the article in response to reviewer requests.
The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT does not meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be properly applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors who use LLMs in any way while writing a manuscript should document that use explicitly in the methods or acknowledgements sections, if appropriate, she says.
Holden Thorp, editor-in-chief of the Science family of journals in Washington, DC, states that “we would not allow AI to be named as an author on a paper we published, and usage of AI-generated language without proper citation may be considered plagiarism.”
The publisher is currently reviewing its policy, says Sabina Alam, head of publishing ethics and integrity at Taylor & Francis in London. She agrees that authors are responsible for the validity and integrity of their work, and that any use of LLMs should be acknowledged. Taylor & Francis has not yet received any submissions that list ChatGPT as an author.
The board of the physical-sciences preprint server arXiv has held internal discussions and is beginning to converge on an approach to the use of generative AIs, says scientific director Steinn Sigurdsson, an astronomer at Pennsylvania State University in University Park. He agrees that a software tool cannot be an author of a submission, in part because it cannot consent to the terms of use and to the right to distribute content. Sigurdsson says he is not aware of any arXiv preprints that list ChatGPT as a co-author, and adds that guidance for authors is on its way.
The ethics of generative AI
There are already clear authorship criteria that mean ChatGPT should not be credited as a co-author, says Matt Hodgkinson, a research-integrity manager at the UK Research Integrity Office in London, speaking in a personal capacity. One criterion is that a co-author must make a “substantial scholarly contribution” to the article, which he notes might be possible with tools such as ChatGPT. But a co-author must also be able to agree to be a co-author and to take responsibility for a study, or at least for the part it contributed to. It is on this second requirement, he says, that the idea of granting an AI tool co-authorship runs into trouble.
Zhavoronkov says his attempts to get ChatGPT to write papers more technical than the perspective article he published were unsuccessful. Ask it the same question more than once, he says, and it will likely give you different answers. “It does quite frequently return the things that are not necessarily accurate.” And because people without subject-matter expertise could now attempt to write scientific papers, “I will undoubtedly be concerned about the misuse of the system in academia.”