ChatGPT Cannot Be Credited as an Author, Says Academic Publisher
Springer Nature, the world's largest academic publisher, has clarified its guidelines on the use of AI writing tools in scientific papers. This week, the company announced that software like ChatGPT cannot be credited as an author in papers published in any of its thousands of journals. Springer says it has no problem with researchers using AI to help write papers or generate research ideas, as long as the authors properly disclose that contribution.
Magdalena Skipper, editor-in-chief of Nature, Springer Nature's flagship journal, tells The Verge, "We felt compelled to clarify our position: for our authors, for our editors, and for ourselves. This new generation of LLM tools, including ChatGPT, has really taken off in the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present."
A handful of papers, preprints, and scholarly articles have already been published that list ChatGPT and earlier large language models (LLMs) as authors. The nature and extent of these tools' contributions, however, vary from case to case.
In one opinion piece published in the journal Oncoscience, ChatGPT is used to argue for taking certain drugs in the context of Pascal's wager, with the AI-generated text clearly labeled. But in a preprint examining the bot's ability to pass the United States Medical Licensing Examination (USMLE), the only acknowledgment of the bot's involvement is a sentence stating that the software "contributed to the composition of various portions of this publication."
The preprint offers no further explanation of how or where ChatGPT was used to generate text. The Verge contacted the authors but did not receive a response in time for publication. However, Jack Po, CEO of Ansible Health, the healthcare company that funded the research, says the bot made significant contributions: "We designated [ChatGPT] as an author because we feel it truly contributed intellectually to the substance of the study and not only as a topic for its assessment."
Reaction in the scientific community to papers crediting ChatGPT as an author has been largely negative, with social media users calling the decision in the USMLE case "absurd," "silly," and "truly foolish."
According to Skipper and Springer Nature, one argument against granting AI authorship is that the software simply cannot fulfill the duties the role requires. "When we think of authorship of scientific and research papers, we don't just think about writing them," Skipper explains. There are responsibilities that extend beyond publication, and these AI tools are clearly not yet ready to take them on.
The software cannot be held accountable for a publication in any meaningful way, cannot assert intellectual property rights over its output, and cannot correspond with other scientists or the press to explain and answer questions about its work.
But even if there is broad consensus against granting AI authorship credit, the question of whether researchers may use AI tools to write a paper at all, even with proper attribution, is less settled. This is partly because of well-documented problems with these tools' output: AI writing software has a propensity to produce "plausible bullshit," false information presented as fact, and can reinforce societal prejudices like sexism and racism.