ChatGPT can't be credited as an author on research papers: Springer Nature

According to Springer Nature, the world's largest academic publisher, software like ChatGPT cannot be credited as an author on papers published in its journals

IANS New Delhi
OpenAI's ChatGPT

AI tools such as ChatGPT threaten transparent science, according to Springer Nature, the world's largest academic publisher. The publisher has laid down ground rules for their use, saying software like ChatGPT cannot be credited as an author on papers published in its journals.

First, no large language model (LLM) tool will be accepted as a credited author on a research paper.

"That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility," said Nature in an article.

Second, researchers using LLM tools or AI chatbots should document the use in the methods or acknowledgements sections.

"If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM," said the publisher.

The AI chatbot ChatGPT has brought the capabilities of such tools, known as LLMs, to a mass audience.

ChatGPT can write presentable student essays, summarise research papers, answer questions well enough to pass medical exams and generate helpful computer code.

It has produced research abstracts good enough that scientists found it hard to spot that a computer had written them.

"Worryingly for society, it could also make spam, ransomware and other malicious outputs easier to produce. Although OpenAI has tried to put guard rails on what the chatbot will do, users are already finding ways around them," said the report.

That is why Nature is setting out these principles.

"Ultimately, research must have transparency in methods, and integrity and truth from authors. This is, after all, the foundation that science relies on to advance," the report mentioned.

--IANS


(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Jan 27 2023 | 12:26 PM IST
