
AI tools not far away from being scary, need to get them right: OpenAI CEO

Artificial general intelligence (AGI) comes with serious risks of misuse, drastic accidents, and societal disruption

IANS New Delhi

OpenAI's ChatGPT

As ChatGPT takes the world by storm, Sam Altman, CEO of its creator OpenAI, has stressed that the world may not be "that far from potentially scary" artificial intelligence (AI) tools, and that it is important such AI chatbots are audited independently before they reach the masses.

According to Altman, artificial general intelligence (AGI) comes with serious risks of misuse, drastic accidents, and societal disruption.

"Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right," Altman said in a blog post.

At some point, he wrote, it may be important to get independent review before starting to train future systems, and "for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."

"We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it's important that major world governments have insight about training runs above a certain scale," Altman elaborated.

Companies are currently using ChatGPT for writing code, copywriting and content creation, customer support, and preparing meeting summaries.


The general public, on the other hand, is using AI chatbots to write essays, exam answers, poems and more.

According to Altman, OpenAI wants to successfully navigate massive risks.

"In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimise "one shot to get it right" scenarios," he noted.

OpenAI is now working towards creating increasingly aligned and steerable models.

"Our shift from models like the first version of GPT-3 to InstructGPT and ChatGPT is an early example of this," he said.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Feb 26 2023 | 3:07 PM IST
