
AI tech like ChatGPT can be used to turbocharge fraud: FTC chair Lina Khan

Khan also warned that AI's ability to turbocharge fraud should be considered a "serious concern," reports TechCrunch

IANS San Francisco


US Federal Trade Commission (FTC) Chair Lina Khan has warned that modern AI technologies like ChatGPT can be used to "turbocharge" fraud.

At a Congressional hearing on protecting consumers from fraud and other deceptive practices, Khan and her fellow commissioners warned House representatives of the risks posed by AI technologies.

"AI presents a whole set of opportunities, but also presents a whole set of risks," Khan told the House representatives.

"I think we've already seen ways in which it could be used to turbocharge fraud and scams. We've been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action,a she stated.

Khan also warned that AI's ability to turbocharge fraud should be considered a "serious concern," reports TechCrunch.

The agency launched a new Office of Technology (OT) in February with the goal of supporting the agency's law enforcement and policy work by offering in-house technical expertise.

OpenAI's ChatGPT may also aid scammers and create new mobile threats.

ChatGPT, the AI-driven chatbot that gives human-like answers to questions, is also being used by cyber criminals to develop malicious tools that can steal user data.

The first such instances of cybercriminals using ChatGPT to write malicious code were recently spotted by Check Point Research (CPR) researchers.

In a bizarre incident, the AI chatbot ChatGPT, as part of a research study, recently and falsely included an innocent and highly respected US law professor in a list of legal scholars who had sexually harassed students in the past.

Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was left shocked when he realised ChatGPT had named him in a research project on legal scholars who had sexually harassed someone.

"ChatGPT recently issued a false story accusing me of sexually assaulting students," Turley had posted in a tweet.

Brian Hood, regional mayor of Hepburn Shire in Australia, also threatened to sue OpenAI if the Microsoft-backed company did not correct false information about him.

ChatGPT reportedly named Hood as a convicted criminal in a real, past bribery scandal involving the Reserve Bank of Australia (RBA).

--IANS


(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Apr 19 2023 | 2:24 PM IST
