Diane Bartz & Jeffrey Dastin
US lawmakers are grappling with what guardrails to put around burgeoning artificial intelligence, but months after ChatGPT got Washington’s attention, consensus is far from certain.
Interviews with a US senator, congressional staffers, AI firms and interest groups show there are a number of options under discussion.
Some proposals focus on AI that may put people’s lives or livelihoods at risk, like in medicine and finance. Other possibilities include rules to ensure AI isn’t used to discriminate or violate someone’s civil rights.
Another debate is whether to regulate the developer of AI or the company that uses it to interact with consumers. And OpenAI, the startup behind ChatGPT, has discussed a standalone AI regulator. It’s uncertain which approaches will win out, but some in the business community, including IBM and the US Chamber of Commerce, favor regulating only critical areas like medical diagnoses, which they call a risk-based approach.
If Congress decides new laws are necessary, the US Chamber’s AI Commission advocates that “risk be determined by impact to individuals,” said Jordan Crenshaw of the Chamber’s Technology Engagement Center.
The AI hype has led to a flurry of meetings, including a White House visit this month by the CEOs of OpenAI, its backer Microsoft, and Alphabet. President Joe Biden met with the CEOs.
Congress is similarly engaged, say congressional aides and tech experts.
“Staff broadly across the House and the Senate have basically woken up and are all being asked to get their arms around this,” said Jack Clark, co-founder of AI startup Anthropic. “People want to get ahead of AI, partly because they feel like they didn’t get ahead of social media.”

As lawmakers get up to speed, Big Tech’s main priority is to push against “premature overreaction,” said Adam Kovacevich, head of the pro-tech Chamber of Progress.
And while lawmakers like Senate Majority Leader Chuck Schumer are determined to tackle AI issues in a bipartisan way, the fact is Congress is polarised, a presidential election is next year, and lawmakers are occupied with other pressing issues, like raising the debt ceiling.
The risk-based approach means AI used to diagnose cancer, for example, would be scrutinised by the Food and Drug Administration, while AI for entertainment would not be regulated. The European Union has moved toward passing similar rules.
But the focus on risks seems insufficient to Democratic Senator Michael Bennet, who introduced a bill calling for a government AI task force. Risk-based rules may be too rigid and fail to pick up dangers like AI’s use to recommend videos that promote white supremacy.
Legislators have also discussed how best to ensure AI is not used to racially discriminate, perhaps in deciding who gets a low-interest mortgage, a source said.
At OpenAI, staff have eyed broader oversight.
Cullen O’Keefe, an OpenAI research scientist, proposed the creation of an agency that would mandate that firms obtain licenses before training powerful AI models or operating the data centers that facilitate them.
Some Republicans may balk at any AI regulation.
“We should be careful that AI regulatory proposals don’t become the mechanism for government micromanagement of computer code like search engines and algorithms,” a Senate Republican aide said. (Reuters)