UN urges global action on AI's potential and risks
A man faces "Ai-Da," a realistic artist robot powered by artificial intelligence, at a stand during the International Telecommunication Union (ITU) AI for Good Global Summit, Geneva, Switzerland, May 30, 2024. (AFP Photo)

Amid the rapid evolution of artificial intelligence, a top U.N. official underscored the urgent need for responsible governance to steer its course toward collective prosperity

Humanity is in a race against time to harness the colossal emerging power of artificial intelligence for the good of all, while averting dire risks, a top U.N. official said.

"We've let the genie out of the bottle," said Doreen Bogdan-Martin, head of the United Nations' International Telecommunications Union (ITU).

"We are in a race against time," she told the opening of a two-day AI for Good Global Summit in Geneva.

"Recent developments in AI have been nothing short of extraordinary."

The thousands gathered at the conference heard how advances in generative AI are already speeding up efforts to solve some of the world's most pressing problems, such as climate change, hunger and social care.

"I believe we have a once-in-a-generation opportunity to guide AI to benefit all the world's people," Bogdan-Martin told Agence France-Presse (AFP) ahead of the summit.

But she lamented Thursday that a third of humanity remains completely offline and is "excluded from the AI revolution without a voice."

"This digital and technological divide is no longer acceptable."

Bogdan-Martin highlighted that AI holds "immense potential for both good and bad," stressing that it was vital to "make AI systems safe."

Concentrated power

She said that was especially important given that "2024 is the biggest election year in history," with votes in dozens of countries, including the United States.

She flagged the "rise of sophisticated deep fakes disinformation campaigns" and warned that the "misuse of AI threatens democracy (and) also endangers young people's mental health and compromises cybersecurity."

Other experts at Thursday's conference agreed.

"We have to understand what we're steering toward," said Tristan Harris, a technology ethicist who co-founded the Center for Humane Technology.

He pointed to lessons from social media – initially touted as a way to connect people and give everyone a voice, but which also brought addiction, viral misinformation, online harassment and ballooning mental health issues.

Harris warned that the incentives driving the companies rolling out the technology risked dramatically amplifying such negative impacts.

"The number one thing that is driving Open AI or Google behavior is the race to actually achieve market dominance," he said.

In such a world, he said, "governance that moves at the speed of technology" is vital.

Changing social contract

OpenAI chief Sam Altman, who rose to global prominence after OpenAI released ChatGPT in 2022, acknowledged the dangers.

Speaking via video link, he told the gathering that "cybersecurity" was currently the biggest concern when it came to negative impacts of the technology.

Further down the road, he said there would likely "be some change required to the social contract, given how powerful we expect this technology to be."

"I'm not a believer that there won't be any jobs. But I do think the whole structure of society itself will be (open to) some degree of debate and reconfiguration."

Overall, though, he insisted that, measured against how new technologies have historically evolved, AI systems were "generally considered safe and robust."

While welcoming discussions around regulations to stem short-term negative impacts of AI, he warned that it was "difficult" to suggest regulations aimed at reining in future impacts.

"We don't know how society and this technology are going to co-evolve," he said.

Bogdan-Martin, meanwhile, welcomed the fact that governments and others had recently "raced to establish protections" and regulations around the use of AI.

On Wednesday the European Union announced the creation of an AI Office to regulate artificial intelligence under a sweeping new law.

"It's our responsibility to write the next chapter in the great story of humanity and technology, and to make it safe, to make it inclusive and to make it sustainable," Bogdan-Martin said.