The U.N. Security Council for the first time held a session on Tuesday on the threat that artificial intelligence poses to international peace and stability, and Secretary General António Guterres called for a global watchdog to oversee a new technology that has raised at least as many fears as hopes.
Mr. Guterres warned that A.I. may ease a path for criminals, terrorists and other actors intent on causing “death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale.”
The launch last year of ChatGPT — along with other generative A.I. tools that can produce text from prompts, mimic voices and generate photos, illustrations and videos — has raised alarm about disinformation and manipulation.
On Tuesday, diplomats and leading experts in the field of A.I. laid out for the Security Council the risks and threats — along with the scientific and social benefits — of the emerging technology. Much remains unknown about it even as its development speeds ahead, they said.
“It’s as though we are building engines without understanding the science of combustion,” said Jack Clark, co-founder of Anthropic, an A.I. safety research company. Private companies, he said, should not be the sole creators and regulators of A.I.
Mr. Guterres said a U.N. watchdog should act as a governing body to set, monitor and enforce rules for A.I. in much the same way that other agencies oversee aviation, climate and nuclear energy.
The proposed agency would consist of experts in the field who shared their expertise with governments and administrative agencies that might lack the technical know-how to address the threats of A.I.
But the prospect of a legally binding resolution on governing A.I. remains distant. The majority of diplomats did, however, endorse the notion of a global governing mechanism and a set of international rules.
“No country will be untouched by A.I., so we must involve and engage the widest coalition of international actors from all sectors,” said Britain’s foreign secretary, James Cleverly, who presided over the meeting because Britain holds the rotating presidency of the Council this month.
Russia, departing from the majority view of the Council, expressed skepticism that enough was known about the risks of A.I. to treat it as a threat to global stability. And China’s ambassador to the United Nations, Zhang Jun, pushed back against the creation of a set of global laws, saying that international regulatory bodies must be flexible enough to allow countries to develop their own rules.
The Chinese ambassador did say, however, that his country opposed the use of A.I. as a “means to create military hegemony or undermine the sovereignty of a country.”
The military use of autonomous weapons — on the battlefield, or in another country for assassinations, such as the satellite-controlled A.I. robot that Israel dispatched to Iran to kill a top nuclear scientist, Mohsen Fakhrizadeh — was also brought up.
Mr. Guterres said that the United Nations must come up with a legally binding agreement by 2026 banning the use of A.I. in autonomous weapons of war.
Prof. Rebecca Willett, director of A.I. at the Data Science Institute at the University of Chicago, said in an interview that in regulating the technology, it was important not to lose sight of the humans behind it.
The systems are not entirely autonomous, and the people who design them need to be held accountable, she said.
“This is one of the reasons that the U.N. is looking at this,” Professor Willett said. “There really needs to be international repercussions so that a company based in one country can’t destroy another country without violating international agreements. Real enforceable regulation can make things better and safer.”