Artificial Intelligence Joins UN Agenda as a Global Governance Priority

Artificial intelligence (AI) is set to join the list of urgent global challenges addressed at this week’s United Nations high-level meetings, as world leaders and diplomats advance plans for its governance amid growing concerns over safety, ethics, and accountability.

Since ChatGPT’s debut nearly three years ago, AI’s rapid expansion has drawn global attention. Developers race to advance the technology, while experts warn of risks including engineered pandemics, large-scale disinformation campaigns, and other existential threats. Previous multilateral efforts — including AI summits hosted by Britain, South Korea, and France — have produced only non-binding pledges.

Last month, the U.N. General Assembly adopted a landmark resolution creating two bodies for AI governance: a Global Forum and an independent scientific panel of experts. These institutions represent the largest and most formal multilateral attempt to date to establish a global framework for AI oversight.

On Wednesday, the U.N. Security Council will convene an open debate on AI governance, asking: “How can the Council help ensure the responsible application of AI to comply with international law and support peace processes and conflict prevention?”

On Thursday, Secretary-General António Guterres will launch the Global Dialogue on AI Governance — a platform for governments and stakeholders to share ideas and solutions. The Forum will meet formally in Geneva in 2026 and in New York in 2027.

Recruitment will begin for 40 members of the scientific panel, including two co-chairs — one from a developed country and one from a developing nation. Comparisons have been drawn with the U.N.’s Intergovernmental Panel on Climate Change and its annual COP meetings.

“This is a symbolic triumph,” said Isabella Wilkinson, research fellow at Chatham House. She added, however, that “in practice, the new mechanisms look like they will be mostly powerless,” questioning whether the U.N.’s process can match the speed of AI development.

Ahead of the meeting, a coalition of experts — including senior staff from OpenAI, DeepMind, and Anthropic — called for governments to adopt “red lines” for AI by the end of next year. They argue for “minimum guardrails” to prevent “the most urgent and unacceptable risks” and for an internationally binding agreement on AI, comparable to treaties banning nuclear testing and biological weapons.

“The idea is very simple,” said Stuart Russell, AI professor at the University of California, Berkeley. “As we do with medicines and nuclear power stations, we can require developers to prove safety as a condition of market access.” He proposed a “framework convention” flexible enough to adapt to AI’s rapid evolution and suggested a model similar to the International Civil Aviation Organization, ensuring global coordination and shared standards.

AI’s arrival on the U.N. agenda marks a critical step in defining its global governance, but it remains unclear whether the proposed mechanisms will deliver effective oversight before risks outpace regulation.
