Summary
The High-Level Expert Panel on Artificial General Intelligence (AGI), convened by the UN Council of Presidents of the General Assembly (UNCPGA), has released its final report, “Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly,” which outlines recommendations for the global governance of AGI.
The panel, chaired by Jerome Glenn, CEO of The Millennium Project, includes leading international experts, such as Renan Araujo (Brazil), Yoshua Bengio (Canada), Joon Ho Kwak (Republic of Korea), Lan Xue (China), Stuart Russell (UK and USA), Jaan Tallinn (Estonia), Mariana Todorova (Bulgaria Node Chair), and José Jaime Villalobos (Costa Rica), and offers a framework for UN action on this emerging field.
The report has been formally submitted to the President of the General Assembly, and discussions are underway regarding its implementation. While official UN briefings are expected in the coming months, the report is being shared now to encourage early engagement.
Source: Millennium Project
OnAir Post: UNCPGA report on AGI Governance
News
The international conversation on AI is often confusing because different kinds of AI are fused under a single overarching term. There are three kinds of AI: narrow, general, and super AI, with some grey areas in between. Clarifying these distinctions is essential because each type has very different impacts and vastly different national and international regulatory requirements.
Without national and international regulation, it is inevitable that humanity will lose control of what will become a non-biological intelligence beyond our understanding, awareness, and control. Half of the AI researchers surveyed by the Center for Humane Technology believe there is a 10 percent or greater chance that humans will go extinct from our inability to control AI. But, if managed well, artificial general intelligence could usher in great advances in the human condition—from medicine, education, longevity, and turning around global warming to advances in scientific understanding of reality and creating a more peaceful world. So, what should policymakers know and do now to achieve the extraordinary benefits while avoiding catastrophic, if not existential, risks?
About
More Information
The report warns that AGI—defined as AI systems capable of matching or surpassing human intelligence across a broad range of tasks—could emerge within this decade. While AGI holds the potential to accelerate scientific discovery, advance public health, and help achieve the Sustainable Development Goals, it also poses unprecedented risks, including autonomous harmful actions and threats to global security.
The panel calls for urgent, coordinated international action under UN leadership, including a dedicated UN General Assembly session on AGI, the establishment of a global observatory, a certification system for safe AGI, and consideration of a UN Convention and international agency to ensure responsible development and equitable distribution of AGI benefits.
We invite you to read the report, share it with your press representatives, and circulate it within your Ministries of Foreign Affairs. You are also welcome to write about it or bring public attention to its findings and recommendations.
This is an important step toward shaping international cooperation on one of the most transformative technologies of our time.
Source: The Millennium Project
UN CPGA Report
AGI UNCPGA Report
UNCPGA Report Authors
Source: UNCPGA Report
Jerome Glenn (USA), Chair
IEEE Organizational Governance of AI Voting Member; author of the European Union’s Horizon 2025-27 paper on AGI: Issues and Opportunities; CEO of The Millennium Project and author of its International Governance Issues of the Transition from Artificial Narrow Intelligence to AGI, Requirements for Global Governance of AGI, and Work/Technology 2050: Scenarios and Actions. Author of Future Mind: Artificial Intelligence (1989).
Renan Araujo (Brazil)
Research Manager at the Institute for AI Policy and Strategy (IAPS), focusing on risk management related to AGI development, where he currently leads IAPS’s work on international AGI governance. He is an Oxford China Policy Lab Fellow, a lawyer, and co-founder of the Condor Initiative (which connects Brazilian students with world-class opportunities to shape AI research and policy), and has worked on AI governance programs at Rethink Priorities and the Institute for Law and AI.
Yoshua Bengio (Canada)
Professor of computer science at Université de Montréal; Chair of the Safe and Secure AI Advisory Group for the Canadian government; Chair of the International AI Safety Report mandated by 30 countries plus the UN, OECD, and EU; Scientific Director of Mila, the Quebec AI Institute; member of the UN Secretary-General’s Scientific Advisory Board for Breakthroughs in Science and Technology; recipient of the Turing Award and currently the most cited computer scientist worldwide.
Joon Ho Kwak (Republic of Korea)
Technical advisor to the Korean AI Safety Institute; played a leading role in developing the OECD’s Guidelines for Developing Trustworthy AI; participant in the G7 Hiroshima Process, the Paris AI Action Summit preparations, and the Korea-US AI Working Group; and member of the Korean delegation to the International Network of AI Safety Institutes.
Lan Xue (China)
Chair of China’s National Expert Committee on AI Governance; Dean of the Institute for AI International Governance at Tsinghua University; member of the Advisory Group of the STI Directorate of the OECD; advisor to the China AI Safety Institute; Co-Chair of the Leadership Council of the UN Sustainable Development Solutions Network (UNSDSN); recipient of the Fudan Distinguished Contribution Award for Management Science and the Distinguished Contribution Award from the Chinese Association of Science of Science and S&T Policy.
Stuart Russell (UK/USA)
Distinguished Professor of Computer Science and Director of the Center for Human-Compatible AI, University of California, Berkeley; author of Artificial Intelligence: A Modern Approach, the standard AI textbook used in 1,500 universities across 135 countries and cited over 74,000 times; Co-Chair of the OECD expert group on AI futures and of the World Economic Forum’s Global AI Council.
Jaan Tallinn (Estonia)
Member of the UN AI Advisory Body; served on the European Commission’s High-Level Expert Group on AI; co-founder of the University of Cambridge’s Centre for the Study of Existential Risk and of the Future of Life Institute (both institutions are leaders in AGI issues); Board Member of the Center for AI Safety; Estonian investor in AGI safety; founding engineer of Skype and FastTrack/Kazaa; and a founding investor in DeepMind (later acquired by Google).
Mariana Todorova (Bulgaria)
Bulgarian representative in UNESCO’s Intergovernmental Group on AI Ethical Frameworks; leading spokesperson on AGI in the Bulgarian media; internationally recognized author and lecturer on the ethical and technological dimensions of AI and AGI; former Member of Parliament and advisor to the President of the Republic of Bulgaria.
José Jaime Villalobos (Costa Rica)
Multilateral Governance Lead at the Future of Life Institute; Senior Research Associate at the Centre for International Governance Innovation; Research Affiliate at the Oxford Martin AI Governance Initiative and at the Institute for Law & AI; holds a PhD in international law; and co-author of leading books and articles on international AI governance.
Book
Global Governance of the Transition to Artificial General Intelligence
Source: De Gruyter Brill
While today’s Artificial Narrow Intelligence (ANI) tools serve limited purposes, such as diagnosing an illness or driving a car, Artificial General Intelligence (AGI), if managed well, could usher in great advances in the human condition across medicine, education, longevity, turning around global warming, scientific advancement, and the creation of a more peaceful world. If left unbridled, however, AGI also has the potential to end human civilization. This book discusses the current status of, and provides recommendations for, regulations concerning the creation, licensing, use, implementation, and governance of AGI.
Based on an international assessment of the issues and potential governance approaches for the transition from ANI of today to future forms of AGI by The Millennium Project, a global participatory think tank, the book explores how to manage this global transition. Section 1 shares the views of 55 AGI experts and thought leaders from the US, China, UK, Canada, EU, and Russia, including Elon Musk, Sam Altman and Bill Gates, on 22 critical questions. In Section 2, The Millennium Project futurist team analyzes these views to create a list of potential regulations and global governance systems or models for the safe emergence of AGI, rated and commented on by an international panel of futurists, diplomats, international lawyers, philosophers, scientists and other experts from 47 countries.
This book broadens and deepens the current conversations about future AI, educating the public as well as those who make decisions and advise others about potential artificial intelligence regulations.
- Provides international assessments of specific regulations, guardrails, and global governance models
- Includes contributions from notable experts
- Compiles the latest thinking on national and global AGI governance from 300 AGI experts