The Millennium Project is a global participatory think tank established in 1996 under the American Council for the United Nations University. We became an independent non-profit in 2009, and we now have 72 Nodes (groups of institutions and individuals that connect local and global perspectives) around the world.
Purpose: Improve humanity’s prospects for building a better future.
Mission: Improve thinking about the future and make that thinking available through a variety of media for feedback to accumulate wisdom about the future for better decisions today.
Vision: A global foresight network of Nodes, information, and software, building a global collective intelligence system recognized for its ability to improve prospects for humanity. A think tank on behalf of humanity, not on behalf of a government, or an issue, or an ideology, but on behalf of building a better future for all of us.
STOCKHOLM, Sept 19 (Reuters) – An artificial-intelligence advisory body at the United Nations on Thursday released its final report proposing seven recommendations to address AI-related risks and gaps in governance.
The U.N. last year created a 39-member advisory body to address issues in the international governance of AI. The recommendations will be discussed during a U.N. summit held in September.
The advisory body called for the establishment of a panel to provide impartial and reliable scientific knowledge about AI and address information asymmetries between AI labs and the rest of the world.
As you may have heard, the U.S. House of Representatives last week passed the ‘One, Big, Beautiful Bill’, a budget reconciliation bill, which is now with the Senate. One particularly controversial inclusion is a 10-year moratorium on states passing their own AI legislation.
A strong bipartisan coalition has come out against this provision, referred to as preemption. For example, in a recent letter, a group of 40 state attorneys general from both major parties urged Congress to reject the moratorium, warning it “would directly harm consumers, deprive them of rights currently held in many states, and prevent State AGs from fulfilling their mandate to protect consumers”.
Additionally, a new poll by Common Sense Media finds widespread concerns about the potential negative effects of AI, especially on youth, and that 73% of voters across party lines want both states and the federal government to regulate AI. The proposed federal ban itself is unpopular: 59% of voters oppose it, and 52% say it makes them less likely to support the budget bill entirely.
AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.
Intelligence explosion, singularity, fast takeoff… these are a few of the terms given to the surpassing of human intelligence by machine intelligence, likely to be one of the most consequential – and unpredictable – events in our history.
For many decades, scientists have predicted that artificial intelligence will eventually enter a phase of recursive self-improvement, giving rise to systems beyond human comprehension, and a period of extremely rapid technological growth. The product of an intelligence explosion would be not just Artificial General Intelligence (AGI) – a system about as capable as a human across a wide range of domains – but a superintelligence, a system that far surpasses our cognitive abilities.
Speculation is now growing within the tech industry that an intelligence explosion may be just around the corner. Sam Altman, CEO of OpenAI, kicked off the new year with a blog post entitled Reflections, in which he claimed: “We are now confident we know how to build AGI as we have traditionally understood it… We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word”. A researcher at that same company referred to controlling superintelligence as a “short term research agenda”. Another’s antidote to online hype surrounding recent AI breakthroughs was far from an assurance that the singularity is many years or decades away: “We have not yet achieved superintelligence”.
Introducing our new AI awareness companion; notes from the AI Action Summit and IASEAI; a new short film on AI replacing human labour; and more!
Today, we’re thrilled to launch ‘PERCEY Made Me’: an innovative AI awareness campaign with an interactive web app at its centre. It’s an AI-based chatbot built to engage users and, in just a few minutes, spread awareness of AI’s current ability to persuade and influence people.
Voiced by the legendary Stephen Fry, PERCEY is your personal guide to navigating the rapidly evolving world of artificial intelligence. With AI threatening to reshape our lives at lightning speed, PERCEY offers a unique, approachable way to:
Assess your personal AI risk awareness
Challenge and explore your assumptions about AI and AGI
Gain insights into AI’s potential impact on your future
Whether you’re a tech enthusiast, cautious observer, or simply curious about the AI landscape, PERCEY provides a refreshing, humour-infused approach to help counter the reckless narratives Big Tech companies are pushing.
Special: Defeating AI Defenses (Podcast) – Future of Life Institute, Nicholas Carlini and Nathan Labenz – March 21, 2025
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.
00:00 Nicholas Carlini’s contributions to cybersecurity
08:19 Understanding attack strategies
29:39 High-dimensional spaces and attack intuitions
51:00 Challenges in open-source model safety
01:00:11 Unlearning and fact editing in models
01:10:55 Adversarial examples and human robustness
Artificial intelligence that is smarter than humans and built like “agents” could prove dangerous amid a lack of clarity over controls, two of the world’s most prominent AI scientists told CNBC.
Yoshua Bengio and Max Tegmark warned of the dangers of uncontrollable AI.
For Tegmark, the key lies in so-called “tool AI” — systems that are created for a specific, narrowly-defined purpose, without serving as agents.
Artificial general intelligence built like “agents” could prove dangerous as its creators might lose control of the system, two of the world’s most prominent AI scientists told CNBC.
In the latest episode of CNBC’s “Beyond The Valley” podcast released on Tuesday, Max Tegmark, a professor at the Massachusetts Institute of Technology and the President of the Future of Life Institute, and Yoshua Bengio, dubbed one of the “godfathers of AI” and a professor at the Université de Montréal, spoke about their concerns about artificial general intelligence, or AGI. The term broadly refers to AI systems that are smarter than humans.
Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.
Sadly, I now feel that we’re living the movie “Don’t Look Up” for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.
Introduction
We must not build AI to replace humans. Humanity is on the brink of developing artificial general intelligence that exceeds our own. It’s time to close the gates on AGI and superintelligence… before we lose control of our future.
Go to the website for an interactive summary, video, and essay, or go to this section in this post.
Vint Cerf, an American computer scientist, is widely regarded as one of the founders of the Internet. Since October 2005, he has served as Vice President and Chief Internet Evangelist at Google. Recently, he sat down with Google DeepMind’s Public Policy Director Nicklas Lundblad, for a conversation on AI, its relationship with the Internet, and how both may evolve. The interview took place with Vint in his office in Reston, Virginia, and Nicklas in the mountains of northern Sweden. Behind Vint was an image of the interplanetary Internet system – a fitting backdrop that soon found its way into the discussion.
I. The relationship between the Internet and AI
II. Hallucinations, understanding and world models
III. Density & connectivity in human vs silicon brains
From Martin LaMonica, former technology journalist and science editor for The Conversation, currently Director of Editorial Projects and Newsletters:
Dear reader, We at The Conversation are keen to know what questions you have about AI and what types of stories you want to read.
To tell us, please fill out this very short questionnaire. I’ll share your responses (no names or emails will be attached) with the editors to help guide our coverage going forward.
The Conversation AI is different from most newsletters on artificial intelligence. We will, of course, cover how the technology is evolving and its many applications.
But our editors and expert authors do more – they look broadly at the impact this powerful technology is having on society, whether it’s new ethical and regulatory questions, or changes to the workplace. Also, our academic writers approach this subject from a variety of disciplines and from universities around the world, bringing you a global perspective on this hot issue.
Taking you inside the AI revolution, and delivering scoops and insights on the technologies reshaping our lives.
It is co-authored by Axios chief technology correspondent Ina Fried and global technology correspondent Ryan Heath.
Why it matters: AI’s transformations of industry and society are poised to rival those triggered by the arrival of the internet two decades ago.
The Axios tech team has been leading coverage of AI and the revolutionary potential it brings, providing scoops and valuable insights on the latest updates and innovations from industry leaders.
Vinton G. Cerf has served as vice president and chief Internet evangelist for Google since October 2005. In this role, he contributes to global policy development and continued standardization and spread of the Internet. He is also an active public face for Google in the Internet world.
From 1994 to 2005, Cerf served as the senior vice president of Technology Strategy for MCI. In this role, Cerf was responsible for helping to guide corporate strategy development from the technical perspective. Previously, Cerf served as MCI’s senior vice president of Architecture and Technology, leading a team of architects and engineers to design advanced networking frameworks including Internet-based solutions for delivering a combination of data, information, voice and video services for business and consumer use.
Samuel Harris Altman (born April 22, 1985) is an American entrepreneur and investor best known as the chief executive officer of OpenAI since 2019 (he was briefly dismissed and reinstated in November 2023). He is also the chairman of clean energy companies Oklo Inc. and Helion Energy.
Altman is considered to be one of the leading figures of the AI boom. He dropped out of Stanford University after two years and founded Loopt, a mobile social networking service, raising more than $30 million in venture capital. In 2011, Altman joined Y Combinator, a startup accelerator, and was its president from 2014 to 2019. Altman’s net worth was estimated at $1.1 billion in January 2025.
[This website is a demonstration and is not currently affiliated with the Imagining The Digital Future Center or Elon University.]
Imagining the Digital Future (ITDF) Center is an interdisciplinary research center focused on the human impact of accelerating digital change and the socio-technical challenges that lie ahead.
Imagining the Digital Future’s mission is to discover and broadly share a diverse range of opinions, ideas and original research about the likely evolution of digital change, informing important conversations and policy formation.
On February 29, 2024, ITDF released its first research report, “The Impact of Artificial Intelligence by 2040,” for which, in two separate studies, a large group of global digital life experts and the U.S. general public were asked to share their opinions on the likely future impact of AI. The global experts predicted that as these tools advance, we will have to rethink what it means to be human and must reinvent or replace major institutions in order to achieve the best possible future. The Americans polled were most concerned about the further erosion of personal privacy, their opportunities for employment, how these systems might change their relationships with others, and AI applications’ potential impact on basic human rights.
President Book & Lee Rainie discuss the AI2040 report Imagining the Digital Future Center – 29/02/2024 (12:59)
Elon University President Connie Book and Lee Rainie, director of the Imagining the Digital Future Center, discuss the Center’s first report focused on the impact of artificial intelligence by 2040.
Disclaimer: The content in this post and other posts in the proposed AI2 Nexus custom hub is from the AI2 Nexus website WITHOUT ANY EDITS. George Mason University is a public university in Northern Virginia, USA.
George Mason University is driving rapid AI adoption and advancements across the Commonwealth.
As the largest and most diverse university in Virginia, just outside Washington, D.C., George Mason University is leading the future of inclusive artificial intelligence (AI) and developing responsible models for AI research, education, workforce development, and community engagement within a modern university.
As AI reshapes industries, George Mason combines fearless ideas that harness the technology’s boundless potential to address the world’s grand challenges, while creating guardrails based on informed, transdisciplinary research around ethical governance, regulatory oversight, and social impact.
Led by the university’s inaugural vice president and chief artificial intelligence officer (CAIO) Amarda Shehu with an AI Visioning Task Force, George Mason is reimagining operational excellence in every facet of the university.
Welcome, Amarda Shehu | Chief Artificial Intelligence Officer George Mason University – 04/09/2024 (01:18)
https://www.youtube.com/watch?v=5TLBzTpBGwA
George Mason University has named Amarda Shehu, Associate Vice President for Research for the Institute for Digital Innovation (IDIA), as the university’s inaugural vice president and chief artificial intelligence officer (CAIO). In this role, Shehu will lead the strategy and implementation of AI across research, academics, and partnerships for the university, maximizing opportunity and adoption in addressing the world’s grand challenges while leading on ethical considerations, governance, and risk mitigation.
Global Governance of the Transition to Artificial General Intelligence: Issues and Requirements
Authored and edited by Jerome Clayton Glenn
While today’s Artificial Narrow Intelligence (ANI) tools have limited purposes such as diagnosing illness or driving a car, Artificial General Intelligence (AGI), if managed well, could usher in great advances in the human condition across medicine, education, longevity, reversing global warming, and scientific advancement, and could help create a more peaceful world. However, if left unbridled, AGI also has the potential to end human civilization. This book discusses the current status of, and provides recommendations for, regulations concerning the creation, licensing, use, implementation, and governance of AGI.
Based on an international assessment of the issues and potential governance approaches for the transition from ANI of today to future forms of AGI by The Millennium Project, a global participatory think tank, the book explores how to manage this global transition. Section 1 shares the views of 55 AGI experts and thought leaders from the US, China, UK, Canada, EU, and Russia, including Elon Musk, Sam Altman and Bill Gates, on 22 critical questions. In Section 2, The Millennium Project futurist team analyzes these views to create a list of potential regulations and global governance systems or models for the safe emergence of AGI, rated and commented on by an international panel of futurists, diplomats, international lawyers, philosophers, scientists and other experts from 47 countries.
This book broadens and deepens the current conversations about future AI, educating the public as well as those who make decisions and advise others about potential artificial intelligence regulations.
To participate in forum discussions, give a recommendation, and/or ask the author questions, go to the onAir Post.
In today’s complex and uncertain world, accurate predictions are a fiction. Strategic Foresight helps you prepare for different futures—those that are possible, plausible, and preferred.
Our Applied Foresight Foundation Program empowers you to:
Analyze driving forces and trends of change.
Develop strategies that align with your mission.
Find solutions to shape your organization’s future.
Be a leader in strategic foresight; earn your credentials! Amid rapid technological advancements and global complexities, leadership in strategic foresight will give you and your department an advantage in achieving preferred future outcomes.
Our executive program will immerse you in strategic foresight concepts and methodology, including trend analysis, scenario planning, systems thinking, risk assessment, and futures-thinking for shaping policy.
If you would like to learn more, please submit your information and we will follow up with you shortly.
System for people to think together about the future — 72 Nodes (groups of individuals & institutions) connecting global and local perspectives — Real-Time Delphi for rapid international assessment and feedback
Educational contributions — Over 400 interns from more than 30 countries trained since our founding in 1996 — Approximately 1,000 universities use The Millennium Project materials — Millennium Awards that have involved over a thousand students from around the world
Inclusive and participatory system to measure global progress/regress — State of the Future Index (SOFI) – Global and National Indexes
Largest collection of methods to explore the future — 37 Methods, 39 Chapters, 1,300 pages, internationally peer-reviewed (Futures Research Methodology 3.0)
Previous Futures Research Studies:
African Futures Scenarios 2025, and UNDP workshop at the UN (1994)
Millennium Project Feasibility Study final report (1995)
Global Issues/Strategies four-round Global Lookout (Delphi) study (1996)
Lessons of History (1997)
Global Opportunities and Strategies Delphi (1997)
Definitions of Environmental Security (1997)
Futures Research in Decisionmaking (and checklist) (1998-99)
Jerome C. Glenn is a globally recognized futurist and co-founder of the Millennium Project, an international think tank focused on foresight and global challenges. With decades of experience in futures research, Glenn specializes in exploring emerging technologies, especially artificial general intelligence (AGI), and their societal impacts. His work emphasizes the importance of anticipatory governance and global collaboration to navigate existential risks and harness the transformative potential of AI.
As Executive Director of the Millennium Project, Glenn leads a network of futurists and researchers worldwide dedicated to participatory thinking and addressing complex global issues through scenario planning and foresight.
Future-Proofing Humanity | Deep Interview with Jerome Glenn
(17:42) By: SingularityNET
Join Jerome Glenn, Executive Director of the Millennium Project, as he dives into the intricacies of managing Artificial General Intelligence (AGI) before its full realization. Learn about the project’s global participatory approach, including inputs from 55 world leaders, the creation of multi-stakeholder governance bodies, and continuous auditing systems. Glenn also discusses the importance of international collaboration and drafting regulations to ensure safe and effective AGI governance, emphasizing the need for coordinated efforts across nations and organizations.
00:00 Introduction to Jerome Glenn and The Millennium Project
Jerome C. Glenn, Founder and CEO of The Millennium Project, explains in this short video made by The Millennium Project why we need to study the transition from NARROW to GENERAL Artificial Intelligence now in order to get the initial conditions right.
Artificial General Intelligence and the Future of Ethics
When artificial intelligence exceeds human thinking in all categories of reasoning and understanding, what conclusions will it reach about the future of ethics? Will such systems – AGIs – take greater care of us humans, than the regard we show to rodents? To what extent can design choices made by human developers influence the decisions that AGIs will take? Or is any such discussion premature or misguided, given apparently more pressing problems facing human civilisation as 2023 approaches?
This London Futurists webinar took place on 17th December 2022 and featured the ideas of Daniel Faggella, the founder and CEO of Emerj Artificial Intelligence Research. Daniel has researched and written extensively on topics such as:
*) A forthcoming “moral singularity”
*) Scenarios for the emergence of AGI
*) Why ideas of “friendly AI” are fraught with difficulty
*) Possible trajectories for posthumans in the wake of advanced AI
The event also featured comments and feedback from:
*) Bronwyn Williams, Foresight Lead, Flux Trends
*) Rohit Talwar, CEO of Fast Future
It was introduced and moderated by David Wood, Chair of London Futurists.
AGI Scenarios
The Millennium Project invited all those studying or working on the future issues of global governance of Artificial General Intelligence (AGI) to share their judgements on the elements necessary for safe and productive global governance of AGI in our new online Real-Time Delphi.
Phase 1 of the AGI study collected the views of 55 AGI leaders from the US, China, UK, the European Union, Canada, and Russia on the 22 questions below (the list of leaders follows the questions). Phase 1 research was financially supported by the Dubai Future Foundation and the Future of Life Institute:
Phase 2 is a Real-Time Delphi study that assessed 40 potential regulations for developers, governments, a UN multi-stakeholder hybrid (human-AI) organization, and users, for trusted global and national governance of AGI. The Real-Time Delphi is now closed and the report is being prepared.
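To make the Real-Time Delphi mechanics concrete, below is a minimal illustrative sketch in Python (not The Millennium Project's actual platform): participants rate an item, immediately see the group statistics, and may revise their rating. The item text, participant names, and the 1-10 scale are assumptions for illustration only.

```python
# Minimal, illustrative Real-Time Delphi round (hypothetical, for illustration):
# participants rate items, instantly see group statistics, and can revise.
from statistics import mean, median

class RealTimeDelphi:
    def __init__(self, items):
        # ratings[item][participant] -> latest rating; a revision overwrites the old one
        self.ratings = {item: {} for item in items}

    def rate(self, participant, item, score):
        if not 1 <= score <= 10:
            raise ValueError("score must be on the assumed 1-10 scale")
        self.ratings[item][participant] = score
        return self.feedback(item)  # the participant sees the group view immediately

    def feedback(self, item):
        scores = list(self.ratings[item].values())
        if not scores:
            return {"n": 0}
        return {"n": len(scores), "mean": round(mean(scores), 2), "median": median(scores)}

# Example: two hypothetical experts rate a hypothetical regulation; the first revises.
rtd = RealTimeDelphi(["National AGI licensing system"])
print(rtd.rate("expert_A", "National AGI licensing system", 8))
print(rtd.rate("expert_B", "National AGI licensing system", 5))
print(rtd.rate("expert_A", "National AGI licensing system", 7))  # revised rating
```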
Jerome C. Glenn is the CEO of The Millennium Project, Chairman of the AGI Panel of the UN Council of Presidents of the General Assembly, and author of the forthcoming book Global Governance of the Transition to Artificial General Intelligence (2025).
The international conversation on AI is often terribly confusing, since different kinds of AI become fused under the one overarching term. There are three kinds of AI: narrow, general, and super AI, with some grey areas in between. It is very important to clarify these distinctions because each type has very different impacts and vastly different national and international regulatory requirements.
Without national and international regulation, it is inevitable that humanity will lose control of what will become a non-biological intelligence beyond our understanding, awareness, and control. Half of AI researchers surveyed by the Center for Humane Technology believe there is a 10 percent or greater chance that humans will go extinct from their inability to control AI. But, if managed well, artificial general intelligence could usher in great advances in the human condition—from medicine, education, longevity, and turning around global warming to advances in scientific understanding of reality and creating a more peaceful world. So, what should policymakers know and do now to achieve the extraordinary benefits while avoiding catastrophic, if not existential, risks? First of all, it is important to understand the different kinds of AI.
A creative illustration of AI’s evolution, a process that is certain to escape human control | Source: ChatGPT
Artificial narrow intelligence (ANI) ranges from tools with limited purposes, such as diagnosing cancer or driving a car, to the rapidly advancing generative AI that answers many questions, generates software code, pictures, movies, and music, and summarizes reports. In the grey area between narrow and general are the AI agents and general-purpose AI that became popular in 2025. For example, AI agents can break down a question into a series of logical steps. Then, after reviewing the user’s prior behavior, the AI agent can adjust the answer to the user’s style. If the answers or actions do not completely match the requirements, the AI agent can ask the user for more information and feedback as necessary. After the task is completed, the interactions can be added to the AI’s knowledge base to better serve the user in the future.
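As a rough illustration of the agent workflow just described, the following Python sketch breaks a task into steps, executes them while consulting a memory of prior interactions, asks the user for feedback when a step falls short, and updates its knowledge base afterwards. The function names, the stubbed planner, and the memory structure are hypothetical, not any particular product's API.

```python
# Minimal, illustrative agent loop (hypothetical names; the planner and the
# step executor are stubs standing in for calls to a real model).
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Stores prior interactions so later answers can match the user's style."""
    history: list = field(default_factory=list)

    def update(self, task: str, outcome: str) -> None:
        self.history.append((task, outcome))

def plan(task: str) -> list[str]:
    # A real agent would use a model to decompose the task; here we fake two steps.
    return [f"step 1 of '{task}'", f"step 2 of '{task}'"]

def act(step: str, memory: AgentMemory) -> str:
    # Consult prior behaviour (memory.history) to tailor the result to the user.
    return f"result of {step} (tailored to {len(memory.history)} past interactions)"

def run_agent(task: str, memory: AgentMemory, ask_user=input) -> list[str]:
    results = []
    for step in plan(task):                      # 1. break the task into logical steps
        outcome = act(step, memory)              # 2. execute each step
        if "error" in outcome:                   # 3. ask the user for feedback if needed
            outcome = ask_user(f"{step} fell short; any guidance? ")
        results.append(outcome)
    memory.update(task, "; ".join(results))      # 4. update the knowledge base afterwards
    return results

# Example run with a hypothetical task.
memory = AgentMemory()
print(run_agent("summarize the quarterly report", memory))
```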
Artificial general intelligence (AGI) does not exist at the time of this writing. Many AGI experts believe it could be achieved or emerge as an autonomous system within five years. It would be able to learn, edit its code to become recursively more intelligent, conduct abstract reasoning, and act autonomously to address many novel problems with novel solutions similar to or beyond human abilities. For example, given an objective, it can query data sources, call humans on the phone, and rewrite its own code to create capabilities to achieve the objective as necessary. Although some expect it will be a non-biological sentient, self-conscious being, it will at least act as if it were, and humans will treat it as such.
Artificial super intelligence (ASI) will be far more intelligent than AGI and likely to be more intelligent than all of humanity combined. It would set its own goals and act independently from human control and in ways that are beyond human understanding and awareness. This is what Bill Gates, Elon Musk, and the late Stephen Hawking have warned us about and what some science fiction has illustrated for years. Humanity has never faced a greater intelligence than its own.
In the past, technological risks were primarily caused by human misuse. AGI is fundamentally different. Although it poses risks stemming from human misuse, it also poses potential threats caused by AGI. As a result, in addition to the control of human misuse of AI, regulations also have to be created for the independent action of AGI. Without regulations for the transition to AGI, we are at the mercy of future non-biological intelligent species.
Today, there is a competitive rush to develop AGI without adequate safety measures. As Russian President Vladimir Putin famously warned about AI development, “the one who becomes the leader in this sphere will be the ruler of the world.”
So far, there is nothing standing in the way to stop an increasing concentration of power, the likes of which the world has never known.
Nations and corporations are prioritizing speed over security, undermining potential national governing frameworks, and making safety protocols secondary to economic or military advantage. There is also the view that Company A might feel a moral responsibility to reach AGI first to pre-empt Company B, because Company A believes it is more responsible than Company B. If Companies B, C, and D hold the same belief, then each company sees a moral responsibility to accelerate its race to achieve AGI first. As a result, all might cut corners along the way to be the first to achieve this goal, leading to dangerous situations. The same applies to national military development of AGI.
Since many forms of AGI from governments and corporations are expected to emerge before the end of this decade—and since establishing national and international governance systems will take years—it is urgent to initiate the necessary procedures to prevent the following outcomes of unregulated AGI, documented for the UN Council of Presidents of the General Assembly:
Irreversible Consequences. Once AGI is achieved, its impact may be irreversible. With many frontier forms of AI already showing deceptive and self-preserving behavior, and the push toward more autonomous, interacting, self-improving AIs integrated with infrastructures, the impacts and trajectory of AGI can plausibly end up being uncontrollable. If that happens, there may be no way to return to a state of reliable human oversight. Proactive governance is essential to ensure that AGI will not cross red lines, leading to uncontrollable systems with no clear way to return to human control.
Weapons of mass destruction. AGI could enable some states and malicious non-state actors to build chemical, biological, radiological, and nuclear weapons. Moreover, large AGI-controlled swarms of lethal autonomous weapons could themselves constitute a new category of WMDs.
Critical infrastructure vulnerabilities. Critical national systems (e.g., energy grids, financial systems, transportation networks, communication infrastructure, and healthcare systems) could be subject to powerful cyberattacks launched by or with the aid of AGI. Without national deterrence and international coordination, malicious non-state actors—from terrorists to transnational organized crime—could conduct attacks at a large scale.
Power concentration, global inequality, and instability. Uncontrolled AGI development and usage could exacerbate wealth and power disparities on an unprecedented scale. If AGI remains in the hands of few nations, corporations, or elite groups, it could entrench economic dominance and create global monopolies over intelligence, innovation, and industrial production. This could lead to massive unemployment, widespread disempowerment affecting legal underpinnings, loss of privacy, and collapse of trust in institutions, scientific knowledge, and governance. It could undermine democratic institutions through persuasion, manipulation, and AI-generated propaganda, and heighten geopolitical instability in ways that increase systemic vulnerabilities. A lack of coordination could result in conflicts over AGI resources, capabilities, or control, potentially escalating into warfare. AGI will stress existing legal frameworks: many new and complex issues of intellectual property, liability, human rights, and sovereignty could overwhelm domestic and international legal systems.
Existential risks. AGI could be misused to create mass harm or developed in ways that are misaligned with human values. Furthermore, it could even act autonomously beyond human oversight, evolving its own objectives according to self-preservation goals already observed in current frontier AIs. AGI might also seek power as a means to ensure it can execute whatever objectives it determines, regardless of human intervention. National governments, leading experts, and the companies developing AGI have all stated that these trends could lead to scenarios in which AGI systems seek to overpower humans. These are not far-fetched science fiction hypotheticals about the distant future—many leading experts fear that these risks could all materialize within this decade, and their precursors are already occurring. Moreover, leading AI developers have thus far had no viable proposals for preventing these risks.
Loss of extraordinary future benefits for all of humanity. Properly managed AGI promises improvements in all fields, for all peoples—from personalized medicine, curing cancer, and cell regeneration, to individualized learning systems, ending poverty, addressing climate change, and accelerating scientific discoveries with unimaginable benefits. Ensuring such a magnificent future for all requires global governance, which begins with improved global awareness of both the risks and benefits. The United Nations is critical to this mission.
Although we may not be able to directly control how ASI emerges and acts, we can create national and international regulations for how AGI is created, licensed, used, and governed before it accelerates its learning and emerges into ASI beyond our control. We can explore how to manage the transition from ANI to AGI. How well we manage that transition is likely to also shape the transition from AGI to ASI.
We can think of ANI as our young children, whom we control—what they wear, when they sleep, and what they eat. We can think of AGI as our teenagers, over whom we have some control, which does not include what they wear or eat or when they sleep.
And we can think of ASI as an adult, over whom we no longer have any control. Parents know that if they want to shape their children into good, moral adults, then they have to focus on the transition from childhood to adolescence. Similarly, if we want to shape ASI, then we have to focus on the transition from ANI to AGI. And that time is now.
The greatest research and development investments in history are now focused on creating AGI.
Without national and international regulations for AGI, many AGIs from many governments and corporations could continually re-write their own codes, interact with each other, and give birth to many new forms of artificial superintelligences beyond our control, understanding, and awareness.
Governing AGI is the most complex, difficult management problem humanity has ever faced. To help understand how to accomplish safer development of AGI, The Millennium Project, a global participatory think tank, conducted an international assessment of the issues and potential governance approaches for the transition from today’s ANI to future forms of AGI. The study began by posing a list of 22 AGI-critical questions to 55 AGI experts and thought leaders from the United States, China, United Kingdom, Canada, EU, and Russia. Drawing on their answers, a list of potential regulations and global governance models for the safe emergence and governance of AGI was created. These, in turn, were rated by an international panel of 299 futurists, diplomats, international lawyers, philosophers, scientists, and other experts from 47 countries. The results are available in State of the Future 20.0 from www.millennium-project.org.
In addition to the need for governments to create national licensing systems for AGI, the United Nations has to provide international coordination, critical for the safe development and use of AGI for the benefit of all humanity. The UN General Assembly has adopted two resolutions on AI: 1) the U.S.-initiated resolution “Seizing the opportunities of safe, secure, and trustworthy artificial intelligence systems for sustainable development” (A/78/L.49); and 2) the China-initiated resolution “Enhancing international cooperation on capacity-building of artificial intelligence” (A/78/L.86). These are both good beginnings but do not address managing AGI. The UN Pact for the Future, the Global Digital Compact, and UNESCO’s Recommendation on the Ethics of AI call for international cooperation to develop beneficial AI for all humanity, while proactively managing global risks. These initiatives have brought world attention to current forms of AI, but not AGI. To increase world political leaders’ awareness of the coming issues of AGI, a UN General Assembly special session specifically on AGI should be conducted as soon as possible. This will help raise awareness and educate world leaders on the risks and benefits of AGI and why national and global actions are urgently needed.
The following items should be considered during a UN General Assembly session specifically on AGI:
A global AGI observatory is needed to track progress in AGI-relevant research and development and provide early warnings on AI security to UN member states. This observatory should leverage the expertise of other UN efforts, such as the Independent International Scientific Panel on AI, created by the Global Digital Compact and the UNESCO Readiness Assessment Methodology.
An international system of best practices and certification for secure and trustworthy AGI is needed to identify the most effective strategies and provide certification for AGI security, development, and usage. Verification of AGI alignment with human values, controlled and non-deceptive behavior, and secure development is essential for international trust.
A UN Framework Convention on AGI is needed to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution. It should define clear risk tiers requiring proportionate international action, from standard-setting and licensing regimes to joint research facilities for higher-risk AGI, and red lines or tripwires on AGI development. A UN Convention would provide the adaptable institutional foundation essential for globally legitimate, inclusive, and effective AGI governance, minimizing global risks and maximizing global prosperity from AGI.
Another necessary step would be to conduct a feasibility study on a UN AGI agency. Given the breadth of measures required to prepare for AGI and the urgency of the issue, steps are needed to investigate the feasibility of a UN agency on AGI, ideally in an expedited process. Something like the International Atomic Energy Agency (IAEA) has been suggested, understanding that AGI governance is far more complex than nuclear energy; and hence, such an agency will require unique considerations in such a feasibility study. Uranium cannot re-write its own atomic code, it is not smarter than humans, and we understand how nuclear reactions occur. Hence, management of atomic energy is much simpler than managing AGI.
Some have argued that the UN and national AI governance is premature and that it would stop innovations necessary to bring great benefits to humanity. They argue that it would be premature to call for establishing new UN governance mechanisms without a clearer understanding and consensus on where there may be gaps in the ability of existing UN agencies to address AI; hence, any proposals for new processes, panels, funds, partnerships, and/or mechanisms are premature. This is short-sighted.
National AGI licensing systems and a UN multi-stakeholder AGI agency might take years to create and implement. In the meantime, there is nothing stopping innovations and the great AGI race. If we approach establishing national and international governance of AGI in a business-as-usual fashion, then it is possible that many future forms of AGI and ASI will be permeating the Internet, making future attempts at regulations irrelevant.
The coming dangers of global warming have been known for decades, yet there is still no international system to turn around this looming disaster. It takes years to design, accept, and implement international agreements. Since global governance of AGI is so complex and difficult to achieve, the sooner we start working on it, the better.
Eric Schmidt, former CEO of Google, has said that the “San Francisco Consensus” is that AGI will be achieved in three to five years. Elon Musk, who normally opposes government regulation, has said future AI is different and has to be regulated. He points out that we don’t let people go to a grocery store and buy a nuclear weapon. For over ten years, Musk has advocated for national and international regulations of future forms of AI. If national licensing systems and a UN AGI agency have to be in place before AGI is released on the Internet, then political leadership will have to act with expediency never before witnessed. This cannot be a business-as-usual effort. Geoffrey Hinton, one of the fathers of AI, has said that such regulation may be impossible, but we have to try. During the Cold War, it was widely believed that nuclear World War III was inevitable and impossible to prevent. The shared fear of an out-of-control nuclear arms race led to agreements to manage it. Similarly, the shared fear of an out-of-control AGI race should lead to agreements capable of managing that race.
State of the Future 2025
The State of the Future 20.0 is a 500-page whopper offering a unique and extensive overview of future issues and opportunities, compiled by The Millennium Project.
It provides a broad, detailed, and readable look at the issues and opportunities shaping the future of humanity, and what we should know today to avoid the worst and achieve the best for civilization. The Millennium Project, a global participatory think tank, distilled countless research reports, insights from hundreds of futurists and related experts around the world, and 70 of its own futures research reports to produce it.
The Executive Summary offers an overview of the entire book, representing a short report card on the future of humanity as a whole.