Millennium Project

The Millennium Project is a global participatory think tank established in 1996 under the American Council for the United Nations University. It became an independent non-profit in 2009 and now has 72 Nodes (groups of institutions and individuals that connect local and global perspectives) around the world.

Purpose: Improve humanity’s prospects for building a better future.

Mission: Improve thinking about the future and make that thinking available through a variety of media for feedback to accumulate wisdom about the future for better decisions today.

Vision: A global foresight network of Nodes, information, and software, building a global collective intelligence system recognized for its ability to improve prospects for humanity. A think tank on behalf of humanity, not on behalf of a government, or an issue, or an ideology, but on behalf of building a better future for all of us.

OnAir Post: Millennium Project

AI Policy Organizations

Many types of organizational stakeholders and their leaders focus on AI policy, as do individual podcasters, researchers, and authors.

OnAir Post: AI Policy Organizations

  • United Nations & AI Governance

  • UN AI Advisory Board

    UN advisory body makes seven recommendations for governing AI
    Reuters, Supantha Mukherjee, September 19, 2024
    STOCKHOLM, Sept 19 (Reuters) – An artificial-intelligence advisory body at the United Nations on Thursday released its final report proposing seven recommendations to address AI-related risks and gaps in governance.
    The U.N. last year created a 39-member advisory body to address issues in the international governance of AI. The recommendations will be discussed during a U.N. summit in September.


    The advisory body called for the establishment of a panel to provide impartial and reliable scientific knowledge about AI and address information asymmetries between AI labs and the rest of the world.
  • Vint Cerf

    Vinton G. Cerf has served as vice president and chief Internet evangelist for Google since October 2005. In this role, he contributes to global policy development and continued standardization and spread of the Internet. He is also an active public face for Google in the Internet world.

    From 1994 to 2005, Cerf served as the senior vice president of Technology Strategy for MCI. In this role, Cerf was responsible for helping to guide corporate strategy development from the technical perspective. Previously, Cerf served as MCI’s senior vice president of Architecture and Technology, leading a team of architects and engineers to design advanced networking frameworks including Internet-based solutions for delivering a combination of data, information, voice and video services for business and consumer use.

    Source: Internet Hall of Fame

    OnAir Post: Vint Cerf

  • Future of Life Institute

    A 10-Year Ban on State AI Laws?!

    As you may have heard, the U.S. House of Representatives last week passed the ‘One, Big, Beautiful Bill’, a budget reconciliation bill, which is now with the Senate. One particularly controversial inclusion is a 10-year moratorium on states passing their own AI legislation.

    A strong bipartisan coalition has come out against this provision, referred to as preemption. For example, in a recent letter, a group of 40 state attorneys general from both major parties urged Congress to reject the moratorium, warning it “would directly harm consumers, deprive them of rights currently held in many states, and prevent State AGs from fulfilling their mandate to protect consumers”.

    Additionally, a new poll by Common Sense Media finds widespread concerns about the potential negative effects of AI, especially on youth, and that 73% of voters across party lines want both states and the federal government to regulate AI. The proposed federal ban itself is unpopular: 59% of voters oppose it, and 52% say it makes them less likely to support the budget bill entirely.

    We’ll keep you posted on what happens next!

    Are we close to an intelligence explosion?
    Future of Life Institute, Sarah Hastings-Woodhouse, March 21, 2025

    AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.

    Intelligence explosion, singularity, fast takeoff… these are a few of the terms given to the surpassing of human intelligence by machine intelligence, likely to be one of the most consequential – and unpredictable – events in our history.

    For many decades, scientists have predicted that artificial intelligence will eventually enter a phase of recursive self-improvement, giving rise to systems beyond human comprehension, and a period of extremely rapid technological growth. The product of an intelligence explosion would be not just Artificial General Intelligence (AGI) – a system about as capable as a human across a wide range of domains – but a superintelligence, a system that far surpasses our cognitive abilities.

    Speculation is now growing within the tech industry that an intelligence explosion may be just around the corner. Sam Altman, CEO of OpenAI, kicked off the new year with a blog post entitled Reflections, in which he claimed: “We are now confident we know how to build AGI as we have traditionally understood it… We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word”. A researcher at that same company referred to controlling superintelligence as a “short term research agenda”. Another’s antidote to online hype surrounding recent AI breakthroughs was far from an assurance that the singularity is many years or decades away: “We have not yet achieved superintelligence”.

    Future of Life Institute Newsletter: Meet PERCEY
    Future of Life Institute Media, Maggie Munro, March 5, 2025

    Introducing our new AI awareness companion; notes from the AI Action Summit and IASEAI; a new short film on AI replacing human labour; and more!

    Today, we’re thrilled to launch ‘PERCEY Made Me’: an innovative AI awareness campaign with an interactive web app at its centre. It’s an AI-based chatbot built to engage people and, in just a few minutes, spread awareness of AI’s current abilities to persuade and influence.

    Voiced by the legendary Stephen Fry, PERCEY is your personal guide to navigating the rapidly evolving world of artificial intelligence. With AI threatening to reshape our lives at lightning speed, PERCEY offers a unique, approachable way to:

    • Assess your personal AI risk awareness
    • Challenge and explore your assumptions about AI and AGI
    • Gain insights into AI’s potential impact on your future

    Whether you’re a tech enthusiast, cautious observer, or simply curious about the AI landscape, PERCEY provides a refreshing, humour-infused approach to help counter the reckless narratives Big Tech companies are pushing.

    Chat with PERCEY now, and please share widely! You can find PERCEY on X, BlueSky, and Instagram at @PERCEYMadeMe.

     

    Special: Defeating AI Defenses: Podcast
    Future of Life Institute, Nicholas Carlini and Nathan Labenz, March 21, 2025

    In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.
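
    For readers new to the topic, here is a minimal sketch of one classic adversarial-example technique, the fast gradient sign method (FGSM), in the spirit of the attacks on image classifiers discussed in the episode. It is illustrative only, not code from the podcast or from Carlini's published work; the model, label, and epsilon value are placeholder assumptions.

        # Minimal FGSM sketch (assumes a PyTorch classifier that returns logits
        # for a batched image tensor; model, label, and epsilon are placeholders).
        import torch
        import torch.nn.functional as F

        def fgsm_attack(model, image, label, epsilon=0.03):
            # Track gradients with respect to the input pixels.
            image = image.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(image), label)
            loss.backward()
            # Nudge each pixel in the direction that increases the loss,
            # bounding the perturbation to epsilon in the L-infinity norm.
            adversarial = image + epsilon * image.grad.sign()
            # Keep pixel values in the valid [0, 1] range.
            return adversarial.clamp(0, 1).detach()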

    00:00 Nicholas Carlini’s contributions to cybersecurity

    08:19 Understanding attack strategies

    29:39 High-dimensional spaces and attack intuitions

    51:00 Challenges in open-source model safety

    01:00:11 Unlearning and fact editing in models

    01:10:55 Adversarial examples and human robustness

    01:37:03 Cryptography and AI robustness

    01:55:51 Scaling AI security research

  • Max Tegmark

    Key Points
    • Artificial intelligence that is smarter than humans and built as “agents” could prove dangerous amid a lack of clarity over controls, two of the world’s most prominent AI scientists told CNBC.
    • Yoshua Bengio and Max Tegmark warned of the dangers of uncontrollable AI.
    • For Tegmark, the key lies in so-called “tool AI” — systems that are created for a specific, narrowly-defined purpose, without serving as agents.

    Artificial general intelligence built like “agents” could prove dangerous as its creators might lose control of the system, two of the world’s most prominent AI scientists told CNBC.

    In the latest episode of CNBC’s “Beyond The Valley” podcast released on Tuesday, Max Tegmark, a professor at the Massachusetts Institute of Technology and the President of the Future of Life Institute, and Yoshua Bengio, dubbed one of the “godfathers of AI” and a professor at the Université de Montréal, spoke about their concerns about artificial general intelligence, or AGI. The term broadly refers to AI systems that are smarter than humans.

    The ‘Don’t Look Up’ Thinking That Could Doom Us With AI
    Time Magazine, Max Tegmark, April 25, 2023

    Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.

    Sadly, I now feel that we’re living the movie “Don’t Look Up” for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.

  • Anthony Aguirre

    New Website on "Keep The Future Human"
    Future of Life Institute Media, Anthony Aguirre

    Introduction
    We must not build AI to replace humans.
    Humanity is on the brink of developing artificial general intelligence that exceeds our own. It’s time to close the gates on AGI and superintelligence… before we lose control of our future.

    Go to the website for the interactive summary, video, and essay, or go to this section in this post.

  • Machine Intelligence Research Institute (MIRI)

  • Eliezer Yudkowsky

  • Institute for AI Policy and Strategy (IAPS)

  • Future of Life Institute Media

  • Maggie Munro

  • Google DeepMind policy team

    The Internet & AI: An interview with Vint Cerf
    AI Policy Perspectives, March 27, 2025

    Vint Cerf, an American computer scientist, is widely regarded as one of the founders of the Internet. Since October 2005, he has served as Vice President and Chief Internet Evangelist at Google. Recently, he sat down with Google DeepMind’s Public Policy Director Nicklas Lundblad, for a conversation on AI, its relationship with the Internet, and how both may evolve. The interview took place with Vint in his office in Reston, Virginia, and Nicklas in the mountains of northern Sweden. Behind Vint was an image of the interplanetary Internet system – a fitting backdrop that soon found its way into the discussion.

    I. The relationship between the Internet and AI

    II. Hallucinations, understanding and world models

    III. Density & connectivity in human vs silicon brains

    IV. On quantum & consciousness

    V. Adapting Internet protocols for AI agents

    VI. Final reflections

  • The Conversation AI

    From Martin LaMonica, former technology journalist and science editor for The Conversation, currently Director of Editorial Projects and Newsletters:

    Dear reader,
    We at The Conversation are keen to know what questions you have about AI and what types of stories you want to read.

    To tell us, please fill out this very short questionnaire. I’ll share your responses (no names or emails will be attached) with the editors to help guide our coverage going forward.

    The Conversation AI is different from most newsletters on artificial intelligence. We will, of course, cover how the technology is evolving and its many applications.

    But our editors and expert authors do more – they look broadly at the impact this powerful technology is having on society, whether it’s new ethical and regulatory questions, or changes to the workplace. Also, our academic writers approach this subject from a variety of disciplines and from universities around the world, bringing you a global perspective on this hot issue.

    OnAir Post: The Conversation AI

  • Tech Policy Press

  • Axios AI+

    Taking you inside the AI revolution, and delivering scoops and insights on the technologies reshaping our lives.

    It is co-authored by Axios chief technology correspondent Ina Fried and global technology correspondent Ryan Heath.

    Why it matters: AI’s transformations of industry and society are poised to rival those triggered by the arrival of the internet two decades ago.

    • The Axios tech team has been leading coverage of AI and the revolutionary potential it brings, providing scoops and valuable insights on the latest updates and innovations from industry leaders.

    OnAir Post: Axios AI+


  • OpenAI

  • Sam Altman

    Samuel Harris Altman (born April 22, 1985) is an American entrepreneur and investor best known as the chief executive officer of OpenAI since 2019 (he was briefly dismissed and reinstated in November 2023). He is also the chairman of clean energy companies Oklo Inc. and Helion Energy.

    Altman is considered to be one of the leading figures of the AI boom. He dropped out of Stanford University after two years and founded Loopt, a mobile social networking service, raising more than $30 million in venture capital. In 2011, Altman joined Y Combinator, a startup accelerator, and was its president from 2014 to 2019. Altman’s net worth was estimated at $1.1 billion in January 2025.

    OnAir Post: Sam Altman

  • Trustworthy AI in Law & Society (TRAILS)

  • Imagining the Digital Future Center

    [This website is a demonstration and is not currently affiliated with the Imagining The Digital Future Center nor Elon University.]

    Imagining the Digital Future (ITDF) Center is an interdisciplinary research center focused on the human impact of accelerating digital change and the socio-technical challenges that lie ahead.

    Imagining the Digital Future’s mission is to discover and broadly share a diverse range of opinions, ideas and original research about the likely evolution of digital change, informing important conversations and policy formation.

    On February 29, 2024, ITDF released its first research report, “The Impact of Artificial Intelligence by 2040,” for which, in two separate studies, a large group of global digital life experts and the U.S. general public were asked to share their opinions on the likely future impact of AI. The global experts predicted that as these tools advance, we will have to rethink what it means to be human and must reinvent or replace major institutions in order to achieve the best possible future. The Americans polled were most concerned about the further erosion of personal privacy, their opportunities for employment, how these systems might change their relationships with others, and AI applications’ potential impact on basic human rights.

    OnAir Post: Imagining the Digital Future Center

  • AI2 Nexus

    George Mason is building AI2Nexus, a nexus of collaboration and resources on campus, throughout the region via its vast partnerships, and across the state.

    As a model for universities, AI2Nexus is based on four key principles: Integrating AI to transform education, research, and operations; Inspiring with AI to advance higher education and learning for the future workforce; Innovating with AI to lead in responsible AI-enabled discovery and advancements across disciplines; and Impacting with AI to drive partnerships and community engagement for societal adoption and change.

    George Mason University is driving rapid AI adoption and advancements across the Commonwealth.

    As the largest and most diverse university in Virginia, just outside Washington, D.C., George Mason University is leading the future of inclusive artificial intelligence (AI) and developing responsible models for AI research, education, workforce development, and community engagement within a modern university.

    As AI reshapes industries, George Mason combines fearless ideas that harness the technology’s boundless potential to address the world’s grand challenges, while creating guardrails based on informed, transdisciplinary research around ethical governance, regulatory oversight, and social impact.

    Led by the university’s inaugural vice president and chief artificial intelligence officer (CAIO) Amarda Shehu with an AI Visioning Task Force, George Mason is reimagining operational excellence in every facet of the university.

    Source: AI Webpage

    OnAir Post: AI2 Nexus

  • Georgetown University & AI Policy

  • MIT & AI Policy

  • Caltech Center for Science, Society, and Policy
