News

Feature Posts: UN Global Goals
Focus on the UN 17 Sustainable Development Goals & the potential impact of ANI, AGI & ASI

The AI Policy hub uses the 17 Sustainable Development Goals (SDGs), adopted by the UN in 2015, as a starting point to explore how AGI could advance and accelerate their accomplishment, as well as the potential risks it poses.

The aim of these global goals is “peace and prosperity for people and the planet” – while tackling climate change and working to preserve oceans and forests. The SDGs highlight the connections between the environmental, social and economic aspects of sustainable development. Sustainability is at the center of the SDGs, as the term sustainable development implies.

  • Throughout the week, we will be adding articles, images, livestreams, and videos about the latest US issues, politics, and government to this post (select the News tab).
  • You can also participate in discussions in all AGI Policy onAir posts as well as share your top news items and posts (for onAir members – it’s free to join).

OnAir Post: Goals Overview

Why AGI Should be the World’s Top Priority
CIRSD, Jerome C. Glenn, June 1, 2025

The international conversation on AI is often terribly confusing, since different kinds of AI become fused under the one overarching term. There are three kinds of AI: narrow, general, and super AI, with some grey areas in between. It is very important to clarify these distinctions because each type has very different impacts and vastly different national and international regulatory requirements.

Without national and international regulation, it is inevitable that humanity will lose control of what will become a non-biological intelligence beyond our understanding, awareness, and control. Half of AI researchers surveyed by the Center for Humane Technology believe there is a 10 percent or greater chance that humans will go extinct from their inability to control AI. But, if managed well, artificial general intelligence could usher in great advances in the human condition—from medicine, education, longevity, and turning around global warming to advances in scientific understanding of reality and creating a more peaceful world. So, what should policymakers know and do now to achieve the extraordinary benefits while avoiding catastrophic, if not existential, risks?

High-Level Report on AGI Governance Shared with UN Community
Millennium Project, Mara DiBerardo, May 28, 2025

The High-Level Expert Panel on Artificial General Intelligence (AGI), convened by the UN Council of Presidents of the General Assembly (UNCPGA), has released its final report, “Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly,” outlining recommendations for global governance of AGI.

The panel, chaired by Jerome Glenn, CEO of The Millennium Project, includes leading international experts, such as Renan Araujo (Brazil), Yoshua Bengio (Canada), Joon Ho Kwak (Republic of Korea), Lan Xue (China), Stuart Russell (UK and USA), Jaan Tallinn (Estonia), Mariana Todorova (Bulgaria Node Chair), and José Jaime Villalobos (Costa Rica), and offers a framework for UN action on this emerging field.

The report has been formally submitted to the President of the General Assembly, and discussions are underway regarding its implementation. While official UN briefings are expected in the coming months, the report is being shared now to encourage early engagement.

AI CEO explains the terrifying new behavior AIs are showing
CNN, Laura Coates and Judd Rosenblatt, June 4, 2025 (11:00)

CNN’s Laura Coates speaks with Judd Rosenblatt, CEO of Agency Enterprise Studio, about troubling incidents where AI models threatened engineers during testing, raising concerns that some systems may already be acting to protect their existence.

Meta’s big AI deal could invite antitrust scrutiny
Axios AI+, Dan Primack, June 11, 2025

Meta reportedly is planning to invest around $14.8 billion for a 49% stake in Scale AI, with the startup’s CEO to join a new AI lab that Mark Zuckerberg is personally staffing.

  • When the news broke yesterday, albeit still unconfirmed by either side, lots of commenters suggested that the unusual structure was to help Meta sidestep antitrust scrutiny.
  • Not so fast.

What to know: U.S. antitrust regulators at the FTC and DOJ do have the authority to investigate non-control deals, even if that authority has rarely been used.

  • That’s true under both Sections 7 and 8 of the Clayton Act, which focus on M&A and interlocking directorates, respectively.

Getty Images and Stability AI face off in British copyright trial that will test AI industry
Associated Press, Kelvin Chan and Matt O’Brien, June 9, 2025

LONDON (AP) — Getty Images is facing off against artificial intelligence company Stability AI in a London courtroom for the first major copyright trial of the generative AI industry.

Opening arguments before a judge at the British High Court began on Monday. The trial could last for three weeks.

Stability, based in London, owns a widely used AI image-making tool that sparked enthusiasm for the instant creation of AI artwork and photorealistic images upon its release in August 2022. OpenAI introduced its surprise hit chatbot ChatGPT three months later.

Seattle-based Getty has argued that the development of the AI image maker, called Stable Diffusion, involved “brazen infringement” of Getty’s photography collection “on a staggering scale.”

Tech companies have long argued that “fair use” or “fair dealing” legal doctrines in the United States and United Kingdom allow them to train their AI systems on large troves of writings or images. Getty was among the first to challenge those practices when it filed copyright infringement lawsuits in the United States and the United Kingdom in early 2023.

The State AI Laws Likeliest To Be Blocked by a Moratorium
TechPolicy.Press, Cristiano Lima-Strong, June 6, 2025

In the coming weeks, the United States Senate is expected to ramp up consideration of a sprawling budget bill passed by the House that, if adopted, could block states from enforcing artificial intelligence regulations for 10 years.

Hundreds of state lawmakers and advocacy groups have opposed the provision, which House Republicans approved last month as an attempt to do away with what they call a cumbersome patchwork of AI rules sprouting up nationwide that could bog down innovation. On Thursday, Senate lawmakers released a version of the bill that would keep the moratorium in place while linking the restrictions to federal broadband subsidies.

Critics have argued that the federal moratorium — despite carving out some state laws — would preempt a wide array of existing regulations, including rules around AI in healthcare, algorithmic discrimination, harmful deepfakes, and online child abuse. Still, legal experts have warned that there is significant uncertainty around which specific laws would be preempted by the bill.

To that end, one non-profit organization that opposes the moratorium is releasing new research on Friday examining which state AI laws would be most at risk if the moratorium is adopted; the group shared the findings in advance with Tech Policy Press.

The report by Americans for Responsible Innovation — a 501(c)(4) that has received funding from Open Philanthropy and the Omidyar Network, among others — rates the chances of over a dozen state laws being blocked by a moratorium, from “likely” to “possible” to “unlikely.”

1 big thing: AI is upending cybersecurity
Axios AI+, Sam Sabin, June 6, 2025

Generative AI is evolving so fast that security leaders are tossing out the playbooks they wrote just a year or two ago.

Why it matters: Defending against AI-driven threats, including autonomous attacks, will require companies to make faster, riskier security bets than they’ve ever had to before.

The big picture: Boards today commonly demand that CEOs have plans to implement AI across their enterprises, even if legal and compliance teams are hesitant about security and IP risks.

1 big thing: AI’s crossover moment
Axios AI+, Scott Rosenberg, June 5, 2025

AI is hitting multiple tipping points in its impact on the tech industry, communication, government and human culture — and speakers at Axios’ AI+ Summit in New York yesterday mapped the transformative moment.

1. The software business is the first to feel AI’s full force, and we’re just beginning to see what happens when companies start using AI tools to accelerate advances in AI itself.

2. Chatbots are changing how people interact with one another.

3. Government isn’t likely to moderate AI’s risks.

4. Culture makers fear AI will undermine the urge to create.

At the Center for Strategic and International Studies, a Washington, D.C.-based think tank, the Futures Lab is working on projects to use artificial intelligence to transform the practice of diplomacy.

With funding from the Pentagon’s Chief Digital and Artificial Intelligence Office, the lab is experimenting with AIs like ChatGPT and DeepSeek to explore how they might be applied to issues of war and peace.

While in recent years AI tools have moved into foreign ministries around the world to aid with routine diplomatic chores, such as speech-writing, those systems are now increasingly being looked at for their potential to help make decisions in high-stakes situations. Researchers are testing AI’s potential to craft peace agreements, to prevent nuclear war and to monitor ceasefire compliance.

The Defense and State departments are also experimenting with their own AI systems. The U.S. isn’t the only player, either. The U.K. is working on “novel technologies” to overhaul diplomatic practices, including the use of AI to plan negotiation scenarios. Even researchers in Iran are looking into it.

Futures Lab Director Benjamin Jensen says that while the idea of using AI as a tool in foreign policy decision-making has been around for some time, putting it into practice is still in its infancy.

A 10-Year Ban on State AI Laws?!

As you may have heard, the U.S. House of Representatives last week passed the ‘One, Big, Beautiful Bill’, a budget reconciliation bill, which is now with the Senate. One particularly controversial inclusion is a 10-year moratorium on states passing their own AI legislation.

A strong bipartisan coalition has come out against this provision, referred to as preemption. For example, in a recent letter, a group of 40 state attorneys general from both major parties urged Congress to reject the moratorium, warning it “would directly harm consumers, deprive them of rights currently held in many states, and prevent State AGs from fulfilling their mandate to protect consumers”.

Additionally, a new poll by Common Sense Media finds widespread concerns about the potential negative effects of AI, especially on youth, and that 73% of voters across party lines want both states and the federal government to regulate AI. The proposed federal ban itself is unpopular: 59% of voters oppose it, and 52% say it makes them less likely to support the budget bill entirely.

We’ll keep you posted on what happens next!

Total Information Awareness, Rebooted
The Long Memo, William A. Finnegan, June 2, 2025

I’ve gotten a fair number of questions about the Palantir news, so let me lay out a few key things to keep in mind.

First:

The U.S. government has had this kind of capability—the ability to know anything about you, almost instantly—for decades.

Yeah. Decades.

Ever wonder how, three seconds before a terrorist attack, we know nothing, but three seconds after, we suddenly know their full bio, travel record, high school GPA, what they had for breakfast, the lap dance they got the night before, and the last time they took a dump?

Yeah. Data collection isn’t the problem. It never has been. The problem is, and always has been, connecting the dots.

The U.S. government vacuums up data 24/7. Some of it legally. Some of it… less so. And under the Trump Regime, let’s be honest—we’re not exactly seeing a culture of legal compliance over at DHS, the FBI, or anywhere else. Unless Pete Hegseth adds a hooker or a media executive to a Signal thread and it leaks, we’re not going to know what they’re doing.

But the safest bet? Assume Title 50 is out the f*ing window.

While today’s Artificial Narrow Intelligence (ANI) tools have limited purposes like diagnosing illness or driving a car, Artificial General Intelligence (AGI), if managed well, could usher in great advances in the human condition, encompassing medicine, education, longevity, reversing global warming, scientific advancement, and a more peaceful world. However, if left unbridled, AGI also has the potential to end human civilization. This book discusses the current status of regulations concerning the creation, licensing, use, implementation and governance of AGI, and provides recommendations for the future.

Based on an international assessment of the issues and potential governance approaches for the transition from ANI of today to future forms of AGI by The Millennium Project, a global participatory think tank, the book explores how to manage this global transition. Section 1 shares the views of 55 AGI experts and thought leaders from the US, China, UK, Canada, EU, and Russia, including Elon Musk, Sam Altman and Bill Gates, on 22 critical questions. In Section 2, The Millennium Project futurist team analyzes these views to create a list of potential regulations and global governance systems or models for the safe emergence of AGI, rated and commented on by an international panel of futurists, diplomats, international lawyers, philosophers, scientists and other experts from 47 countries.

This book broadens and deepens the current conversations about future AI, educating the public as well as those who make decisions and advise others about potential artificial intelligence regulations.

  • Provides international assessments of specific regulations, guardrails, and global governance models
  • Includes contributions from notable experts
  • Compiles the latest thinking on national and global AGI governance from 300 AGI experts

Code Dependent
The One Percent Rule, Colin W.P. Lewis, June 2, 2025

In an era where the rhetoric of innovation is indistinguishable from statecraft, Code Dependent does not so much warn as it excavates. Madhumita Murgia has not written a treatise. She has offered evidence, damning, intimate, unignorable. Her subject is not artificial intelligence, but the human labor that props up its illusion: not the circuits, but the sweat.

Reading her work is like entering a collapsed mine: you feel the pressure, the depth, the lives sealed inside. She follows the human residue left on AI’s foundations, from the boardrooms of California where euphemism is strategy, to the informal settlements of Nairobi and the fractured tenements of Sofia. What emerges is not novelty, but repetition: another economy running on extraction, another generation gaslit into thinking the algorithm is neutral. AI, she suggests, is simply capitalism’s latest disguise. And its real architects, the data annotators, the moderators, the ‘human-in-the-loop’, remain beneath the surface, unthanked and profoundly necessary.

The subtitle might well have been The Human Infrastructure of Intelligence. The first revelation is that there is no such thing as a purely artificial intelligence. The systems we naively describe as autonomous are, in fact, propped up by an army of precarious, low-wage workers, annotators, moderators, cleaners of the digital gutters. Hiba in Bulgaria. Ian in Kibera. Ala, the beekeeper turned dataset technician. Their hands touch the data that touches our lives. They are not standing at the edge of technological history; they are kneeling beneath it, holding it up. Many of these annotators are casually employed as gig workers by Scale AI, valued at US$15 billion.

In a recent study, blocking mobile internet on smartphones:

“improved mental health, subjective well-being, and objectively measured ability to sustain attention. … When people did not have access to mobile internet, they spent more time socializing in person, exercising, and being in nature.”

Nowhere is this tension more evident than with social technology and apps (including video). The smartphone, a device of staggering power, was meant to amplify human intellect, yet it has become an agent of distraction.

In the grandest act of cognitive bait-and-switch, our age of limitless information has delivered not enlightenment but a generation entranced by an endless stream of digital ephemera, content optimized for transience rather than thought, reaction rather than reflection.

Bill Gates Is Wrong. A New Decade of Human Excellence Is Coming
Luiza’s Newsletter, Luiza Jarovsky, April 21, 2025

AI’s Legal and Ethical Challenges 

Bill Gates has been saying that in the next decade, humans won’t be needed for most things, but he’s wrong.

A new decade of human excellence is coming, but not for the reasons most people think.

I think that more and more people will want to see and experience the raw human touch behind human work.

And this is excellent.

Excellent professionals will thrive.

Dean W. Ball’s new OSTP position
Hyperdimensional, Dean W. Ball, April 17, 2025

I am pleased to announce that as of this week, I have taken on the role of Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy.

It is an honor and a thrill to have been asked to serve my country. Nonetheless, this is bittersweet news to deliver. This role means that I cannot continue to regularly publish Hyperdimensional. I will miss doing so. Over the past 16 months, I have published 75 essays here (not including today’s post), easily spanning the length of a novel. This newsletter’s audience has grown to more than 7,500 exceptionally talented and accomplished people spanning a wide range of countries and fields.

I am perpetually amazed that such a fantastic group of people takes the time to read my writing. Writing Hyperdimensional has been the most fun I’ve ever had in a job. Thank you all for letting me do it.

Hyperdimensional will no longer be a weekly publication. The publication will remain active, however, because I intend to write again when I return to the private sector. So I encourage you to remain subscribed; I promise that I will not bother you with extraneous emails, ads, cross-postings, or anything other than original writing by me. I also plan to keep the archive of my past posts active. Please note, though, that all views expressed in past writing, here or elsewhere (including the private governance essay linked at the top of this post), are exclusively my own, and do not necessarily represent or telegraph Trump Administration policy.

DeepMind built an AI that invents better AI: We are entering the Era of AI Experience
The One Percent Rule, Colin W.P. Lewis, April 15, 2025

A pivotal moment in the evolution of AI

Written by two of the world’s leading AI developers, both still actively engaged in new efforts, it would be rare for a research paper like the one below not to make headlines.

First, in a short interview David Silver confirms that they have built a system that used Reinforcement Learning (RL) to discover its own RL algorithms. This AI-designed system outperformed all human-created RL algorithms developed over the years. Essentially, Google DeepMind built an AI that invents better AI.
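
To make that first point concrete, here is a toy sketch of the outer-loop idea: an outer search proposes candidate update rules, and an inner reinforcement-learning run scores each one. Everything here (the names, the random-search stand-in, the tiny chain environment) is an illustrative assumption, not DeepMind’s actual method.

```python
# Toy sketch of "RL discovering RL": an outer loop searches over candidate
# update rules; each candidate is scored by how well an inner Q-learning
# agent trained with it performs. Purely illustrative -- DeepMind's real
# system discovers far richer algorithms than these two scalar knobs.
import random

def make_candidate_rule():
    """Propose a candidate update rule; here just a learning rate and a
    discount factor, standing in for a learned algorithm."""
    return {"lr": random.uniform(0.01, 0.5), "gamma": random.uniform(0.8, 1.0)}

def train_and_evaluate(rule, episodes=200):
    """Inner loop: train a tiny Q-learner on a 10-state chain environment
    with the candidate rule, and return a crude performance score."""
    q = [0.0] * 10
    total = 0.0
    for _ in range(episodes):
        s = 0
        while s < 9:
            s_next = s + 1 if random.random() < 0.9 else max(s - 1, 0)
            reward = 1.0 if s_next == 9 else 0.0
            q[s] += rule["lr"] * (reward + rule["gamma"] * q[s_next] - q[s])
            s = s_next
        total += q[0]          # value of the start state as the score
    return total / episodes

# Outer loop: propose 50 candidate rules and keep the best-performing one.
best = max((make_candidate_rule() for _ in range(50)), key=train_and_evaluate)
print("best discovered rule:", best)
```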

Second, the paper seeks to take AI back to its roots, to the early compulsions of curiosity: trial, error, feedback. David Silver and Richard Sutton, two AI researchers with more epistemological steel than most, have composed a missive that reads less like a proclamation and more like a reorientation, a resetting of AI’s moral compass toward what might actually build superintelligence. They call it “The Era of Experience”, and state

Taking a responsible path to AGI
Google DeepMind, Anca Dragan et al., April 2, 2025

We’re exploring the frontiers of AGI, prioritizing readiness, proactive risk assessment, and collaboration with the wider AI community.

Artificial general intelligence (AGI), AI that’s at least as capable as humans at most cognitive tasks, could be here within the coming years.

Integrated with agentic capabilities, AGI could supercharge AI to understand, reason, plan, and execute actions autonomously. Such technological advancement will provide society with invaluable tools to address critical global challenges, including drug discovery, economic growth and climate change.

This means we can expect tangible benefits for billions of people. For instance, by enabling faster, more accurate medical diagnoses, it could revolutionize healthcare. By offering personalized learning experiences, it could make education more accessible and engaging. By enhancing information processing, AGI could help lower barriers to innovation and creativity. By democratising access to advanced tools and knowledge, it could enable a small organization to tackle complex challenges previously only addressable by large, well-funded institutions.

Privacy Challenges in the Age of AI, with Daniel Solove
Luiza’s Newsletter, April 18, 2025 (01:04:00)

Prof. Solove is the Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law at the George Washington University Law School.

A globally acclaimed privacy scholar and expert, he has written numerous seminal books and articles on the subject, is among the most cited legal scholars of all time, and has been shaping the privacy field for over 25 years.

In this talk, we discussed his new book, “On Privacy and Technology,” and hot topics at the intersection of privacy and AI.

“Imagine digitally connected people’s daily lives in the social, political, and economic landscape of 2035. Will humans’ deepening partnership with and dependence upon AI and related technologies have changed being human for better or worse?

Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors?’”

This is the second of four pages with responses to the question above. The following sets of experts’ essays are a continuation of Part I of the overall series of insightful responses focused on how “being human” is most likely to change between 2025 and 2035, as individuals who choose to adopt and then adapt to implementing AI tools and systems adjust their patterns of doing, thinking and being. This web page features many sets of essays organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant. Some essays are lightly edited for clarity.

OpenAI’s o3 and Tyler Cowen’s Misguided AGI Fantasy
Marcus on AI, Gary Marcus, April 17, 2025

AI can only improve if its limits as well as its strengths are faced honestly

I’ve noticed this disappointing transformation in Cowen (I used to respect him, and enjoyed our initial conversations in August and November 2021) over the last three years – more or less since ChatGPT dropped and growing steadily worse over time.

More and more, his discussions of AI have become entirely one-sided, often featuring over-the-top instantaneous reports from the front line that don’t bear up over time, like one in February in which he alleged that Deep Research had written “a number of ten-page papers [with] quality comparable to having a good PhD-level research assistant” without even acknowledging, for example, the massive problem LLMs have with fabricating citations. (A book that Cowen “wrote” with AI last year is sort of similar; it got plenty of attention, as a novelty, but I don’t think the ideas in it had any lasting impact on economics whatsoever.)

Hinton vs Musk
Marcus on AI, Gary Marcus, April 3, 2025

History will judge Musk harshly, for many reasons, including what he has done to science (as I discussed here a few weeks ago).

Brian Wandell, Director of the Stanford Center for Cognitive and Neurobiological Imaging, has described the situation on the ground concisely:

The cuts are abrupt, unplanned, and made without consultation. They are indiscriminate and lack strategic consideration.

Funding for graduate students across all STEM fields is being reduced. Critical staff who maintain shared research facilities are being lost. Research on advanced materials for computing, software for medical devices, and new disease therapies—along with many other vital projects—is being delayed or halted.

Amazon’s New AI: A Privacy Wild West
Luiza’s Newsletter, Luiza Jarovsky, April 8, 2025

Earlier today, Amazon launched its new AI model, Nova Sonic. According to the company, it unifies speech understanding and speech generation in a single model, with the goal of enabling more human-like voice conversations in AI-powered applications.

Amazon also highlighted that “Nova Sonic even understands the nuances of human conversation, including the speaker’s natural pauses and hesitations, waiting to speak until the appropriate time, and gracefully handling barge-ins.”

The Internet & AI: An interview with Vint Cerf
AI Policy Perspectives, March 27, 2025

Vint Cerf, an American computer scientist, is widely regarded as one of the founders of the Internet. Since October 2005, he has served as Vice President and Chief Internet Evangelist at Google. Recently, he sat down with Google DeepMind’s Public Policy Director Nicklas Lundblad, for a conversation on AI, its relationship with the Internet, and how both may evolve. The interview took place with Vint in his office in Reston, Virginia, and Nicklas in the mountains of northern Sweden. Behind Vint was an image of the interplanetary Internet system – a fitting backdrop that soon found its way into the discussion.

I. The relationship between the Internet and AI

II. Hallucinations, understanding and world models

III. Density & connectivity in human vs silicon brains

IV. On quantum & consciousness

V. Adapting Internet protocols for AI agents

VI. Final reflections

Where We Are Headed
Hyperdimensional, Dean W. Ball, March 27, 2025

The Coming of Agents
First things first: eject the concept of a chatbot from your mind. Eject image generators, deepfakes, and the like. Eject social media algorithms. Eject the algorithm your insurance company uses to assess claims for fraud potential. I am not talking, especially, about any of those things.

Instead, I’m talking about agents. Simply put and in at least the near term, agents will be LLMs configured in such a way that they can plan, reason, and execute intellectual labor. They will be able to use, modify, and build software tools, obtain information from the internet, and communicate with both humans (using email, messaging apps, and chatbot interfaces) and with other agents. These abstract tasks do not constitute everything a knowledge worker does, but they constitute a very large fraction of what the average knowledge worker spends their day doing.

Agents are starting to work. They’re going to get much better. There are many reasons this is true, but the biggest one is the reinforcement learning-based approach OpenAI pioneered with their o1 models, and which every other player in the industry either has or is building. The most informative paper to read about how this broad approach works is DeepSeek’s r1 technical report.
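
As a rough illustration of the loop Ball is describing, the sketch below shows an LLM-driven plan-act-observe cycle. The `call_llm` stub, the tool registry, and the JSON action format are all hypothetical stand-ins, not any vendor’s actual API.

```python
# Minimal sketch of an agent loop: the model plans an action, a tool
# executes it, and the observation is fed back so the model can re-plan.
import json

def call_llm(messages):
    """Hypothetical stand-in for a real model call. This stub searches
    once, then finishes, so the loop below runs end to end."""
    if any(m["role"] == "tool" for m in messages):
        return json.dumps({"finish": "summary of: " + messages[-1]["content"]})
    return json.dumps({"tool": "search", "input": messages[0]["content"]})

TOOLS = {
    "search": lambda query: f"(web results for {query!r})",
}

def run_agent(goal, max_steps=10):
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = json.loads(call_llm(messages))   # plan
        if "finish" in action:                    # the model decides it is done
            return action["finish"]
        observation = TOOLS[action["tool"]](action["input"])  # act
        messages.append({"role": "tool", "content": observation})  # observe
    return "step budget exhausted"

print(run_agent("find the latest AGI governance report"))
```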

GenAI is never going to disappear. The tools have their uses. But the economics do not and have not ever made sense, relative to the realities of the technology. I have been writing about the dubious economics for a long time, since my August 2023 piece here on whether Generative AI would prove to be a dud. (My warnings about the technical limits, such as hallucinations and reasoning errors, go back to my 2001 book, The Algebraic Mind, and my 1998 article in Cognitive Psychology.)

The Future of AI is not GenAI
Importantly, though, GenAI is just one form of AI among the many that might be imagined. GenAI is an approach that is enormously popular, but one that is neither reliable nor particularly well-grounded in truth.

Different, yet-to-be-developed approaches, with a firmer connection to the world of symbolic AI (perhaps hybrid neurosymbolic models) might well prove to be vastly more valuable. I genuinely believe arguments from Stuart Russell and others that AI could someday be a trillion dollar annual market.

But unlocking that market will require something new: a different kind of AI that is reliable and trustworthy.

Career Advice Given AGI, How I’d Start From Scratch
Patel YouTube, March 25, 2025 (40:10)

I recorded an AMA! I had a blast shooting the shit with my friends Trenton Bricken and Sholto Douglas.

We discussed my new book, career advice given AGI, how I pick guests, how I research for the show, and some other nonsense.

My book, “The Scaling Era: An Oral History of AI, 2019-2025,” is available in digital format now: https://press.stripe.com/scaling

Google DeepMind CEO Demis Hassabis said that artificial general intelligence (AGI) will compete with human competence in the next five to 10 years, and that it will “exhibit all the complicated capabilities” people have. This could escalate worries over AI’s impact on jobs, a shift already in motion at companies like Klarna and Workday.

What your coworker looks like is expected to change in the very near future. Instead of humans huddled in office cubicles, people will be working alongside digital colleagues. That’s because Google DeepMind CEO Demis Hassabis said AI will catch up to human capabilities in just a few years—not decades.

“Today’s [AI] systems, they’re very passive, but there’s still a lot of things they can’t do,” Hassabis said during a briefing at DeepMind’s London headquarters on Monday. “But over the next five to 10 years, a lot of those capabilities will start coming to the fore and we’ll start moving towards what we call artificial general intelligence.”

Facing the Future: There are No Publications, Just Communities
Facing the Future, Dana F. Blankenhorn, March 24, 2025

But there is no such thing as a newspaper, a magazine, a TV news channel or even a news website anymore. There is only the Web. If you want to live there, you must build a community within it.

That means doing something I hate, namely specializing. It also means creating a two-way street, like Facebook without the sludge. A safe place for locals to not only vent but connect, emphasis on the word SAFE. You’re about as safe on Facebook as you are in an unlit alley behind a strip club after midnight on a weekend.

Once you build a community, you can build another, but it won’t be any cheaper than the first one was. Doing this takes deep learning, expertise, and a desire to serve. The best publishers have always identified with their readers, sometimes to a ridiculous degree. Their business is creating communities around shared needs, through unbiased journalism and a clear delineation between advertising and editorial.

In a world with over five million podcasts, Dwarkesh Patel stands out as an unexpected trailblazer. At just 23 years old, he has caught the attention of influential figures such as Jeff Bezos, Noah Smith, Nat Friedman, and Tyler Cowen, who have all praised his interviews — the latter describing Patel as “highly rated but still underrated!” Through his podcast, he has created a platform that draws in some of the most influential minds of our time, from tech moguls to AI pioneers.

But of all the noteworthy parts of Patel’s journey to acclaim, one thing stands out among the rest: just how deeply he will go on any given topic.

“If I do an AI interview where I’m interviewing Demis [Hassabis], CEO of DeepMind, I’ll probably have read most of DeepMind’s papers from the last couple of years. I’ve literally talked to a dozen AI researchers in preparation for that interview — just weeks and weeks of teaching myself about [everything].”

Building the Future We Want with AI
The One Percent Rule, Colin W.P. Lewis

On my AI courses, I don’t just teach how to build AI; I emphasize understanding what it is. Most importantly, I explore the what and the why. My goal is to leave no stone unturned in the minds of my students and executives, fostering a comprehensive awareness of AI’s potential and its pitfalls.

Crucially, this involves cultivating widespread AI literacy, empowering individuals to responsibly understand, build, and engage with these transformative technologies. Our exploration centers on developing applications that enhance societal well-being, moving beyond the pursuit of mere profit. My AI app for a major bank, designed to assist individuals with vision impairment, exemplifies this philosophy.

This focus on ethical development and human-centered design underscores my conviction that the future of AI depends on our ability to move beyond simplistic narratives and embrace a nuanced understanding of its potential. Whatever we may think of AI, and I have many conflicting thoughts, it is certain that it will foretell our future, so we must learn to shape it and rebuild our humane qualities.