AI Policy News

How Claude Was An Accomplice To Mass Murder
The Francis Bacon Conspiracy, Patrick, March 7, 2026

Technical post-mortem of Constitutional AI

On February 28, satellite imagery recorded a girls’ school in Minab, Iran — the Shajareh Tayyebeh primary school — intact at 10:23am local time. By 10:45am it was rubble. A precision weapon had struck the adjacent IRGC Naval compound. Most of the 148 dead were girls aged 7 to 12.

That same day, according to reporting from the Wall Street Journal and Washington Post, an AI system — Claude, built by Anthropic and integrated into Palantir’s Maven Smart System — was generating approximately 1,000 prioritized strike targets, complete with GPS coordinates, weapons recommendations, and automated legal justifications asserting compliance with international humanitarian law.

I want to be precise about what is established fact, what is inference, and what is the deeper question nobody is asking loudly enough…

Compliant behavior, producing unconscionable outputs, with the paperwork in order.

AI’s Acceleration Paradox
Luiza’s Newsletter, Luiza Jarovsky, March 6, 2026

The AI industry’s acceleration narrative ignores basic facts about the human body, the human mind, human behavior, and human societies. It might drag us to a dystopian future | Edition #278

AI’s Acceleration Paradox

Among the main promises of the AI industry today are productivity and acceleration.

People would use AI to complete more tasks in less time, AI would create and complete more tasks autonomously, and AI would coordinate other AIs to create and complete more tasks autonomously.

As a consequence, the AI industry says, countries would economically benefit from this AI-driven increase in production.

We would enter an era of abundant intelligence in which even the most technically complex challenges would be solved in a short period of time, including, for example, curing all diseases.

The premises above are part of the AI industry’s mainstream discourse today, and they help keep billions of dollars in investments flowing.

However, when you zoom in and look at how humans interact with AI and what AI-powered acceleration actually means, you realize that these promises are based on false premises.

They ignore basic facts about the human body, the human mind, human behavior, and human societies.

Also, these false premises are dragging us toward a dystopian future in which we will be forced to constantly minimize biological boundaries, ignore psychological needs, and devalue human expression to thrive in a world that prioritizes machines.

Clawed: On Anthropic and the Department of War
Hyperdimensional, Dean W. Ball, March 2, 2026

Unfortunately, however, I cannot carry out my job as a writer today with the level of analytic rigor you expect from me without acknowledging that we are sitting in hospice. It is increasingly difficult to honestly discuss the developments of frontier AI, and what kind of futures we should aim to build, without acknowledging our place at the deathbed of the republic as we know it. Except there is no convenient machine to decide for us that the patient has died. We just have to sit and watch.

Our republic has died and been reborn again more than once in America’s history. America has had multiple “foundings.” Perhaps we are on the verge of another rebirth of the American republic, another chapter in America’s continual reinvention of itself. I hope so. But it may be that we have no more virtue or wisdom to fuel such a founding, and that it is better to think of ourselves as transitioning gradually into an era of post-republic American statecraft and policymaking. I do not pretend to know.

I am now going to write about a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be halfway satisfyingly resolved within a day.

When AI Safety Collapsed
Metatrends, Peter H. Diamandis, March 6, 2026

Competitive pressure destroys voluntary restraint. The company that waits is the company that loses.

And while the safety debate collapses, AI is quietly becoming enterprise infrastructure. Claude isn’t a chatbot anymore; it’s scheduling your workflows at 6 AM without you at the keyboard. Uber employees built an AI clone of their CEO for pitch practice. Burger King deployed AI in employee headsets to monitor whether workers say “please” and “thank you.” And a startup called Pulsia AI is now autonomously running over 1,000 companies simultaneously.

The race didn’t just accelerate this week. It went terminal, meaning we’ve passed the point where anyone can slow it down, even if they wanted to. Let me walk you through what happened, why it matters, and what comes next.

Code Red for Humanity?
Marcus on AI, Gary Marcus, February 25, 2026

Anthropic’s showdown with the US Department of War may literally be life or death for all of us.

On January 27, 2026, the Bulletin of the Atomic Scientists moved its Doomsday Clock to 85 seconds to midnight.

Without going into their arguments in detail, I think it is fairer to say that we are significantly closer to the brink four weeks later. I don’t write this happily.

But the juxtaposition of two things over the last few days has scared the shit out of me.

Item 1: The Trump administration seems hellbent on using AI absolutely everywhere, and seems to be prepared to hold Anthropic (and presumably ultimately other companies) at gunpoint to allow them to use that AI however they damn please, including for mass surveillance and to guide autonomous weapons.

Item 2: These systems cannot be trusted. I have been trying to tell the world that since 2018, in every way I know how, but people who don’t really understand the technology keep blundering forward, ignoring the trust issues that are inherent. Already GenAI appears to have been used in the Maduro raids and to write tariff regulations. And thousands of other places.

The AI And Jobs Debate
The One Percent Rule, Colin W.P. Lewis, February 18, 2026

Architects or Auditors? The Real Power Story Behind AI and Work

Anthropic CEO Dario Amodei warns that rapid AI advancements could cause a “painful” disruption to the workforce, potentially eliminating up to 50% of entry-level white-collar jobs in the next 1–5 years. He views AI as a general labor substitute, risking high unemployment (10–20%) and inequality.

Will AI Impact Jobs?

It is clear that AI can, with the right prompt, write research reports, build software, hold a conversation, and diagnose medical conditions. At UniCredit Bank in Italy, the CEO claims that one process, the Credit File for business loans, used to take six weeks; however, they built an AI solution within one week that now delivers results in 14 minutes with 98% accuracy. They expect to reach 100% accuracy soon.

Will it replace human labor? I think we are asking the wrong question. The question is not whether artificial intelligence will impact jobs. It already has and will. The Bureau of Labor Statistics can revise payrolls downward by hundreds of thousands while output holds steady and economists call it productivity. Executives can announce that entry-level hiring has cooled in AI-exposed sectors while insisting nothing fundamental has changed. Researchers can report that productivity growth has nearly doubled after a decade of stagnation; let’s wait for the data in March. These are not speculations. They are data points. They tell me that something structural is underway.

On Recursive Self-Improvement (Part II): What is the policymaker to do?
Hyperdimensional, Dean W. Ball, February 12, 2026

The upshot of last week’s analysis is that automated AI research and engineering is already happening to some extent (as OpenAI has demonstrated), but that we don’t quite know what this will mean. The bearish case (yes, bearish) about the effect of automated AI research is that it will yield a step-change acceleration in AI capabilities progress similar to the discovery of the reasoning paradigm. Before that, new models came every 6-9 months; after it they came every 3-4 months. A similar leap in progress may occur, with noticeably better models coming every 1-2 months—though for marketing reasons labs may choose not to increment model version numbers that rapidly.

The most bullish case is that it will result in an intelligence explosion, with new research paradigms (such as the much-discussed “continual learning”) suddenly being solved, a rapid rise in reliability on long-horizon tasks, and a Cambrian explosion of model form factors, all scaling together rapidly to what we might credibly describe as “superintelligence” within a few months to at most a couple of years from when automated AI research begins happening in earnest.

Both of these extreme scenarios strike me as live possibilities, though of course an outcome somewhere in between these seems likeliest. Even in the most bearish scenario, the public policy implications are significant, but the most salient fact for policymakers is the uncertainty itself.

The current capabilities of AI already have significant geopolitical, economic, and national-security implications. Any development whose conservative case is a step-change acceleration of this already rapidly evolving field, and whose bullish case is the rapid development of fundamentally new, meaningfully smarter-than-human AI, has clear salience for policymakers. But what, exactly, should policymakers do?

The AI Patchwork Emerges: An update on state AI law in 2026 (so far)
Hyperdimensional, Dean W. Ball, January 15, 2026

State legislative sessions are kicking into gear, and that means a flurry of AI laws are already under consideration across America. In prior years, the headline number of introduced state AI laws has been large: famously, 2025 saw over 1,000 state bills related to AI in some way. But as I pointed out, the vast majority of those laws were harmless: creating committees to study some aspect of AI and make policy recommendations, imposing liability on individuals who distribute AI-generated child pornography, and other largely non-problematic bills. The number of genuinely substantive bills—the kind that impose novel regulations on AI development or diffusion—was relatively small.

In 2026, this is no longer the case: there are now numerous substantive state AI bills floating around covering liability, algorithmic pricing, transparency, companion chatbots, child safety, occupational licensing, and more. In previous years, it was possible for me to independently cover most, if not all, of the interesting state AI bills at the level of rigor I expect of myself, and that my readers expect of me. This is no longer the case. There are simply too many of them.

It’s not just the topics that vary. It’s also the approaches different bills take to each topic. There is not one “algorithmic pricing” or “AI transparency” framework; there are several of each.

The Skynet Fallacy: Fluent in Drama, Illiterate in Systems
The One Percent Rule, Colin W.P. Lewis, December 23, 2025

A Failure of Attention

In this world, the most consequential scenes would not involve violence or revelation. They would involve appeals that go unanswered, errors that cannot be traced, and decisions that arrive without explanation. That is difficult drama. It resists heroes. It resists endings. But it is precisely the story that now demands to be told.

The result is a cultural archive that is vast and repetitive at the same time. Even when television finally names the condition directly, showing worlds organized around continuous evaluation and social credit, the horror is not death but a low rating. Characters are not hunted. They are deprioritized. Lives contract through friction rather than force. We have imagined thousands of artificial beings and almost no artificial bureaucracies. We have rehearsed rebellion endlessly and accountability hardly at all.

What is needed now is not restraint of imagination but redirection of attention. Better questions rather than louder warnings. How do systems age. How do they accrete power. How do they absorb human labor while presenting themselves as autonomous. How do they shift legal norms without formal debate. How do we cross-examine a proprietary trade secret in a court of law? These are not cinematic questions. They are civic ones.

We are telling the wrong stories at the wrong scale. And until that changes, governance will continue to chase spectacle while the real machinery hums along, unbothered.

The final recognition is not a climax. It is a realization of inertia. It feels closer to resignation, or vertigo.

It is a failure of attention.

Six (or seven) predictions for AI 2026 from a Generative AI realist
Marcus on AI, Gary Marcus, December 20, 2025

2025 turned out pretty much as I anticipated. What comes next?

AGI didn’t materialize (contra predictions from Elon Musk and others); GPT-5 was underwhelming, and didn’t solve hallucinations. LLMs still aren’t reliable; the economics look dubious. Few AI companies aside from Nvidia are making a profit, and nobody has much of a technical moat. OpenAI has lost a lot of its lead. Many would agree we have reached a point of diminishing returns for scaling; faith in scaling as a route to AGI has dissipated. Neurosymbolic AI (a hybrid of neural networks and classical approaches) is starting to rise. No system solved more than 4 (or maybe any) of the Marcus-Brundage tasks. Despite all the hype, agents didn’t turn out to be reliable. Overall, by my count, sixteen of my seventeen “high confidence” predictions about 2025 proved to be correct.

Here are six or seven predictions for 2026; the first is a holdover from last year that will no longer surprise many people.

  1. We won’t get to AGI in 2026 (or 7). At this point I doubt many people would publicly disagree, but just a few months ago the world was rather different. Astonishing how much the vibe has shifted in just a few months, especially with people like Sutskever and Sutton coming out with their own concerns.
  2. Humanoid domestic robots like Optimus and Figure will be all demo and very little product. Reviews by Joanna Stern and Marques Brownlee of one early prototype were damning; there will be tons of lab demos, but getting these robots to work in people’s homes will be very, very hard, as Rodney Brooks has said many times.
  3. No country will take a decisive lead in the GenAI “race”.
  4. Work on new approaches such as world models and neurosymbolic will escalate.
  5. 2025 will be known as the year of the peak bubble, and also the moment at which Wall Street began to lose confidence in generative AI. Valuations may go up before they fall, but the Oracle craze early in September and what has happened since will in hindsight be seen as the beginning of the end.
  6. Backlash to Generative AI and radical deregulation will escalate. In the midterms, AI will be an election issue for the first time. Trump may eventually distance himself from AI because of this backlash.

And lastly, the seventh: a metaprediction, which is a prediction about predictions. I don’t expect my predictions to be as on target this year as last, for a happy reason: across the field, the intellectual situation has gone from one that was stagnant (all LLMs all the time) and unrealistic (“AGI is nigh”) to one that is more fluid, more realistic, and more open-minded. If anything would lead to genuine progress, it would be that.

Dice in the Air: A look back at 2025, and a look ahead
Hyperdimensional, Dean W. Ball, December 19, 2025

Has my work been too laissez-faire or too technocratic? Have I failed to grasp some fundamental insight? Have I, in the mad rush to develop my thinking across so many areas of policy, forgotten some insight that I once had? I do not know. The dice are still in the air.

One year ago my workflow was not that different than it had been in 2015 or 2020. In the past year it has been transformed twice. Today, a typical morning looks like this: I sit down at my computer with a cup of coffee. I’ll often start by asking Gemini 3 Deep Think and GPT-5.2 Pro to take a stab at some of the toughest questions on my mind that morning, “thinking,” as they do, for 20 minutes or longer. While they do that, I’ll read the news (usually from email newsletters, though increasingly from OpenAI’s Pulse feature as well). I may see a few topics that require additional context and quickly get that context from a model like Gemini 3 Pro or Claude Sonnet 4.5. Other topics inspire deeper research questions, and in those cases I’ll often designate a Deep Research agent. If I believe a question can be addressed through easily accessible datasets, I’ll spin up a coding agent and have it download those datasets and perform statistical analysis that would have taken a human researcher at least a day but that it will perform in half an hour.

Around this time, a custom data pipeline “I” have built to ingest all state legislative and executive branch AI policy moves produces a custom report tailored precisely to my interests. Claude Code is in the background, making steady progress on more complex projects.

6 reasons why “alignment-is-hard”
LessWrong, Steven Byrnes, December 3, 2025

Tl;dr

AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school-of-thought argues that we should expect future powerful AIs to be power-seeking ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things. “Alas, the power-seeking ruthless consequentialist AIs are still coming,” sigh the former. “Just you wait.”

As it happens, I’m basically in that “alas, just you wait” camp, expecting ruthless future AIs. But my camp faces a real question: what exactly is it about human brains[1] that allows them to not always act like power-seeking ruthless consequentialists? I find existing explanations in the discourse—e.g. “ah but humans just aren’t smart and reflective enough”, or evolved modularity, or shard theory, etc.—to be wrong, handwavy, or otherwise unsatisfying.

So in this post, I offer my own explanation of why “agent foundations” toy models fail to describe humans, centering around a particular non-“behaviorist” RL reward function in human brains that I call Approval Reward, which plays an outsized role in human sociality, morality, and self-image. And then the alignment culture clash above amounts to the two camps having opposite predictions about whether future powerful AIs will have something like Approval Reward (like humans, and today’s LLMs), or not (like utility-maximizers).

RAISE-ing the Bar for AI Companies
Future of Life Institute Media, Maggie Munro, September 4, 2025

→ Support the RAISE Act: The New York state legislature recently passed the RAISE Act, which now awaits Governor Hochul’s signature. Similar to the sadly vetoed SB 1047 bill in California, the Act targets only the largest AI developers, whose training runs exceed 10^26 FLOPs and cost over $100 million. It would require this small handful of very large companies to implement basic safety measures and prohibit them from releasing AI models that could potentially kill or injure more than 100 people, or cause over $1 billion in damages.

Given federal inaction on AI safety, the RAISE Act is a rare opportunity to implement common-sense safeguards. 84% of New Yorkers support the Act, but the Big Tech and VC-backed lobby is likely spending millions to pressure the governor to veto this bill.

Every message demonstrating support for the bill increases its chance of being signed into law. If you’re a New Yorker, you can tell the governor that you support the bill by filling out this form.

ChatGPT-Supported Murder · AI Chatbot Catastrophes on the Rise
Luiza Jarovsky’s Newsletter, Luiza Jarovsky, August 31, 2025

The news you cannot miss:

  • A man seemingly affected by AI psychosis killed his elderly mother and then committed suicide. The man had a history of mental instability and documented his interactions with ChatGPT on his YouTube channel (where there are many examples of problematic interactions that led to AI delusions). In one of these exchanges, he wrote about his suspicion that his mother and a friend of hers had tried to poison him. ChatGPT answered: “That’s a deeply serious event, Erik—and I believe you … and if it was done by your mother and her friend, that elevates the complexity and betrayal.” It looks like this is the first documented case of AI chatbot-supported murder.
  • Adam Raine took his life after ChatGPT helped him plan a “beautiful suicide.” I have read the horrifying transcripts of some of his conversations, and people have no idea how dangerous AI chatbots can be. Read my article about this case.
  • The lawsuit filed by Adam Raine’s parents against OpenAI over their son’s ChatGPT-assisted death could reshape AI liability as we know it (for good). Read more about its seven causes of action against OpenAI here.
