Summary

The current US government’s approach to governing AGI is embodied in President Donald Trump’s executive order of January 23, 2025, “Removing Barriers to American AI Innovation.”

In the absence of comprehensive federal AI legislation, US states are actively shaping AI policy, with initiatives ranging from government AI use guidelines to consumer protection measures and studies on AI’s impact.

OnAir Post: US AI Policy in 2025

News

The Government Knows AGI is Coming
The Ezra Klein Show, March 4, 2025 (01:03:00)

Artificial general intelligence — an A.I. system that can beat humans at almost any cognitive task — is arriving in just a couple of years. That’s what people tell me — people who work in A.I. labs, researchers who follow their work, former White House officials. A lot of these people have been calling me over the last couple of months trying to convey the urgency. This is coming during President Trump’s term, they tell me. We’re not ready.

One of the people who reached out to me was Ben Buchanan, the top adviser on A.I. in the Biden White House. And I thought it would be interesting to have him on the show for a couple reasons: He’s not connected to an A.I. lab, and he was at the nerve center of policymaking on A.I. for years. So what does he see coming? What keeps him up at night? And what does he think the Trump administration needs to do to get ready for the A.G.I. – or something like A.G.I. – he believes is right on the horizon?

REMOVING BARRIERS TO AMERICAN AI INNOVATION: Today, President Donald J. Trump signed an Executive Order eliminating harmful Biden Administration AI policies and enhancing America’s global AI dominance.

  • President Trump is fulfilling his promise to revoke Joe Biden’s dangerous Executive Order that hinders AI innovation and imposes onerous and unnecessary government control over the development of AI.
  • The Biden AI Executive Order established unnecessarily burdensome requirements for companies developing and deploying AI that would stifle private sector innovation and threaten American technological leadership.
  • Today’s executive order:
    • Revokes the Biden AI Executive Order which hampered the private sector’s ability to innovate in AI by imposing government control over AI development and deployment.
    • Calls for departments and agencies to revise or rescind all policies, directives, regulations, orders, and other actions taken under the Biden AI order that are inconsistent with enhancing America’s leadership in AI.

ENHANCING AMERICA’S AI LEADERSHIP: The United States must act decisively to retain leadership in AI and enhance our economic and national security.

  • This Executive Order establishes the commitment of the United States to sustain and enhance America’s dominance in AI to promote human flourishing, economic competitiveness, and national security.
  • American development of AI systems must be free from ideological bias or engineered social agendas. With the right government policies, the United States can solidify its position as the leader in AI and secure a brighter future for all Americans.
    • The order directs the development of an AI Action Plan to sustain and enhance America’s AI dominance, led by the Assistant to the President for Science & Technology, the White House AI & Crypto Czar, and the National Security Advisor.
    • It further directs the White House to revise and reissue OMB AI memoranda to departments and agencies on the Federal Government’s acquisition and governance of AI to ensure that harmful barriers to America’s AI leadership are eliminated.

CONTINUING PRIORITIZATION OF AI: President Trump has made American leadership in AI a priority and is now building on his actions during his first administration.

  • President Trump signed the first-ever Executive Order on AI in 2019 recognizing the paramount importance of American AI leadership to the economic and national security of the United States.
  • President Trump also took executive action in 2020 to establish the first-ever guidance for Federal agency adoption of AI to more effectively deliver services to the American people and foster public trust in this critical technology.
  • Today’s Executive Order builds upon these past successes and clears a path for the United States to act decisively to retain leadership in AI, rooted in free speech and human flourishing.

The Coming Year of AI Regulation in the States
Tech Policy Press, January 7, 2025

American AI policy in 2025 will almost certainly be dominated, yet again, by state legislative proposals rather than federal government proposals. Congress will have its hands full confirming the new Trump administration’s nominees, establishing a budget, grappling with the year-end expiration of Trump’s earlier tax cuts, and perhaps even with weighty topics like immigration reform. Federal AI policy is likely to be a lower priority. Thus, statehouses will be where the real action can be found on this vital topic.

In 2024, state lawmakers introduced hundreds of AI policy proposals. Only a small fraction passed, and of those, the vast majority were fairly anodyne, such as creating protections against malicious deepfakes or initiating state government committees to study different aspects of AI policy. Few constituted substantive new regulations. An AI transparency bill in California and a civil-rights-based bill in Colorado are notable exceptions.

In the coming year, expect to see far more major, preemptive AI regulatory proposals. These will look more like European Union regulations than the more modest US proposals that predominated in 2024.

Texas Plows Ahead: Texas’ onerous AI regulation is formally introduced
Hyperdimensional, Dean W. Ball, January 2, 2025

The Texas Responsible AI Governance Act (TRAIGA) has been formally introduced in the Texas legislature, now bearing an official bill number: HB 1709. It has been modified from its original draft, improving it in some important ways and worsening it in others. In the end, TRAIGA/HB 1709 still retains most of the fundamental flaws I described in my first essay on the bill. It is, by far, the most aggressive AI regulation America has seen with a serious chance at becoming law—much more even than SB 1047, the California AI bill that was the most-discussed AI policy of 2024 before being vetoed in September.

This bill is massive, so I will not cover all its provisions comprehensively. Here, however, is a summary of what the new version of TRAIGA does.

TRAIGA in Brief

The ostensible purpose of TRAIGA is to combat algorithmic discrimination, or the notion that an AI system might discriminate, intentionally or unintentionally, against a consumer based on their race, color, national origin, gender, sex, sexual orientation, pregnancy status, age, disability status, genetic information, citizenship status, veteran status, military service record, and, if you reside in Austin, which has its own protected classes, marital status, source of income, and student status. It also seeks to ensure the “ethical” deployment of AI by creating an exceptionally powerful AI regulator, and by banning certain use cases, such as social scoring, subliminal manipulation by AI, and a few others.

 

What Comes After SB 1047?
Hyperdimensional, Dean W. Ball, September 30, 2024

Introduction
SB 1047 was vetoed yesterday by California Governor Gavin Newsom. The bill would have imposed a liability regime for large AI models (proponents would say it clarifies existing liability), mandated Know Your Customer (KYC) rules for data centers, created an AI safety auditing industry and an accompanying regulator to oversee that industry, granted broad whistleblower protections to AI company staff, initiated a California-owned public compute infrastructure, and more.

It was a sweeping bill, no matter how many different ways proponents discovered to say the bill was “light touch.” Indeed, of the major provisions listed above, really only the first (liability) was a subject of significant public debate. The fact that a major issue like data center KYC barely warranted discussion is a signal of just how ambitious SB 1047 was.

Governor Newsom is therefore wise to have vetoed the bill; at the end of the day, it was simply biting off more than it could chew.

If you got your information purely from X, you would assume that Governor Newsom has abandoned AI regulation and is keen to let a thousand flowers bloom. The reality, though, is much more complicated. First, by his own count, the Governor signed 17 other AI-related bills from this legislative session alone. Second, in his veto message, Governor Newsom was clear about his intention to regulate AI even more in the future, including for California to serve as America’s main AI regulator if need be (emphasis added):

About

Web Links

New US AI Policy Examined

Key Insights on President Trump’s New AI Executive Order

Source: Patton Boggs

The Trump EO signals a significant shift away from the Biden administration’s emphasis on oversight, risk mitigation and equity toward a framework centered on deregulation and the promotion of AI innovation as a means of maintaining US global dominance.

Key Differences Between the Trump EO and Biden EO

The Trump EO explicitly frames AI development as a matter of national competitiveness and economic strength, prioritizing policies that remove perceived regulatory obstacles to innovation. It criticizes the influence of “engineered social agendas” in AI systems and seeks to ensure that AI technologies remain free from ideological bias. By contrast, the Biden EO focused on responsible AI development, placing significant emphasis on addressing risks such as bias, disinformation and national security vulnerabilities. The Biden EO sought to balance AI’s benefits with its potential harms by establishing safeguards, testing standards and ethical considerations in AI development and deployment.

Another significant shift in policy is the approach to regulation. The Trump EO mandates an immediate review and potential rescission of all policies, directives and regulations established under the Biden EO that could be seen as impediments to AI innovation. The Biden EO, however, introduced a structured oversight framework, including mandatory red-teaming for high-risk AI models, enhanced cybersecurity protocols and monitoring requirements for AI used in critical infrastructure. The Biden administration also directed federal agencies to collaborate in the development of best practices for AI safety and reliability, efforts that the Trump EO effectively halts.

February 2025 AI Developments Under the Trump Administration

Source: Covington

White House Issues Request for Information on AI Action Plan

On February 6, the White House Office of Science & Technology Policy (“OSTP”) issued a Request for Information (“RFI”) seeking public input on the content that should be in the White House’s yet-to-be-issued AI Action Plan.  The RFI marks the Trump Administration’s first significant step in implementing the very broad goals in the January 2025 AI EO, which requires Assistant to the President for Science & Technology Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to develop an “action plan” to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”  The RFI states that the AI Action Plan will “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.”

Specifically, the RFI seeks public comment on the “highest priority policy actions” that should be included in the AI Action Plan and encourages respondents to recommend “concrete” actions needed to address AI policy issues.  While noting that responses may “address any relevant AI policy topic,” the RFI provides 20 topics for potential input.  These topics are general and do not include specific questions or areas where particular input is needed.  The topics include: hardware and chips, data centers, energy consumption and efficiency, model and open-source development, data privacy and security, technical and safety standards, national security and defense, intellectual property, procurement, and export controls.  As of March 13, over 325 comments on the AI Action Plan have been submitted.  The public comment period ends on March 15, 2025.  Under the EO, the finalized AI Action Plan must be submitted to the President within 180 days of the order, by late July 2025.

Vice President JD Vance Outlines U.S. AI Policy Priorities at Paris AI Action Summit

NIST Seeks Public Comment on Cyber AI Profile

President Trump Issues Memorandum on America First Investment Policy

Congress and States Respond to DeepSeek and U.S.-China AI Race

Proposed State Governance

Texas & AI Policy

Source: AI Overview

Texas is actively developing AI policy, with the Texas Responsible AI Governance Act (TRAIGA) aiming to regulate AI systems, particularly those posing “unacceptable risks,” and establish a regulatory framework with requirements for developers, deployers, and distributors of AI systems. 

Here’s a more detailed breakdown of Texas’s AI policy landscape:

Key Legislation & Initiatives:
  • The Texas Responsible AI Governance Act (TRAIGA), introduced by State Representative Giovanni Capriglione, aims to establish a comprehensive regulatory framework for AI in Texas. 

    • Focus: Addressing algorithmic discrimination, ensuring data security, and conducting annual impact assessments. 
    • Prohibited Uses: The bill would ban AI systems that manipulate human behavior, engage in social scoring, capture biometric identifiers without consent, or produce deepfakes depicting sexual content. 
    • High-Risk AI Systems: TRAIGA focuses on “high-risk” AI systems used in consequential decisions, such as employment, healthcare, financial services, and criminal justice, requiring mandatory risk assessments, record-keeping, and transparency measures. 
    • Enforcement: The state’s attorney general would enforce the law, with fines of up to $100,000 for certain violations. 
  • TRAIGA proposes the establishment of a Texas AI Council with powers to issue ethical guidelines and rules for AI deployment across the state. 

  • The Texas Department of Information Resources (DIR) is also involved in AI policy development, with initiatives like the AI User Group (AI-UG) to educate and promote the use of AI technologies for government services. 

  • The DIR has developed an AI Risk Management Framework to help organizations and individuals foster the responsible design, development, deployment, and use of AI systems. 

  • Governor Abbott’s Ban on Chinese AI Apps:
    Governor Abbott has ordered state agencies to ban Chinese government-based AI and social media apps from all state-issued devices. 

California & AI Policy

Source: Mayer Brown

New California Law Will Require AI Transparency and Disclosure Measures

On September 19, 2024, California Governor Gavin Newsom signed into law the California AI Transparency Act, which will require providers of generative artificial intelligence (AI) systems to: (a) make available an AI detection tool; (b) offer AI users the option to include a manifest disclosure that content is AI generated; (c) include a latent disclosure in AI-generated content; and (d) enter into a contract with licensees requiring them to maintain the AI system’s capability to include such a latent disclosure in content the system creates or alters. The California AI Transparency Act goes into effect January 1, 2026, and is the nation’s most comprehensive and specific AI watermarking law.