I believe the federal government should possess a robust capacity to evaluate whether frontier AI systems could cause catastrophic harm in the hands of determined malicious actors. Since this is precisely what the US AI Safety Institute does, I believe it should be preserved, and indeed that its funding should be increased. If it is not preserved, the government will have to find other ways to maintain this capability quickly, which seems like an awful lot of trouble to go through to replicate an existing governmental function; I do not see the point of doing so. In the longer term, AISI can play a critical nonregulatory role in the diffusion of advanced AI, not just in catastrophic risk assessment but in capability evaluations more broadly.
Why AISI is Worth Preserving
Shortly after the 2024 election, I wrote a piece titled “AI Safety Under Republican Leadership.” I argued that the Biden-era definition of “AI safety” was hopelessly broad, incorporating everything from algorithmic bias and misinformation to catastrophic and even existential risks. In addition to making it impossible to execute an effective policy agenda, this capacious conception of AI safety also made the topic deeply polarizing. When farcical examples of progressive racial neuroses manifested themselves in Google Gemini’s black Founding Fathers and Asian Nazis, critics could—correctly—say that such things stemmed directly from “AI safety policy.”