The upshot of last week’s analysis is that automated AI research and engineering is already happening to some extent (as OpenAI has demonstrated), but that we don’t quite know what this will mean. The bearish case (yes, bearish) for the effect of automated AI research is that it will yield a step-change acceleration in AI capabilities progress similar to the one that followed the discovery of the reasoning paradigm. Before that, new models came every 6-9 months; after it, they came every 3-4 months. A similar leap may occur, with noticeably better models coming every 1-2 months—though for marketing reasons labs may choose not to increment model version numbers that rapidly.
The most bullish case is that it will result in an intelligence explosion, with new research paradigms (such as the much-discussed “continual learning”) suddenly being solved, a rapid rise in reliability on long-horizon tasks, and a Cambrian explosion of model form factors, all scaling together rapidly to what we might credibly describe as “superintelligence” within a few months to at most a couple of years from when automated AI research begins happening in earnest.
Both of these extreme scenarios strike me as live possibilities, though of course an outcome somewhere between them seems likeliest. Even in the most bearish scenario, the public policy implications are significant, but the most salient fact for policymakers is the uncertainty itself.
The current capabilities of AI already have significant geopolitical, economic, and national-security implications. Any development whose conservative case is a step-change acceleration of this already rapidly evolving field, and whose bullish case is the rapid development of fundamentally new, meaningfully smarter-than-human AI, has clear salience for policymakers. But what, exactly, should policymakers do?
