I never intended to become an AI report sleuth 🦎, but unbridled curiosity takes you to some funny places. This week I’ve been reading a new batch of AI studies a bit off the beaten track.
- 🏃 Research from Seismic Foundation is a large-scale effort to understand how ordinary people view AI risks. Download the entire report.
- 🗽 The other report that caught my attention is by The Autonomy Institute: Download the entire report.
- Meanwhile, more than 40 researchers from rival labs co-authored a new paper arguing that our current ability to observe an AI model's reasoning, via step-by-step internal monologues written in human language, could soon vanish (see video explanation).
- With the emergence of Amazon Kiro and Reflection AI’s Asimov agent, this State of AI Code Generation Survey report by Stacklok is worth checking out.