Introduction
The first societal harms of language models did not involve bioattacks, chemical weapons development, autonomous cyberattacks, or any of the other exotic risks that AI safety researchers tend to focus on. Instead, the first harm of generalist artificial intelligence was decidedly more familiar, though no less tragic: teenage suicide. Very few incidents provoke public outcry as readily as harm to children (rightly so), especially when the harm is perceived (rightly or wrongly) to be caused by large corporations chasing profit.
It is therefore no surprise that child safety is one of the most active areas of AI policymaking in the United States. Last year saw dozens of AI child safety bills introduced in state legislatures, and this year will likely see well over one hundred. In broad strokes, this is sensible: like all information technologies, AI is a cognitive tool, and children's minds are more vulnerable than those of adults. The earliest regulations of the internet, too, were largely passed with the safety of children in mind.
Despite the focus on this issue by policymakers (or perhaps because of it), there is also a great deal of confusion. In recent months, I have seen friends and colleagues make overbroad statements like, "AI is harmful for children," or "chatbots are causing a major decline in child mental health." And of course, there are political actors who recognize this confusion, as well as the emotional salience of the topic, and seek to exploit both for their own ends. Some of those actors are merely self-interested; others understand themselves to be fighting a broader war against AI and associated technologies, and see the child safety issue as a useful entry point for their general point of view.
