Timothy B. Lee

From Is the AI “Revolution” Overblown? | Robert Wright & Timothy B. Lee

Summary

Timothy B. Lee writes the newsletter Understanding AI and cohosts the AI Summer podcast.

Tim has written about technology, economics, and public policy for more than a decade. Before launching Understanding AI, he wrote for the Washington Post, Vox, and Ars Technica and holds a master’s degree in computer science from Princeton.

OnAir Post: Timothy B. Lee

News

After 50 million miles, Waymos crash a lot less than human drivers
Understanding AI, Timothy B. Lee, March 27, 2025

But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.
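A quick back-of-the-envelope check of the "roughly 70 lifetimes" figure, sketched in Python. The per-lifetime assumptions here, about 13,500 miles a year over a 50-year driving career, are mine for illustration and are not numbers taken from the article.

miles_per_year = 13_500        # assumed average annual mileage (illustrative)
driving_years = 50             # assumed length of a driving career (illustrative)
lifetime_miles = miles_per_year * driving_years   # 675,000 miles per lifetime

waymo_miles = 50_000_000
print(waymo_miles / lifetime_miles)   # roughly 74 lifetimes of driving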

Federal regulations require Waymo to report all significant crashes, whether or not the Waymo vehicle was at fault—indeed, whether or not the Waymo is even moving at the time of the crash. I’ve spent the last few days poring over Waymo’s crash reports from the last nine months. Let’s dig in.

These experts were stunned by OpenAI Deep Research
Understanding AI, Timothy B. Lee, February 24, 2025

Earlier this month, OpenAI released a new product called Deep Research. Based on a variant of the (still unreleased) o3 reasoning model, Deep Research can think for even longer than conventional reasoning models—up to 30 minutes for the hardest questions. And crucially, it can search the web, allowing it to gather information about topics that are too new or obscure to be well covered in its training data.

The coming AI speedup
The success of Deep Research also suggests that there’s a lot of room to improve AI models using “self play.” The big insight of o1 was that allowing a model to “think” for longer leads to better answers. OpenAI’s Deep Research demonstrates that this is true for a wide range of fields beyond math and computer programming.

And this suggests there’s a lot of room for these models to “teach themselves” to get better at a wide range of cognitive tasks. A company like OpenAI or Google can generate training data by having a model “think about” a question for a long time. Once it has the right answer, it can use the answer—and the associated thinking tokens—to train the next generation of reasoning models.
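To make that loop concrete, here is a minimal, self-contained sketch in Python. It only illustrates the general recipe described above: generate long reasoning traces, keep the ones whose final answers can be verified, and reuse them as training data. The ToyModel class and every function name are hypothetical stand-ins, not OpenAI's or Google's actual pipeline.

import random

class ToyModel:
    """Stand-in for a reasoning model: on this toy arithmetic task,
    a larger thinking budget raises the chance of a correct answer."""
    def solve(self, question, thinking_budget):
        a, b = question
        reasoning = [f"step {i}: consider {a} + {b}" for i in range(thinking_budget)]
        got_it_right = random.random() < min(0.2 + 0.1 * thinking_budget, 0.95)
        answer = a + b if got_it_right else a + b + 1
        return reasoning, answer

def build_self_play_data(model, questions, thinking_budget=8):
    """Keep only the traces whose final answer can be verified as correct;
    those question/reasoning/answer triples become training data."""
    data = []
    for a, b in questions:
        reasoning, answer = model.solve((a, b), thinking_budget)
        if answer == a + b:  # automatic verification step
            data.append({"question": (a, b), "reasoning": reasoning, "answer": answer})
    return data

if __name__ == "__main__":
    questions = [(random.randint(0, 9), random.randint(0, 9)) for _ in range(100)]
    data = build_self_play_data(ToyModel(), questions)
    print(f"{len(data)} verified reasoning traces ready to train the next model")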

Explaining actual AI products is the core of Timothy B. Lee’s excellent Substack newsletter Understanding AI. Lee, a journalist (and occasional Reason contributor), refreshingly covers AI like a normal newsworthy subject. His articles include a nice range of original reporting on the companies and nonprofits producing AI, service journalism on how ChatGPT compares to Gemini, even-handed analysis of the legal and regulatory questions AI has inevitably provoked, and explainer articles on what a large language model even is.

The biggest takeaway is that, for all the boosterism and doomerism, AI will be a normal-ish technology with normal-ish impacts on the world. One of Lee’s best entries is a deep dive into how AI has affected one of the industries where it already predominates: language translation. It turns out that prices for translation have fallen and companies consume more translation services. That’s an unambiguous win for consumers.

About

Source: Website

I’m the co-founder of Full Stack Economics, where I write about macroeconomics, housing, labor markets, and technology.

I was born and raised in Minnesota and graduated from the University of Minnesota. I spent time as a staff writer at the Cato Institute in Washington DC and also spent some time in St. Louis.

I then went to graduate school, studying computer science under Ed Felten. While there, I was a co-creator of RECAP, a software project that helps users liberate documents from PACER, the federal judiciary’s paywalled website for public records. I earned a master’s degree in computer science in 2010.

After grad school I went to work at Ars full-time. I then spent time at the Washington Post and Vox before returning to Ars for a third time in 2017. In 2021, I quit Ars to start Full Stack Economics.


Videos

Is the AI “Revolution” Overblown? | Robert Wright & Timothy B. Lee

January 14, 2025 (43:00)
By: Nonzero

0:00 Tim’s got a new podcast, and NonZero is hiring

3:07 Is large language model progress reaching a plateau?

14:56 Some shortcomings of AI today

25:18 Human cognition compared to AI

33:03 The impressive progress of “multimodal” AI

41:09 Heading to Overtime

Tim Lee on the Present and Future of AI and its Implications for Policy

March 7, 2024 (55:00)
By: Mercatus Center

Tim Lee is an independent journalist who formerly worked for the Washington Post, Vox, and Ars Technica, where he covered tech policy, blockchain issues, the future of transportation, and the economy. Tim currently produces the newsletter Understanding AI and is a returning guest on Macro Musings. He rejoins the podcast to talk about AI, automation, and their implications for the macroeconomy and policy. Specifically, David and Tim also discuss the singularism vs. physicalism debate, the possible threats posed by AI, how the regulatory landscape will be affected by AI, and a lot more.

Discuss

OnAir membership is required to participate. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
