The AM Podcast explores techniques for building AI models that collaborate with people and augment human intelligence. Hosted by Yijia Shao, Shannon Shen, and Michael Ryan, this opening episode introduces the motivations behind the show and the questions that will shape future conversations.
In this episode, we discuss why human-centered AI matters, share a few hot takes on automation versus augmentation, and preview key technical challenges around human variation, annotation and data collection, and learning from noisy feedback.
Key Takeaways
- Why This Podcast Exists: The show focuses on the technical side of human-centered AI rather than AI in the abstract.
- Augmentation over Automation: Building systems that work with people requires thinking beyond simply replacing human labor.
- Human Problems Remain Central: Even with stronger models, defining goals, rewards, and collaboration patterns is fundamentally a human problem.
- Open Technical Challenges: Human variation, data collection, and noisy real-world signals make human-centered AI a uniquely hard systems problem.
Timestamps
- 0:00 — Prelude: the problems we care about
- 1:48 — Host introduction
- 2:03 — Why we started the AM Podcast
- 2:31 — Hot takes on human-centered AI
- 2:45 — Hot take #1: learning on outcome rewards over long horizons
- 3:00 — The Bitter Lesson
- 3:53 — Defining rewards is a human problem
- 4:50 — Empathetic AI
- 5:48 — Hot take #2: augmentation vs. automation
- 6:09 — Creative destruction
- 7:21 — Task vs. goal
- 10:45 — Format of our podcast
- 11:28 — Unique technical challenges in human-centered AI
- 11:43 — Example challenge #1: human variation
- 13:58 — Example challenge #2: annotation and data collection
- 15:10 — Example challenge #3: making sense of noisy data
- 16:45 — Let the journey begin!
External Clips Referenced
Subscribe and follow us for more episodes!