EP02 · Building AI Systems for Imperfect Humans

Sherry Wu is a professor at CMU whose research sits at the intersection of human-computer interaction and natural language processing. Her work asks how to make AI systems more useful for imperfect humans while also helping humans become more capable in the presence of AI.

In this episode, we explore what it means to build AI systems for imperfect users, how models and scaffolding interact, how AI literacy and CS education are changing, and why more human-centered alignment may require better tools than reward models alone.

Key Takeaways

  • Design for Imperfect Humans: Human-centered AI must work for real users with incomplete knowledge, messy workflows, and changing needs.
  • Models vs. Scaffolding: Better outcomes often come from how systems are structured and presented, not just from raw model quality.
  • Education Is Shifting: AI tools are reshaping both AI literacy and the way computer science is taught.
  • Alignment Needs Better Interfaces: Human-centered alignment may depend on richer evaluation and interaction tools, including checklists and interdisciplinary methods.

Timestamps

  • 0:00 — Teaser
  • 1:13 — Prelude: Introducing Sherry Wu
  • 2:30 — How the AI field has changed in the last four years
  • 4:22 — Making AI systems work for imperfect humans
  • 6:54 — Models vs. scaffolding
  • 10:36 — Understanding human imperfection in teaching contexts
  • 19:28 — AI literacy skills
  • 22:04 — How AI is changing CS education
  • 25:38 — If we have AGI, what does it mean to be human?
  • 29:14 — Training models to be more human-centered
  • 31:46 — Why checklists can beat reward models
  • 36:56 — Challenges in aligning models
  • 43:22 — Advice for interdisciplinary research
  • 45:37 — Reflections on her own research

Subscribe and follow us for more episodes!