EP01 · Bridging Human-AI Grounding Gaps

Omar Shaikh is a Stanford PhD student and HCI and NLP researcher, and author of the award-winning UIST 2025 paper "Creating General User Models from Computer Use." His work focuses on closing the human-AI grounding gap by helping systems build a richer model of the people they assist.

In this episode, we explore better context for AI, the promise of General User Models (GUMs), calibration and confidence in user modeling, mixed-initiative interactions, and the design and privacy challenges of building systems that understand people more deeply.

Key Takeaways

  • The Grounding Gap Matters: AI systems often fail not because they are weak, but because they lack the right context about the user.
  • General User Models: GUMs aim to infer user goals, preferences, and state from computer-use behavior.
  • Calibration and Confidence: Reliable user modeling requires not just predictions, but well-calibrated uncertainty estimates.
  • Mixed-Initiative and Privacy: More proactive AI systems can be powerful, but only if they respect privacy, data ownership, and user control.

Timestamps

  • 0:00 — Teaser
  • 1:21 — Prelude: Introducing Omar Shaikh
  • 2:07 — Monologue: Better Context for AI
  • 4:22 — Bridging the Human-AI Grounding Gap
  • 6:14 — Confidence Scores in General User Models (GUMs)
  • 7:32 — Calibration of General User Models
  • 13:20 — Uses of General User Models
  • 15:01 — Mixed-Initiative Interactions
  • 22:10 — Motivation for GUM
  • 25:31 — Tabracadabra: Tab Everywhere!
  • 27:01 — Design decisions in GUM
  • 28:26 — Designing Interactive Experiences
  • 32:11 — DITTO
  • 33:06 — Working on Domains Without Existing Benchmarks
  • 34:45 — Challenges of the GUM project
  • 37:26 — Privacy and data ownership
  • 38:57 — Finetuning a user model
  • 44:09 — Mindblowing GUM inferences
  • 49:02 — Social problems of GUMs
  • 50:27 — GUM as a reflection tool

Subscribe and follow us for more episodes!