Latent Space Podcast: Why AI Researchers Actually Open Up Here


Most AI podcasts tell you what just shipped. Latent Space tells you why it matters — and more importantly, why the people building it made the decisions they did. Hosted by Swyx (Shawn Wang) and Alessio Fanelli, it’s become something close to required listening for anyone who wants to understand not just the what of the AI moment, but the architecture, the tradeoffs, and the research bets underneath it. In a landscape full of recaps and hot takes, Latent Space consistently goes deeper. If you’ve ever wanted to be a fly on the wall while two technically sharp people interrogate the researchers actually shipping frontier work, this is it.

What Latent Space Actually Is (And Who It’s For)

Latent Space launched in early 2023 and quickly built a reputation as the podcast where AI researchers and engineers will say things they won’t say in a company blog post. Swyx brings a practitioner-turned-analyst perspective — he’s deeply embedded in the developer community through his work at Smol AI and his writing on “learning in public.” Alessio Fanelli is a partner at Decibel Partners, which means he’s evaluating AI companies for investment while also interviewing the people building the underlying infrastructure. That combination matters: you get research depth plus commercial skepticism in the same conversation.

The audience skews technical. Episodes regularly go two-plus hours and don’t shy away from discussing things like KV cache optimization, mixture-of-experts routing, or why a particular RLHF design choice creates specific failure modes. But the hosts are good enough that a non-engineer CEO paying close attention will still walk away with something concrete. The real sweet spot is ML engineers, AI product managers, and anyone building with these tools professionally — people who need more signal than a news summary but don’t have time to read every paper.

It’s also worth noting the show has a companion newsletter and community (the Latent Space Discord has thousands of members actively discussing episodes). This is less a podcast and more an ongoing research salon that happens to be recorded.

The Conversations That Shaped How the Field Thinks

A few episodes stand out as genuinely formative — not just good interviews, but conversations that surfaced ideas before they became mainstream discourse.

The episode with George Hotz (founder of comma.ai and creator of tinygrad) is a good example of what makes the show distinctive. Hotz is characteristically blunt, and the hosts didn’t pull him back toward safer ground. The conversation covered his skepticism about transformer scaling limits and his view that most AI companies are burning money on compute without sufficiently questioning architectural assumptions. Whether you agree or not, hearing that argument made clearly and challenged in real time is more useful than reading a polished essay version of it.

Their coverage of the Code Interpreter / tools era of GPT-4 was ahead of most outlets in treating it as an architectural moment rather than a feature release. Swyx has written and spoken about the idea that we’re moving from “models” to “systems” — that the interesting unit of analysis is no longer the weights but the scaffolding, memory, and tool access around them. That framing has held up well.

Episodes featuring researchers from EleutherAI, Mistral, and Together AI have been particularly valuable for anyone trying to understand the open-weights ecosystem — the motivations, the real capability gaps versus closed models, and what “open” actually means in practice when training compute is still highly concentrated.

More recently, their coverage of inference-time compute — the idea that letting models “think longer” at inference time can substitute for simply training bigger models — has tracked closely with what OpenAI shipped in the o1 and o3 series. They were discussing the research underpinnings of this before most people understood why it was a meaningful shift.

The Recurring Themes Worth Paying Attention To

Latent Space doesn’t just interview whoever is making news that week. Over time, a set of recurring intellectual threads has emerged that is worth tracking if you listen regularly.

The “AI Engineer” as a distinct role

Swyx has been one of the more consistent voices arguing that a new professional category is emerging — not a researcher, not a traditional software engineer, but someone who builds products on top of frontier models and understands enough about how they work to make good architectural decisions. The show has effectively become a curriculum for this role. If you listen to 20 episodes, you’ll have a working understanding of RAG, fine-tuning tradeoffs, agent design patterns, evals, and why each matters in production. If you want to go deeper on how researchers explain these concepts from first principles, AI Explained is a YouTube channel worth pairing with the podcast — it decodes many of the same papers the show references.

Evals as the unsolved problem

Multiple episodes have circled back to the fact that we don’t have good ways to measure whether AI systems are actually getting better at the things that matter. Benchmark saturation is a real problem — models get optimized for benchmarks until the benchmarks stop measuring real capability. The show has hosted serious conversations about what better evaluation infrastructure would look like, and this has only become more relevant as models have gotten harder to distinguish on surface-level tasks.

The infrastructure layer as the real battleground

Alessio’s investor lens shows up here. Latent Space pays more attention than most podcasts to the companies building the picks-and-shovels layer: inference providers like Together AI, Fireworks, and Groq, vector database companies, observability tools, and the emerging category of “AI dev tools” broadly. The argument — which has largely been vindicated — is that as models commoditize, the infrastructure and tooling layer captures significant value.

Open vs. closed as a values question, not just a capability one

The show has had thoughtful guests on both sides of this. The open-weights movement (Meta’s Llama series, Mistral, the Falcon models) isn’t just about capability access — it’s about who controls the development trajectory of AI and what happens when safety and capability tradeoffs get made behind closed doors. Latent Space has taken this seriously as a question rather than treating it as obvious that either side is right.

How Latent Space Compares to Other AI Podcasts Worth Your Time

| Podcast | Best For | Technical Depth | Guest Quality | Frequency |
| --- | --- | --- | --- | --- |
| Latent Space | AI engineers, builders, investors | High | Frontier researchers, founders | Weekly-ish |
| Lex Fridman Podcast | Long-form big-picture thinking | Medium-High (varies) | High profile, broad range | Irregular |
| The TWIML AI Podcast | ML practitioners, paper readers | High | Strong academic/research guests | Weekly |
| No Priors | AI startup founders, investors | Medium | Strong founder/ | |

Ty Sutherland

Ty Sutherland is the Chief Editor of AI Rising Trends. Living in what he believes to be the most transformative era in history, Ty is deeply captivated by the boundless potential of emerging technologies like the metaverse and artificial intelligence. He envisions a future where these innovations seamlessly enhance every facet of human existence. With a fervent desire to champion the adoption of AI for humanity's collective betterment, Ty emphasizes the urgency of integrating AI into our professional and personal spheres, cautioning against the risk of obsolescence for those who lag behind. AI Rising Trends stands as a testament to his mission, dedicated to spotlighting the latest in AI advancements and offering guidance on harnessing these tools to elevate one's life.
