25 AI Thinkers Worth Following — and What Each One Is Actually Right About


Most AI content online is either breathless hype or shallow summaries of press releases. If you’ve been trying to actually understand what’s happening with AI — not just consume headlines — you’ve probably noticed that the signal-to-noise ratio is terrible. The people who genuinely understand this technology aren’t always the ones with the biggest megaphones.

This guide is about finding the signal. There’s a relatively small group of researchers, builders, writers, and thinkers whose work consistently helps you understand AI more clearly — where it actually is, where it’s likely going, and what it means for how you work and live. Some of them are famous. Some aren’t. All of them are worth your time.

We’ve organized this by what you’ll get from following each person, not just who they are. Because the goal isn’t to build a list of impressive names — it’s to actually get smarter about AI.


The Researchers Who Actually Build Things

There’s a meaningful difference between people who theorize about AI and people who have spent years building the systems that actually exist today. The researchers in this section have direct, hands-on experience with the technology — their takes are grounded in something real.

Andrej Karpathy — The Best Explainer in the Field

If you follow one person from this entire list, make it Andrej Karpathy. His background is exceptional: PhD from Stanford, co-founder of OpenAI, former head of Tesla’s Autopilot AI, and now independent. But what makes him essential isn’t his résumé — it’s that he’s the best technical communicator working in AI right now.

His YouTube videos, particularly the “Neural Networks: Zero to Hero” series, take you from first principles through building GPT-level models from scratch. His 2023 talk “State of GPT” — originally delivered at Microsoft Build — remains one of the clearest explanations of how large language models actually work and what their real limitations are. He doesn’t sell you anything. He just explains things.

On X (formerly Twitter) and in his occasional long-form posts, Karpathy writes about the practical realities of working with LLMs — things like the genuine difficulty of getting models to reliably follow instructions, why “vibe coding” (his term, now widely used) is both powerful and dangerous, and where he thinks the field actually is versus where people claim it is. He’s honest about uncertainty in a way that most people with his profile aren’t.

Demis Hassabis — The Long-Game Thinker

Demis Hassabis co-founded DeepMind in 2010, sold it to Google, and has spent the last 14 years building toward what he’s openly described as artificial general intelligence. He’s not a prolific public poster, but when he speaks — in interviews, at conferences, in the occasional long piece — it’s worth paying close attention.

His framing on AlphaFold (the protein structure prediction system whose development earned Hassabis and John Jumper a share of the 2024 Nobel Prize in Chemistry) is a useful window into how he thinks: not “look what AI can do,” but “here is a specific scientific problem that was blocking progress for 50 years, and here’s how we solved it.” That problem-first orientation is different from a lot of what you’ll hear in AI discourse.

Hassabis gives long interviews — Lex Fridman, various science podcasts — that are worth treating as reading material rather than background noise. His views on the timeline to AGI are measured and technical, not promotional.

Ilya Sutskever — Quiet, But Worth Watching

Ilya Sutskever co-authored foundational deep learning papers (including the AlexNet paper with Alex Krizhevsky and Geoffrey Hinton that many consider a turning point for modern AI), co-founded OpenAI, and then left in 2024 to start Safe Superintelligence Inc. (SSI) alongside Daniel Gross and Daniel Levy. He doesn’t post frequently, but his public statements are unusually dense with information.

His departure from OpenAI — and the publicly stated mission of SSI, which is to build safe superintelligence and nothing else — reflects a particular view about where AI development is headed and what the priority should be. Whether you agree with that view or not, it’s a substantive position held by someone who has thought about these systems longer and more deeply than almost anyone. Follow him on X and read anything he publishes carefully.

The Big-Picture Thinkers Worth Taking Seriously

These are people whose primary contribution isn’t technical research but perspective — helping you understand the broader context in which AI development is happening. The best of them are honest about what they don’t know.

Sam Altman — The Optimist in Chief (With Caveats)

Sam Altman is the CEO of OpenAI and, by any measure, one of the most influential people in AI right now. His public communications — blog posts, X posts, interviews — are worth reading carefully, but with a specific lens: he is simultaneously trying to communicate genuinely about AI and leading an organization with enormous commercial and competitive interests.

His 2024 essay “The Intelligence Age” is a useful artifact. It’s genuinely optimistic about what AI could do for humanity, and some of that optimism is probably warranted. It also doesn’t dwell on risks, timelines to various harms, or the ways in which OpenAI’s commercial success shapes how AI development unfolds. Read it for the vision; probe it for what’s missing.

His interviews with Lex Fridman and his appearances at events like Davos are worth watching because you’re getting a direct window into how one of the most consequential decision-makers in this space actually thinks. That’s valuable independent of whether you agree with him.

Peter Diamandis and Salim Ismail — The Exponential Framework

Peter Diamandis (founder of XPRIZE, co-founder of Singularity University) and Salim Ismail (author of “Exponential Organizations”) aren’t AI researchers, but they’ve built a useful intellectual framework for thinking about how exponential technologies develop and diffuse through society. Their “6 Ds” framework (digitization, deception, disruption, demonetization, dematerialization, and democratization) gives you a vocabulary for where a technology sits on its exponential curve — and why progress that looks deceptively slow early on can become disruptive seemingly overnight.

Ty Sutherland

Ty Sutherland is the Chief Editor of AI Rising Trends. Convinced we are living through one of the most transformative eras in history, Ty is captivated by the potential of emerging technologies like the metaverse and artificial intelligence, and envisions a future where these innovations enhance every facet of human life. A strong advocate for adopting AI for humanity's collective benefit, he stresses the urgency of integrating AI into our professional and personal lives — and warns of the risk of obsolescence for those who lag behind. AI Rising Trends is dedicated to spotlighting the latest AI advancements and offering guidance on harnessing these tools to improve one's life.