"Never Mistake a Failure of Imagination" — A Tribute to Daniel Dennett
On losing a philosopher who taught me how to think about thinking
I learned that Daniel Dennett died on April 19, 2024, embarrassingly late. Months late. The kind of late that makes you wonder how you missed the departure of someone whose ideas have been rattling around in your head for the better part of three decades.
But maybe that’s fitting. Dennett never cared much for ceremony. He cared about getting things right — or at least getting them less wrong, which he’d argue is the only kind of right we ever get.
So here’s my long overdue tribute. Not an obituary. Not a summary of his CV. Just one person’s account of what it’s like to carry a philosopher’s ideas around in your skull while you build the kinds of machines that philosopher spent his life trying to understand.
The Book That Rewired Me
I was a physics student at Ohio University when I picked up Consciousness Explained. I was well into what would have been a philosophy degree — I’d nearly completed the BA alongside my physics work — and I was deep in the questions that pull you in at that age: What is the mind? What is real? What are we?
Physics gave me tools. Philosophy gave me questions. Dennett gave me something rarer: permission to take the questions seriously without mystifying them.
The title alone was an act of intellectual audacity. Consciousness Explained. Not “Consciousness Explored” or “Toward a Theory of Consciousness” — explained. You either loved that confidence or hated it. I loved it. Not because I thought he’d actually wrapped it all up in 511 pages, but because he refused to treat consciousness as something forever beyond our reach. He refused the comfortable shrug of “it’s a mystery.”
His central move was demolishing the Cartesian Theater — that stubborn intuition that somewhere inside your head, there’s a screen where experience plays out for an audience of one. Dennett said no. There’s no inner screen, no central observer, no place where “it all comes together.” Instead, there are Multiple Drafts — parallel streams of processing, competing for influence, with consciousness emerging as something like “fame in the brain.” The contents that win the competition for influence get to be what it’s like to be you at that moment.
It was the first time I’d encountered a theory of consciousness that felt like it could actually be true — not because it was comforting, but because it was mechanistic without being reductive in a cheap way. It took the mystery seriously by trying to dissolve it rather than worship it.
The Quote I Carry
Of everything Dennett wrote, one line lodged itself permanently in my thinking:
“Never mistake a failure of imagination for an insight into necessity.”
I’ve leaned on this sentence more times than I can count — in physics seminars, in product meetings, in architecture reviews, in arguments about what AI can and cannot do. It’s a scalpel that cuts through a particular kind of intellectual laziness: the tendency to declare something impossible simply because you can’t picture how it would work.
“Machines will never understand language.” Really? Or can you just not imagine how they would?
“Consciousness requires biological neurons.” Does it? Or is that a failure of your imagination masquerading as a law of nature?
“You can’t build a system that reasons about its own reasoning.” Can’t? Or haven’t yet?
The quote doesn’t say everything is possible. It says: check your work. When you hit a wall of “that can’t be done,” ask whether you’ve discovered a genuine constraint or merely reached the edge of your own creativity. Nine times out of ten, it’s the latter.
For someone who’s spent twenty years in AI — from enterprise search to IBM Watson to the current explosion of large language models — this distinction is everything. The history of AI is littered with premature impossibility claims. Dennett’s razor keeps you honest.
Darwin’s Universal Acid
If Consciousness Explained rewired how I think about minds, Darwin’s Dangerous Idea rewired how I think about everything else.
Dennett’s central metaphor was natural selection as a “universal acid” — an idea so powerful it eats through every container you try to put it in. Evolution by natural selection isn’t just a theory about biology. It’s an algorithmic process that applies wherever you have variation, selection, and retention. It explains the design we see in the living world without invoking a designer, and it provides a template for understanding how complex, purposeful-seeming things can emerge from simple, purposeless processes.
This idea haunts AI in ways most practitioners don’t think about. When a large language model produces a response that seems insightful, seems to understand — what’s happening? There’s no ghost in the machine. There’s no homunculus reading the prompt and crafting a reply. There are statistical patterns, learned from billions of examples, selected through training for outputs that score well on human evaluations. Variation, selection, retention. It’s not natural selection exactly, but it rhymes.
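The pattern is abstract enough to fit in a few lines. Here is a deliberately toy sketch, in the spirit of Dawkins’s famous “weasel” demonstration that Dennett discusses in Darwin’s Dangerous Idea: candidates vary, a stand-in evaluator applies selection, and the winner is retained for the next round. The names and numbers are illustrative, not anything a real training run does.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate: str, target: str) -> int:
    """Stand-in selection pressure: count characters matching the target."""
    return sum(a == b for a, b in zip(candidate, target))

def mutate(candidate: str) -> str:
    """Variation: copy the candidate with one character changed at random."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(target: str = "methinks it is like a weasel",
           population_size: int = 50,
           generations: int = 500) -> str:
    # Start from random noise the same length as the target.
    population = ["".join(random.choice(ALPHABET) for _ in target)
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: pick the best-scoring candidate in this generation.
        best = max(population, key=lambda c: score(c, target))
        if best == target:
            break
        # Retention + variation: keep the winner, breed mutated copies of it.
        population = [best] + [mutate(best) for _ in range(population_size - 1)]
    return best

if __name__ == "__main__":
    print(evolve())
```

No line of that program knows where it is going, yet the loop reliably arrives somewhere that looks designed. That is the whole trick.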
Dennett would have recognized the pattern instantly. He spent his career arguing that you don’t need a Central Meaner for meaning to emerge, that you don’t need a soul for a system to behave as if it has purposes. The intentional stance — his idea that we can usefully predict a system’s behavior by attributing beliefs and desires to it — was designed for exactly this moment. When I talk to an LLM, I adopt the intentional stance. I treat it as if it has beliefs, intentions, understanding. And it works. The predictions are useful. The interaction is productive.
But does it really understand? Dennett would tell you that’s the wrong question — or at least that it needs to be asked more carefully than most people ask it. Understanding isn’t a binary. It’s not a light that’s either on or off. It’s a spectrum of competences, and the question isn’t whether the system “truly” understands but what it can do.
The Constellation: Dennett, Hofstadter, Smullyan
I can’t write about Dennett without writing about the two thinkers I always shelve next to him in my mind: Douglas Hofstadter and Raymond Smullyan.
These three form a constellation for me — different stars, but part of the same pattern.
Hofstadter gave me strange loops. Gödel, Escher, Bach is the book that taught me self-reference isn’t a parlor trick — it’s the architecture of mind itself. Consciousness, in Hofstadter’s telling, is what happens when a system becomes complex enough to model itself, when the symbol-processing loop turns back on the symbol-processor. A strange loop. An “I” emerging from the machinery that has no “I” in any of its parts.
Where Dennett dissolves the mystery by showing you the mechanism, Hofstadter embraces the mystery by showing you that the mechanism is the mystery. They’re complementary, not contradictory. Dennett says there’s no Cartesian Theater. Hofstadter says yes, but the illusion of a theater is itself the show, and it’s magnificent.
Smullyan is the wild card. A logician, magician, Taoist, and puzzle-maker, he approaches the same questions from an entirely different angle. The Tao is Silent is nothing like Consciousness Explained — it’s playful, paradoxical, full of koans and gentle contradictions. But it lands in the same territory. Smullyan’s point is that the deepest truths resist direct statement. You have to sneak up on them sideways, through puzzles and jokes and stories that trick you into seeing what you were looking at all along.
Together, they gave me a framework that no single one of them could have provided alone:
Dennett: Consciousness is a natural phenomenon. Explain it.
Hofstadter: Consciousness is a strange loop. Marvel at it.
Smullyan: Consciousness is. Shh.
I think about this trinity constantly in my work. When I’m building a RAG system — retrieval-augmented generation, the architecture that connects language models to real knowledge bases — I’m operating in Dennett’s world. Mechanisms. Pipelines. Measurable performance. But when I step back and think about what’s actually happening when a system retrieves a passage, synthesizes it with a question, and produces an answer that a human finds meaningful… that’s when Hofstadter’s strange loops start whispering. And when I sit with the fundamental weirdness of it all — that we’re building machines that process meaning, and we’re not entirely sure what meaning is — that’s Smullyan’s silence.
What Would Dennett Think of LLMs?
I’ve asked myself this question a hundred times since ChatGPT arrived, and I think the answer is: he’d be delighted and cautious in equal measure.
Delighted, because LLMs are the most spectacular demonstration of his philosophical program that has ever existed. Here are systems with no consciousness (probably), no inner experience (almost certainly), no Cartesian Theater (definitely) — and yet they produce behavior that is so sophisticated, so flexible, so language-like, that millions of people instinctively attribute understanding to them. The intentional stance isn’t just useful for predicting their behavior; for most users, it’s irresistible.
This vindicates decades of Dennett’s work. He always argued that the line between “real” understanding and “mere” simulation of understanding is blurrier than the philosophers of mind want to admit. LLMs don’t settle that debate, but they make it impossible to ignore.
Cautious, because Dennett was nobody’s fool. He knew the difference between behavioral sophistication and genuine competence. He coined the term “competence without comprehension” to describe the way natural selection builds organisms that behave as if they understand their environment without any inner understanding at all. LLMs might be the purest example of competence without comprehension we’ve ever built. They can write poetry, debug code, explain quantum mechanics — but whether there’s any there there is exactly the question Dennett spent his life sharpening.
He’d also warn us — as he always did — against the “Surely” move. Surely these systems understand. Surely they don’t. Whenever you hear yourself say “surely,” Dennett taught, stop. That’s where the hard work begins.
From Physics to AI: A Strange Loop of My Own
There’s something loopy — in the Hofstadterian sense — about my own trajectory. I started as a physics student drawn to philosophy, fascinated by the question of what minds are and how they work. I ended up building AI systems for a living. Now I spend my days constructing architectures that process language and retrieve meaning, and my evenings wondering what “meaning” means.
When I was at IBM working on Watson, the philosophical questions weren’t academic. We were building a system to answer questions on Jeopardy!, and later to assist doctors in diagnosing cancer. The engineers talked about NLP pipelines and confidence scores. I kept thinking about Dennett. About what it means for a machine to “understand” a question. About whether understanding is the kind of thing that comes in degrees.
I’m still thinking about it. The RAG system I’m building now — inspired by the positional index architecture I worked with at IBM Watson Explorer — is a machine for connecting questions to answers through layers of retrieval, ranking, and generation. It’s beautiful engineering. It’s also a philosophical puzzle wrapped in a technical one. When the system finds the right passage, synthesizes it, and produces an answer that helps someone — has understanding occurred? Where? In the machine? In the user? In the space between them?
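For readers who have never looked inside one of these systems, here is a minimal sketch of the shape of the pipeline. This is not the system I’m building, and it elides everything interesting (the positional index, the ranking model, the actual LLM call); the names and the word-overlap scoring are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str
    text: str

def retrieve(question: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
    """Toy retrieval: rank passages by word overlap with the question.
    A real system would query a positional or vector index here."""
    q_terms = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_terms & set(p.text.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str, passages: list[Passage]) -> str:
    """Synthesis: stitch the retrieved passages into a grounded prompt."""
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    return f"Answer using only the context below.\n\n{context}\n\nQuestion: {question}"

def answer(question: str, corpus: list[Passage]) -> str:
    passages = retrieve(question, corpus)
    prompt = build_prompt(question, passages)
    # Placeholder for the generation step; a real pipeline would call an LLM here.
    return prompt

if __name__ == "__main__":
    corpus = [
        Passage("dennett-1991", "Dennett argued there is no Cartesian Theater."),
        Passage("hofstadter-1979", "A strange loop arises when a system models itself."),
    ]
    print(answer("What did Dennett say about the Cartesian Theater?", corpus))
```

Nothing in that sketch tells you where, if anywhere, the understanding lives.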
Dennett would love that question. He’d love it because it doesn’t have a clean answer, and he never trusted clean answers. He trusted careful thinking about hard problems, which is the only kind of progress he believed in.
The Long Goodbye
Daniel Dennett died on April 19, 2024, at the age of 82. He left behind a body of work that changed how we think about thinking, a generation of students who learned that philosophy could be rigorous and adventurous at the same time, and at least one AI architect in Bloomsburg, PA who still hears his voice every time someone says “that’s impossible.”
I never met him. I wish I had. I’d like to think we’d have had a good conversation — about Watson, about LLMs, about consciousness, about the strange loop of building minds while having one. I think he would have been curious. I think he would have asked hard questions. I think he would have listened.
But mostly, I think he would have reminded me of what he always reminded everyone: don’t stop too soon. Don’t declare mystery where there’s merely difficulty. Don’t mistake the limits of your imagination for the limits of reality.
The world is stranger and more explicable than we think. Dennett spent his life showing us both halves of that truth.
I’m going to keep building. And I’m going to keep thinking. That seems like the right tribute.
Mike Pointer is an AI Architect and Sales Leader based in Bloomsburg, PA. He holds an MS in Physics from the University of Pittsburgh and a BS in Physics from Ohio University’s Honors Tutorial College. He has spent 20+ years working in AI and enterprise search, including roles at IBM Watson, Deloitte, and Talkdesk. He is still trying to figure out what consciousness is. You can find him at mrpointer.substack.com.
