Dennett's Final Warning on AI
The last interview he gave before his death in April 2024 wasn't about superintelligence. It was about something more fragile, and more immediate.
Daniel Dennett died on April 19, 2024, at 82. Eight days before his death, he gave what would become his final interview. It appeared not in a tech outlet but in The Spectator, and it wasn't covered extensively in the AI discourse. But if you want to know what Dennett thought about artificial intelligence in his final weeks, not what people assume he thought but what he actually said, this interview matters.
The title was direct: “AI could signal the end of human civilisation.”
But the threat he described wasn’t Skynet. It wasn’t superintelligence. It wasn’t even the philosophical question of machine consciousness that had occupied much of his career. It was something simpler and more dangerous: the weaponization of systems that are good at lying.
What he actually said
From his home in Maine, Dennett told his interviewer:
“If we turn this wonderful technology we have for knowledge into a weapon for disinformation, we are in deep trouble. Because we won’t know what we know, and we won’t know who to trust, and we won’t know whether we’re informed or misinformed. We may become either paranoid and hyper-sceptical, or just apathetic and unmoved. Both of those are very dangerous avenues. And they’re upon us.”
This wasn’t speculation. This wasn’t abstract philosophy. He was describing something he saw happening in real time: the erosion of epistemic stability.
Notice what he’s not worried about. He’s not saying AI will turn against us. He’s not saying machines will achieve consciousness and decide to eliminate humanity. He’s not saying the alignment problem is unsolved.
He’s saying: if you can make people unable to trust their own information, unable to know what’s real, unable to figure out who to believe, you don’t need superintelligence. You just need competence at scale.
The competence problem
Dennett spent his career articulating a simple but profound insight: you don’t need comprehension to be competent. A thermostat is competent at maintaining temperature without understanding anything about physics. A wasp is competent at finding prey without understanding the concept of prey. Evolution produces staggering competence from systems with zero comprehension.
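To see how little machinery competence requires, here is a minimal sketch of the thermostat case, in Python. Everything in it (the Thermostat class, the hysteresis band, the toy room dynamics) is an illustrative assumption for this sketch, not anything from Dennett's writing:

```python
# Competence without comprehension, in miniature: a bang-bang thermostat.
# The class name, hysteresis band, and toy room model are illustrative
# assumptions for this sketch, not anything from Dennett's writing.

class Thermostat:
    """Holds temperature near a setpoint with no model of physics."""

    def __init__(self, setpoint: float, hysteresis: float = 0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis
        self.heating = False

    def step(self, temperature: float) -> bool:
        # Two comparisons decide everything. No concept of heat,
        # rooms, or comfort is represented anywhere.
        if temperature < self.setpoint - self.hysteresis:
            self.heating = True
        elif temperature > self.setpoint + self.hysteresis:
            self.heating = False
        return self.heating

# Crude simulation: the room warms 0.3 degrees per minute with the
# heater on and cools 0.2 degrees per minute with it off.
thermostat = Thermostat(setpoint=20.0)
temp = 15.0
for minute in range(60):
    temp += 0.3 if thermostat.step(temp) else -0.2
print(f"after an hour: {temp:.1f} degrees")
```

The controller keeps the room near its setpoint with two comparisons. Nothing in it represents heat, rooms, or comfort; the competence is real and the comprehension is zero.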
This insight, what he called “competence without comprehension,” is his most durable contribution to the philosophy of mind. It cuts through mysticism. You don’t need consciousness to be dangerous. You don’t need understanding to be persuasive.
An AI system that generates convincing text doesn’t need to understand truth to output falsehoods at scale. It doesn’t need to know what disinformation is to be a disinformation engine. It just needs to be competent at producing text that looks true, sounds authoritative, spreads faster than corrections.
The counterfeit metaphor Dennett used in earlier writing (most pointedly in his 2023 Atlantic essay "The Problem With Counterfeit People") comes into focus here: LLMs as counterfeit people, systems that fake understanding. That framing takes on a darker meaning when you apply it to disinformation. Counterfeit understanding doesn't just mislead individuals. It degrades the epistemic commons. It makes the shared reality that democracy and science depend on harder to construct and maintain.
Why this matters more than you think
We live in a moment where AI systems' ability to generate convincing text, images, video, and audio is improving on a timescale of months. We haven't yet figured out how to verify what's real at scale. Our institutions for truth-finding (journalism, science, law) are already strained. And now we're adding to the system the ability to generate false evidence at arbitrary scale, with no human in the loop.
The classic AI safety problem asks: what if the AI is too smart, too capable, too aligned with goals we didn’t intend? Dennett was pointing at a different problem. What if the AI is just smart enough to fool us, just capable enough to erode trust, just aligned with the incentives of whoever deploys it to spread doubt?
This is not a superintelligence problem. It’s a “smart-enough-to-be-dangerous” problem. And we’re already there.
What he thought we needed
In that final interview, Dennett returned to a theme that had run through his entire career: the value of philosophical thinking. He pushed back against the idea that philosophy was becoming obsolete, replaced by science and technical expertise.
“Scientists have a tendency to get down in the trenches and commit themselves to a particular school of thought,” he said. “They’re caught in the trenches so a bird’s eye view can be very useful to them. Philosophers are good at bird’s eye views.”
He was making a claim about division of labor. Scientists get deep in the details. Philosophers step back and ask: what does this mean? What are we assuming? What could go wrong that we’re not seeing? What are the second and third-order effects?
That’s the kind of thinking we need now. Not more capability benchmarks. Not more scaling laws. Not more arguments about whether AI will take over. We need people asking: what happens to democracy when disinformation becomes indistinguishable from truth? What happens to science when you can’t trust any image or audio? What happens to human cognition when every other claim you encounter is AI-generated noise?
These aren’t technical questions. They’re philosophical questions. They’re exactly the kind of questions Dennett was saying we should be asking.
Where we are now
It’s been nearly two years since Dennett’s final interview. In that time:
Multimodal AI systems (text, image, video, audio) have become commodified and accessible
Detection methods for AI-generated content are losing the arms race with generation methods
Disinformation campaigns using AI are no longer hypothetical; they’re documented
The epistemic infrastructure we depend on (media, education, science communication) has not significantly improved its defenses
We haven’t heeded his warning. Mostly because it didn’t sound like the kind of warning people expected from an AI philosopher. He wasn’t talking about killer robots or rogue superintelligence. He was talking about something more mundane and more corrosive: the slow degradation of shared reality.
The Dennettian response
So what would Dennett say we should do about it? He wouldn’t have given us a technical solution. He wasn’t the kind of philosopher who did.
He would have insisted on clarity. Not “is AI dangerous?” but “in what specific ways, for what specific systems, with what specific consequences?”
He would have pushed back against mysticism. Not “we can’t control superintelligence” but “what are the actual capabilities and vulnerabilities of the systems we’re building?”
He would have insisted on the bird’s eye view. Not just “how do we detect AI-generated content?” but “what happens to human trust, human judgment, human institutions when the cost of generating false evidence approaches zero?”
And he would have been clear that this is not a problem that has a technical solution alone. Engineers can build better detection systems. But they can’t rebuild epistemology. They can’t restore the assumption that images show what happened, that audio reflects what was said, that published text came from a human being. Those assumptions are foundational to how we navigate reality. Once they’re gone, they’re hard to get back.
The question he left us
Dennett died before he could see how the systems he was warning about would develop. He didn't see GPT-4o's real-time voice and vision. He didn't see Sora's public release. He didn't see the first coordinated disinformation campaigns using high-fidelity AI-generated content.
But he left us the question. Not “will AI be superintelligent?” but “can we build systems that are competent enough to erode the epistemic foundations of civilization without being conscious enough to take responsibility for it?”
The answer, based on everything we’ve seen in the past two years, is yes. We can. We might already be doing it.
That’s the warning Dennett left us. Not a scare story. Not mysticism. Just a clear-eyed assessment of what happens when competence scales faster than wisdom, when we can generate falsehood at no cost, when the systems doing the generating don’t need to understand what they’re doing. They just need to be good at doing it.
He asked us to pay attention. To think philosophically about the implications. To remember that the most dangerous systems are not the most intelligent ones. They're the ones that are just smart enough to fool us, just capable enough to erode trust, just aligned with the incentives of whoever deploys them to spread doubt.
We should have listened. We should start now.
Mike Pointer is an AI Architect at Genesys and a former sales leader in conversational AI. He writes about consciousness, enterprise AI, and the philosophy of technology at Strange Loops.
