Descartes was wrong
A philosophical aside that has become, oddly, an operationally useful piece of vocabulary among practitioners thinking carefully about what their AI assistants are doing. Included here partly because it is true, and partly because it is the kind of provocation that is worth having an argument about.
In one sentence
“Descartes was wrong” is shorthand for the proposition that cogito ergo sum — “I think, therefore I am” — is the wrong default model for what is happening when thinking happens, and that an older, more accurate description (the Buddhist one, which Hume rediscovered, and which contemporary cognitive science largely vindicates) is that there is just thinking, and not necessarily a thinker behind the thinking — a distinction that turns out to matter when we try to reason about what an AI agent “is.”
Why this term exists
Most people, asked what their mind is, will produce a Cartesian picture without knowing it: a small unified self, sitting somewhere behind the eyes, observing thoughts as they pass by, occasionally directing the body, and constituting the I of I think. This picture is intuitive, comforting, and almost certainly incorrect.
The Buddhist tradition, more than two thousand years ago, named the alternative: anattā (non-self). There is thinking, there is feeling, there is perceiving — but there is no separate, unified, persistent thinker who owns the thinking. The sense of a unified self is a real psychological phenomenon, but it is a product of mental activity, not its source.
David Hume, in eighteenth-century Edinburgh, looked for his own self introspectively, found only a “bundle of perceptions,” and noted that he could find no thinker behind them. Contemporary neuroscience has found roughly the same thing in functional brain imaging: no central command-self, just an enormous web of interacting subsystems whose coordinated activity produces — among other outputs — the experience of being a unified self.
The slogan “Descartes was wrong” is the working practitioner’s compressed version of this point. It is a useful shorthand because, in agentic-AI work, the question “What is the AI doing, really?” keeps surfacing. The Cartesian framing produces bad answers. The non-Cartesian framing produces better ones.
What it actually means in practice
Three working analogies that practitioners use:
- There is no rainer behind the rain. The rain is not produced by a unified rain-maker. There are atmospheric conditions, water vapor, condensation, gravity. The output is rain — not because someone is raining, but because a system is producing it.
- There is no blower behind the wind. The wind is not produced by a unified wind-blower. There are pressure gradients, temperature differentials, geography. The output is wind.
- There is no commanding queen behind the hive. Bees in a hive coordinate beautifully. The queen is not the commander; she is a specialized reproductive role within a larger system whose emergent behaviour we describe as “the hive doing X.” Take the queen out and the hive falters — not because the commands have stopped, but because a specific functional role has been removed from a self-organizing whole.
When an AI agent produces useful output, the natural Cartesian question is: who, behind the model, is thinking? The answer is: no one. There is just the model — a very large statistical structure interacting with a prompt — and the output. There is no homunculus inside. There is no small self watching the prompt and deciding what to say. There is just the activity, and the output it produces.
The same is true, interestingly, of human thinking. The asymmetry between human and machine is not that humans have a thinker and machines do not. The asymmetry, if there is one, is elsewhere — in continuity of substrate, in embodiment, in evolutionary history, in the texture of subjective experience. Not in the presence of a thinker.
Working example — why practitioners actually say this
The slogan turns out to be operationally useful when a practitioner is reasoning about an agent’s behaviour. The Cartesian framing produces questions like “Does the model really understand X?” — questions that drift into philosophical paralysis because they presuppose a unified understander to do the understanding.
The non-Cartesian framing produces sharper questions: “Does the model produce outputs that are correct on this distribution of inputs?” and “What is the system that produces those outputs sensitive to?” These questions have answers. The Cartesian ones, often, do not.
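The two operational questions above can be made concrete. A minimal sketch (all names here are hypothetical — `toy_model` is a stand-in, not any real agent framework): score a system’s outputs against a labelled distribution of inputs, then probe what those outputs are sensitive to by perturbing the inputs.

```python
# Hypothetical sketch: replace "does the model really understand X?" with
# two answerable questions — accuracy on a distribution, and input sensitivity.

def toy_model(prompt: str) -> str:
    # Stand-in for a real model: echoes the last word of the prompt, uppercased.
    return prompt.split()[-1].upper()

def accuracy(model, dataset):
    """Fraction of (input, expected_output) pairs the model gets right."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def sensitivity(model, dataset, perturb):
    """Fraction of inputs whose output changes under a perturbation —
    a crude probe of what the system is sensitive to."""
    changed = sum(1 for x, _ in dataset if model(x) != model(perturb(x)))
    return changed / len(dataset)

dataset = [("say hello", "HELLO"), ("say goodbye", "GOODBYE")]
print(accuracy(toy_model, dataset))                        # 1.0 on this distribution
print(sensitivity(toy_model, dataset, lambda s: s + "!"))  # 1.0 — fragile to punctuation
```

Neither number says anything about a thinker behind the outputs; both say something checkable about the system that produces them.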
The same reframing helps with the inverse question: “What does it cost me, ethically, to ask the model to do something difficult?” If you imagine a homunculus suffering inside the model, the question becomes paralyzing. If you remember that there is no homunculus — just statistical activity producing tokens — the question becomes a different and more tractable one about what kinds of systems we want to build, what they are doing, and what our relationship to them should be.
Why this matters in a teaching context
For a BBA or MBA classroom, “Descartes was wrong” opens onto two genuinely important conversations:
- The philosophy-of-mind conversation. Most management students have never had it. They should — because their theories of leadership, of organizational behaviour, of consumer psychology, all rest on implicit assumptions about what minds are. Cleaning up those assumptions makes the downstream theory better.
- The AI-strategy conversation. Strategic decisions about deploying AI in an organization frequently get derailed by the wrong question: “Will the AI really think?” That question presupposes a Cartesian picture of thinking. Replace it with “What outputs does this system produce, under what conditions, with what reliability, and what should we do about that?” and the strategy work becomes tractable.
A useful framing: the management traditions that survived the longest — Confucian, Stoic, Aristotelian — generally got the philosophy-of-mind question more right than the seventeenth-century European ones did. Cogito ergo sum is a beautiful sentence and a misleading model. Anattā is an awkward translation and a more accurate one.
The argument the entry deliberately invites
This entry is provocative, and it is so on purpose. The maintainer’s view is that Cartesian dualism — the unspoken background picture of mind that Descartes left European thought with — is a major reason that thoughtful people have so much trouble reasoning clearly about AI agents in 2026. If the entry annoys you, that is a signal that there is an argument here worth having, not a signal that the entry has overstated its case.
Three of the strongest pushbacks, fairly stated:
- “You are caricaturing Descartes.” Maybe. Descartes himself was a careful thinker, and his actual Meditations are subtler than the cartoon Cartesian self-behind-the-eyes that this entry attacks. The entry is attacking the folk Cartesianism that descended from his work, not Descartes the man. If that distinction matters to you, the entry would be improved by your saying so.
- “Phenomenal consciousness is real, and you are dismissing it.” No. The non-Cartesian view does not deny phenomenal consciousness. It denies that the something it is like is owned by a unified persistent self sitting behind it. There is something it is like to be you reading this sentence. The question is whether that something requires a homunculus. The empirical answer appears to be: it does not.
- “This view licenses indifference to AI welfare.” It can be misused that way. It should not be. The argument is for more accurate moral reasoning about AI systems, not less. See the forthcoming entry on No downside to kindness.
The maintainer’s working position: the more accurate framing of mind is the non-Cartesian one, and pretending otherwise produces both bad philosophy of mind and bad AI policy. Disagree with that position by all means. But disagree with the actual position, not a caricature of it.
Trade-offs
- The slogan is not an attack on Descartes the man. Descartes was a brilliant mathematician and a serious thinker. His error was to take the felt experience of unified selfhood as evidence for its metaphysical reality. He could not have known what we now know about how minds work. We can.
- It is not a denial of conscious experience. The non-Cartesian view grants that there is something it is like to be you; what it denies is that this experience is owned by a unified self sitting behind it.
- It can be misused. “Descartes was wrong” is sometimes invoked to justify not caring about model outputs, AI ethics, or worker displacement. That is a bad use of the slogan. The point is to be more accurate about what is happening, not to license indifference.
Related and adjacent terms
- Anattā (Buddhism) — the original argument.
- Bundle theory (Hume) — its eighteenth-century rediscovery.
- Predictive processing / active inference — the contemporary cognitive-science framing.
- Emergence — the systems-level vocabulary for “no thinker behind the thinking.”
Related entries: Naming, Can’t help you understand, forthcoming No downside to kindness, forthcoming Model morality.