Can’t help you understand
A working-practitioner principle, semi-humorous in delivery and entirely serious in substance, about the limits of what an AI assistant can do for its user’s mind.
In one sentence
“Can’t help you understand” is the practitioner’s slogan for a real and important boundary: AI can help you research, develop, draft, summarize, and prototype — but it cannot do the actual work of comprehension on your behalf, because comprehension is a state of your own brain that only your own brain can produce.
Why this term exists
It is easy, especially in the first euphoric weeks of using an agentic AI assistant, to feel as though understanding has been outsourced. The agent reads the paper for you. The agent summarizes the report. The agent answers the question. You file the answer in your head as "I get it now."
A few weeks later, you are in a conversation, or a meeting, or a classroom, where the topic comes up. And you discover, painfully, that you do not in fact get it. You have a folder full of summaries the agent produced. You have notes the agent organized. You have a position you can recite. But the underlying grasp — the ability to move flexibly through the topic, generate the next step, see the implication, recognize the analogy — is not there. The summaries were not understanding. They were artifacts left behind by understanding the agent did on your behalf, and that understanding did not transfer.
The slogan “can’t help you understand” names this hard limit so practitioners learn to expect it.
What it actually means in practice
The practitioner internalizes a working partition:
- The agent can help you with: retrieval, summarization, organization, drafting, framing, devil’s-advocate counter-arguments, surfacing the literature, generating hypotheses, structuring an outline.
- The agent cannot help you with: the moment of click in your own brain when a thing finally makes sense. That moment requires you to do the work — reading the original carefully, sitting with the difficulty, holding two ideas in your head until you see how they connect, working through an example yourself, sleeping on it.
There is no shortcut. The agent can lay every plank of the bridge. The agent cannot walk you across.
Working example
A representative episode: a graduate student preparing for a comprehensive exam in management theory uses an AI assistant heavily — to summarize papers, generate study notes, build flashcards, even role-play the oral exam. They show up to the exam with the most thoroughly organized study materials of any student in the cohort. They get a question they did not see coming. The materials do not help. The materials never could have helped, because the question required them to generate an answer from a place of understanding that the materials had built around but never built into. They get through it on raw intelligence and partial recall, and they write afterwards: “I had read everything and understood almost nothing.”
The cure was not better materials. It was less reliance on materials. On the second pass through the study cycle, the student replaced 70% of the agent-generated summaries with their own scrappy, slow, error-prone notes, and learned more in two weeks of that than in the previous month. The agent could organize the path but not walk it.
Why this matters in a teaching context
For a BBA or MBA classroom, "can't help you understand" is a useful counter-narrative to the more aggressive vendor framings of agentic AI as a cognitive replacement. It is also a useful frame for the honest student conversation about how to use these tools well in a course.
A practical rule of thumb to teach students:
- If the goal is to produce an artifact, the agent is enormously helpful.
- If the goal is to know a thing, the agent is a scaffolding tool, not a substitute for the climb.
The two are different. Most coursework is the second kind. Most professional output is the first kind. Confusing them produces students who graduate with portfolios but no education.
A second framing: the same point applies to faculty. An instructor who lets the agent do the understanding part of course preparation — rather than just the organization and drafting parts — produces lectures that sound informed but cannot adapt to a student’s question in real time, because the speaker did not actually do the work of comprehension that real-time adaptation requires.
Trade-offs
- Not all comprehension requires struggle. Some understanding really does come from a well-explained summary, and the agent can produce excellent ones. The principle is not “always do it the hard way.” It is “do not mistake a summary for the act of understanding.”
- The principle is hard to teach in syllabus form. Students often have to feel the gap themselves before the slogan lands. One bad oral exam tends to do it.
- The agent is not the villain. A book can produce the same false sense of understanding if read passively. The agent just makes the failure mode more accessible.
Related and adjacent terms
- Cognitive outsourcing — the broader phenomenon.
- Token angst — the emotional layer that often shows up when this principle is violated.
- Active learning — the pedagogy literature on the same point.
Related entries: Token angst, Naming, Descartes was wrong.