# Topic Index
A thematic view of the dictionary. For the alphabetical list, see entries/. To return to the front page, see home.
## Foundations
These are the load-bearing concepts. Most other terms reference one or more of these.
- Agent — what an agent is, how it differs from a chatbot, and why Gibson’s Agency reads like a design document.
- Embedding — meaning as a list of numbers.
- Tool — the function call that lets an agent act in the world.
- Naming — why the choice of names is structural, not cosmetic.
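As a toy illustration of "meaning as a list of numbers": two embeddings are considered similar when their vectors point in roughly the same direction, which cosine similarity measures. A minimal sketch (the vectors and dimensions here are made up for illustration; real embedding models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Direction-based similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" -- invented values, not from any real model.
king = [0.9, 0.7, 0.1, 0.3]
queen = [0.85, 0.75, 0.15, 0.35]
banana = [0.1, 0.2, 0.9, 0.8]

print(cosine_similarity(king, queen))   # related meanings: close to 1.0
print(cosine_similarity(king, banana))  # unrelated meanings: noticeably lower
```

Everything in the Knowledge & retrieval section below ultimately rests on this one operation.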
## How an agentic system is put together
The architectural pieces of a running agent.
- Gateway — the always-on coordinator process.
- Sub-agent — delegated AI sessions for parallel or focused work.
- Heartbeat — periodic, automated nudges that make agents proactive.
- SOUL.md — the agent persona file as architectural pattern.
- Aunties — specialized single-verb oversight agents that prevent any one component from accumulating unchecked authority.
- The Lowbeer Question — who holds the authority to terminate, who executes it, and what happens when the principal is not available.
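The heartbeat idea is simple enough to sketch. A hypothetical loop (the `check` callback and its stop semantics are assumptions for illustration, not the dictionary's actual implementation):

```python
import time

def run_heartbeat(check, interval_seconds=300.0, max_beats=None):
    """Invoke the agent's check routine on a fixed cadence.

    `check` is the agent's "anything due? any unread messages?" routine;
    it returns True to keep beating, False to stop.
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        if not check():
            break
        beats += 1
        time.sleep(interval_seconds)
    return beats

# Example: three immediate beats with a do-nothing check.
run_heartbeat(lambda: True, interval_seconds=0, max_beats=3)
```

The point of the pattern is that the nudge originates from a clock, not from a human, which is what makes the agent proactive rather than purely reactive.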
## Knowledge & retrieval
How an agent uses information beyond its training data.
- RAG (Retrieval-Augmented Generation) — the dominant pattern for “AI that knows my stuff.”
- Vector database — the storage and retrieval infrastructure for embeddings.
- Fine-tuning — when to retrain a model, and when not to.
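The RAG pattern reduces to two steps: rank stored passages by embedding similarity to the question, then place the winners in the model's context. A minimal sketch with a toy two-item corpus and invented 2-d "embeddings" (a real system would use an embedding model and a vector database here):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(query_vec, corpus, k=2):
    """Rank stored (embedding, passage) pairs by similarity to the query embedding."""
    ranked = sorted(corpus, key=lambda item: dot(query_vec, item[0]), reverse=True)
    return [passage for _, passage in ranked[:k]]

def build_prompt(question, passages):
    """Place the retrieved passages in the model's context ahead of the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy corpus; the vectors are made up for illustration.
corpus = [
    ([0.9, 0.1], "The gateway restarts itself after a crash."),
    ([0.1, 0.9], "Heartbeats fire every five minutes."),
]
print(build_prompt("How often do heartbeats fire?",
                   retrieve([0.2, 0.8], corpus, k=1)))
```

This is why RAG needs no retraining: the model stays fixed, and only the context changes per question, which is also what separates it from fine-tuning.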
## Standards & ecosystems
The shared protocols and runtimes that make agentic systems composable.
- MCP (Model Context Protocol) — the open standard for connecting agents to tools.
- Ollama — local LLM runtime for sovereignty and cost containment.
## The model ecosystem
How models are built, sized, distributed, and accessed — and the strategic camps the industry has organized itself into.
- Parameters — what the “26B” or “70B” in a model name actually means, and why it matters less than vendors imply.
- Open source — AI models whose weights are published publicly. The hedge against vendor lock-in.
- Closed source — AI models delivered only through vendor-hosted APIs. Where the frontier currently lives.
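A parameter count translates to a memory footprint by simple arithmetic: parameters times bits of precision. A rough sketch that ignores activation memory, KV cache, and runtime overhead:

```python
def model_memory_gb(params_billions, bits_per_param):
    """Approximate weight-storage footprint: parameter count x precision.

    Ignores activations, KV cache, and runtime overhead, so treat the
    result as a floor, not a requirement.
    """
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal gigabytes

print(model_memory_gb(70, 16))  # a 70B model at fp16: about 140 GB
print(model_memory_gb(70, 4))   # the same model at 4-bit: about 35 GB
```

This is also why parameter count "matters less than vendors imply": the same 70B weights can demand 140 GB or 35 GB depending on precision, before architecture or training quality even enter the picture.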
## Operations & economics
The ongoing reality of running an agent in production — the costs, the meters, and the feelings practitioners develop about them.
- Token burn — the cost-rate concept. What is it costing?
- Token anxiety — the capacity-bounded operational concept. Will it fit?
- Token angst — the existential concept. Was it worth it?
- Dusty Laptop — the minimum-viable hardware entry point. The old machine in the closet that suddenly has a use.
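Token burn is just rate arithmetic. A sketch with hypothetical per-million-token prices (the numbers are placeholders; check your vendor's current price sheet):

```python
def session_cost_usd(input_tokens, output_tokens,
                     in_price_per_m, out_price_per_m):
    """Token burn for one session: tokens consumed x per-million-token rates."""
    return (input_tokens / 1e6 * in_price_per_m
            + output_tokens / 1e6 * out_price_per_m)

# Hypothetical rates: $3 per million input tokens, $15 per million output.
# 200k tokens in + 30k tokens out works out to about a dollar.
print(session_cost_usd(200_000, 30_000, 3.00, 15.00))
```

Token anxiety and token angst are the other two faces of the same meter: whether the session fits, and whether it was worth it.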
## Working with the agent (and not against your own brain)
The practitioner’s craft of using AI assistants without breaking your own learning, your own thinking, or your own labor-market position.
- English major — why the new bottleneck is specification, not syntax, and who that suddenly favors.
- Can’t help you understand — the hard limit on what an AI can do for your comprehension.
- Descartes was wrong — a philosophical reframing of what is happening when an AI agent (or a human) thinks.
## Planned entries
The dictionary is a work in progress. Expected near-term additions:
- Skill — packaged capabilities aimed at the agent.
- Context window — the boundary of what a model can see at once.
- Prompt / system prompt — the input layer of an agentic system.
- Token — the unit of cost, throughput, and capacity.
- Quantization — why a 70-billion-parameter model can fit in 42 gigabytes.
- Hallucination — what it is, what it isn’t, and why “hallucination” is itself an imperfect name.
- Local-first / sovereignty — running AI without sending data to a third party.
- Model tiering — using different models for different tasks to control cost.
- Approval gating — how to require human consent for sensitive agent actions.
- Provenance — knowing where an agent’s output came from.
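On the quantization entry's "70 billion parameters in 42 gigabytes": you can back out the implied precision from a file size, and 42 GB works out to about 4.8 bits per parameter, which is consistent with 4-bit weights plus per-block scaling metadata. A sketch of that arithmetic:

```python
def effective_bits(params_billions, size_gb):
    """Average bits per parameter implied by a model file of the given size."""
    return size_gb * 1e9 * 8 / (params_billions * 1e9)

print(effective_bits(70, 42))   # about 4.8 bits: 4-bit weights plus metadata
print(effective_bits(70, 140))  # 16.0 bits: unquantized fp16
```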
If a term you wish were here is missing, open an issue and the maintainer will add it.
Maintained by Matthew D. Langenkamp / 雷邁德.