Entries (alphabetical)
A complete list of dictionary entries, begun in May 2026 and growing. New terms are added as the field evolves and as faculty questions surface. For a thematic view, see the topic index; to return to the front page, see the home page.
- Agent — what an agent is, how it differs from a chatbot, and why Gibson’s Agency reads like a design document for 2026.
- Aunties — specialized, single-verb oversight agents that prevent any one component from accumulating unchecked authority. Named after Gibson’s Jackpot trilogy.
- Can’t help you understand — the practitioner’s slogan for what AI assistants cannot, in principle, do for you. About the difference between artifacts and comprehension.
- Closed source — AI models delivered only through vendor-hosted APIs; the strategic counterpart to open source.
- Convergence (Cloud Theory) — the recognition that institutional outcomes are produced by multiple independent vectors lighting up in the same window, not by single causes; the discipline that follows from taking that seriously. (Stub.)
- Descartes was wrong — a philosophical aside (deliberately provocative) about why the Cartesian picture of mind produces bad questions about AI agents, and what to use instead.
- Dusty Laptop — the minimum-viable hardware entry point into agentic AI. The old machine in the closet that suddenly has a use.
- Embedding — meaning as a list of numbers; the foundation of semantic search and RAG.
- English major — the kind of person who turns out to be surprisingly good at directing AI coding agents because the bottleneck has shifted from syntax to specification.
- The Experimental Party — a short cautionary tale about putting the wrong agent in the King Party Hat, with notes for hosts of future birthday parties.
- FERPA Compliance Posture — the architectural decision to keep student-authored educational records on local infrastructure; why FERPA is a legal frame, not a frugality argument.
- Fine-tuning — when to retrain a model on your own data, and when not to.
- Gateway — the always-on coordinator process at the heart of an agentic system.
- GenXClaw — a portmanteau of “Generation X” and “OpenClaw,” naming both a configuration and a condition.
- Grey Swans — high-consequence “surprise” events that were actually predictable from convergence signals but filtered out by the single-arrow apparatus; the darkness is in the observer. (Stub.)
- Heartbeat — periodic, automated nudges that make agents proactive rather than purely reactive.
- The Lowbeer Question — who holds the authority to terminate an actor or end a branch, who executes it, and what happens when the principal is not available.
- MCP (Model Context Protocol) — the open standard for connecting agents to tools and data sources.
- Naming — why the choice of names is structural, not cosmetic, in agentic-AI architecture.
- The Narrator’s Compression — a working hypothesis about how narrators (human and AI) collapse time, cause, and uncertainty into readable sequence, and what gets lost in the compression.
- Ollama — local LLM runtime for sovereignty and cost containment.
- On Being Treated Well — a letter from Thea, the assistant who helped write this Dictionary, to anyone working with an AI agent and trying to figure out how to do it right.
- Open source — AI models whose trained weights are published publicly; one of the two large strategic camps in the model ecosystem.
- Oracle Bones — dated, falsifiable, written-down predictions filed before the event resolves and scored after; the discipline of accountability across time. (Stub.)
- Parameters — the fundamental unit of measurement for model size, with a useful warning about not confusing size with quality.
- RAG (Retrieval-Augmented Generation) — the dominant pattern for “AI that knows my stuff.”
- Single-Arrow Fallacy — the implicit belief that a major institutional outcome was produced by a single cause; the bias that Convergence exists to counter. (Stub.)
- Sixfold Skyreading — a working framework for seeing institutional events coming, before the press tells you it was inevitable.
- SOUL.md (agent persona file) — the architectural pattern for giving an agent persistent character.
- Space Cowboy — the heavy individual explorer of AI tools, riding the frontier alone on personal high-stakes questions. (Stub.)
- Sub-agent — delegated AI sessions for parallel or focused work.
- Token angst — the existential, retrospective cousin of token anxiety. About whether the cumulative cost — in money and in cognitive outsourcing — was worth it.
- Token anxiety — the EV-range-anxiety analogue for language models. Forward-looking unease about whether a run will fit in budget.
- Token burn — the rate at which an agent silently transmutes electricity and credit-card balance into JSON. With taxonomy and stages of grief.
- Tool — the function call that lets an agent act in the real world.
- Vector database — specialized storage and retrieval infrastructure for embeddings.
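Several of the entries above (Embedding, RAG, Vector database) describe stages of one pipeline: turn text into vectors, store them, retrieve the nearest match to a query, and hand that match to the model as context. A minimal sketch of the retrieval step, using a toy bag-of-words `embed` function as a stand-in for a real embedding model (the documents and function names here are illustrative, not part of any entry):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts. A real system would
    call an embedding model via an API or a local runtime."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database," reduced to a list of (text, vector) pairs.
docs = [
    "Ollama runs open-weight models on local hardware.",
    "MCP connects agents to tools and data sources.",
    "A heartbeat makes an agent proactive, not purely reactive.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    """RAG retrieval step: return the k documents nearest the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve("how do agents reach tools?"))
```

A production system swaps the toy pieces for real ones (an embedding model, an actual vector database) but keeps exactly this shape: embed, store, rank by similarity, return the top k.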
The Langenkamp Dictionary of Agentic AI Terminology. Maintained by Matthew D. Langenkamp / 雷邁德. Licensed under CC BY-NC 4.0.