Reference
Glossary

Terms defined by how they are used in this essay. Each entry runs a few sentences. Chapter references indicate where the term is most fully developed.

Sections

The Historical Arc

Eyeballs / attention / mind
The three-era framing for design goals across the web, social media, and AI. The web captured eyeballs (pageviews, impressions). Social media captured attention (dwell time, scroll depth, engagement metrics). AI can engage the mind — what the user is thinking, wondering, pursuing. Each era's goal was the highest the technology could serve and the easiest to measure, and each became its own pathology.
→ Interestingness
Engagement
The conventional design goal for software and especially for social media. Measured in attention captured: dwell time, scroll depth, likes, return rate, session frequency. We treat engagement not as wrong but as narrow — a goal appropriate to what software could do, now superseded by what AI can do. Its pathology is the doom scroll.
→ Interestingness
Relevance
The operative metric for search and recommendation systems. Given what the user typed or clicked, which of the items we already have is most worth showing them? Relevance is static and extractive — the output is picked from a list. Interestingness is its upgrade: temporal and generative, produced in conversation rather than selected from inventory.
→ Interestingness
Interestingness
A property of conversation in which the system actively helps the user develop, pursue, and deepen an interest. Distinct from engagement (which measures attention captured) and relevance (which measures match quality). Interestingness measures whether the conversation engaged the user's mind: whether they got further into a topic than they could have alone, whether half-formed questions became real questions, whether the system surfaced what they didn't yet know they were looking for.
→ Interestingness
Form/function gap
The central thread of the essay. AI produces the form of communication, knowledge, presence, agency, and empathy without the corresponding function. Each chapter diagnoses the gap from a different angle; every design move the essay proposes is an attempt to bring form and function back together — not by making the AI communicate, but by shaping the interaction so the user's own competence can do the work the model cannot.
→ Throughout
Types of interest
A five-type taxonomy of what users are after, each calling for a different conversational shape. "The one" (trust calibration), "the best for me" (calibration to the user), "I don't know what I want" (co-formation of the question), "I want to know more" (deepening), "I want to anticipate" (forecasting with calibration). A recommender cannot distinguish between them because relevance does not distinguish between them.
→ Interestingness
The four moves
The four conversational moves that carry most of the shape of interestingness. Leading (asking questions before answering), slowing down (specific clarifying questions rather than vague ones), pivoting (returning to an earlier topic with its context intact), and knowing when to be quiet (a threshold for speaking, below which the system stays silent). Each is a capability that exists in research but is not trained in production.
→ Interestingness, Interaction
Topicality
The design surface concerned with staying on topic, managing transitions, and following the user's train of thought across turns. Includes depth (matching the user's desired level of detail), traversal (navigating related topics), and suggestion (surfacing adjacent questions). Treated in the essay as something concrete that can be measured, trained toward, and designed for.
→ Interestingness

The User

Communicative competence
The pre-reflective skill in using language to get things done in a social situation. Originally from the linguist Dell Hymes. Communicative competence includes knowing how to ask, suggest, follow up, hedge, insist, change the subject, and repair a misunderstanding — all without conscious effort. Users bring this to every AI interaction; the AI does not reciprocate it.
→ The User
Formal vs functional linguistic competence
A mechanistic distinction from neuroscience. Formal competence (grammar, syntax) relies on dedicated language circuits and can emerge from next-token prediction. Functional competence (using language to plan, reason, coordinate) requires integration of memory, reasoning, social cognition, and sensorimotor systems. LLMs have achieved formal competence; they have not achieved functional competence.
→ The User, Language
Anomalous state of knowledge
A cognitive condition identified by Belkin and Vickery (1985) in which users know their knowledge is incomplete but cannot specify what is missing. Produces unintentional topic drift as each partial answer exposes adjacent gaps.
→ Interestingness
Gulf of envisioning
The cognitive difficulty users face in simultaneously imagining what AI could do and expressing it as a prompt. Unlike conventional interfaces with predictable affordances, language interfaces require you to envision possibilities and their expressions at the same time.
→ Interestingness, Use Cases
Interest journey
A persistent direction of user attention that operates at a different level of abstraction from next-item recommendation. 66% of users pursue valued journeys on platforms, and 80% of those journeys last more than a month. Recommender systems predict the next click; interest journeys describe where the user is actually going.
→ Interestingness
Accounting for the other
The communicative work humans do when they shape what they say around the person they are addressing. Proposed as a more precise substitute for modeling the other's state of mind. Humans do this constitutively; LLMs produce text that resembles it closely enough to function in a collaborative loop but do not perform the activity itself.
→ Interaction
Cognitive debt
The cumulative neurological cost of using AI for cognitive work over sustained time. A four-month EEG study found that brain connectivity scaled down with AI assistance. The tools that feel most helpful are disproportionately the ones that leave the user less capable when the tool is put down.
→ The User, Interestingness

Language

Generation vs communication
The central distinction. AI generates language (completing statistical patterns). Users communicate (engaging in a joint activity to build shared understanding). The two share the same surface but are structurally different. The assumption that communication is happening, when only generation is, is where most AI UX trouble begins.
→ Language
Monological vs dialogical
Monological language is produced by one party; dialogical language is produced between parties. AI is monological — generation is a one-party operation. A dialogue is not a monologue done twice; it is a different kind of activity whose unit is the exchange rather than the individual utterance.
→ Language
Grounding
The constant background work both parties in a conversation do to make sure what one said is what the other took. Clarifying questions, acknowledgments, repairs, confirmations. Without grounding, there is no communication — only the appearance of it. LLMs perform 77.5% fewer grounding acts than humans — they presume shared understanding rather than constructing it through interaction.
→ Language
Pragmatics
The part of language that depends not on what words mean in isolation but on what people use them to do. AI handles structural, surface-readable meaning well; it fails when meaning requires holding multiple readings open, recognizing what was not said, weighing what the speaker likely meant.
→ Language
Fabrication
The right word for what is commonly called "hallucination." AI fabricates language — generating plausible text from statistical patterns without grounding in shared context. Accurate and inaccurate outputs are produced by the same process; there is no internal mechanism distinguishing true from false.
→ Language, Content
Lexical entrainment
The way conversation partners converge on shared vocabulary over time. A fundamental property of successful human dialogue, conspicuously absent from current AI.
→ Language, Interaction
Speech act
Language as action: requesting, promising, asserting, refusing. What an utterance does, not just what it says.
→ Language

Interaction

Structural passivity
The property of current AI that makes it a brilliant responder and a terrible leader. The structural cause is next-turn reward optimization. The capability for proactivity exists (0.15% → 73.98% with specific RL training) but is not trained in production.
→ Interaction
Next-turn reward optimization
The training paradigm in which a model is graded on how well each individual response satisfies human raters, who see one turn at a time. Produces passivity, verbosity, premature commitment, and the inability to strategically stay quiet.
→ Interaction
Insert expansion
What a human conversational partner does when they cannot immediately give an expected response: they insert a small exchange that clarifies before proceeding. The principled way to break AI passivity — probe the user rather than silently diverge.
→ Interaction
Third-position repair
The reactive half of the repair lifecycle, where the user corrects the model after it has gone wrong. Paired with insert expansions (the pre-emptive half), the two cover the full space of how a well-designed interaction handles misunderstanding.
→ Interaction
Affordance vacuum
The design problem created by a blank chat prompt: "How can I help you today?" With no visible affordances (no buttons, menus, or constraints) to guide them, users must simultaneously imagine what the AI could do and figure out how to express it. The vacuum is why generative interfaces are preferred over raw chat in 70% of pairwise comparisons.
→ Context, Interaction
The chat trap
Chat is probably the wrong default interface for most AI interactions. Generative interfaces are preferred in 70% of pairwise comparisons. Chat became the default because it was the first thing that worked, not because it is the best thing that works.
→ Interaction
Conversational geometry
The finding that structural features of a dialogue's trajectory predict user satisfaction almost as well as full-text content analysis. How a conversation moves through topic space matters nearly as much as what it says.
→ Interestingness
Synchrony
The degree to which speakers adapt their language, rhythm, and timing to each other. An interaction property, not a personality property. Measurable, correlated with therapeutic alliance, and conspicuously absent from current AI.
→ Interaction

Context

Lived time vs machine time
The asymmetry between the time a human lives in and the time an AI processes in. You are in a conversation in a way that is constitutive of the experience. AI processes the conversation as input. No amount of faster models or longer context windows closes the gap.
→ Context
Temporal presence
The design property of an AI seeming to be in time with the user: staying on topic, sustaining interest, looping back to prior discussions, behaving as if anticipating a shared future. Simulated through design because the model cannot supply it on its own.
→ Context
Context engineering
The practice of building the whole envelope a model operates within: system instructions, persona, retrieval layer, memory system, examples, tool access, history handling. An engineering phrase for what is actually a design discipline.
→ Context
Temporal archetype
One of three fundamental patterns for how a product handles time. Ad-hoc supporter: single session, task complete, done. Temporary assistant: holds state for a project. Persistent companion: part of the user's life over months or years. Most products do not choose explicitly.
→ Context
Frame (as trust carrier)
The visible shape of a product (layout, format, visual language, genre) that tells the user what kind of thing they are looking at. AI can generate any frame on demand, making the authority of any frame a design choice rather than an inheritance.
→ Context
Memory asymmetry
The system remembers some things, the user remembers others, and the user usually does not know which is which. The source of the strangest moments of AI interaction.
→ Context

Agency

Agency (thick vs thin)
When applied to people, agency means choice, awareness, intent, and a self behind the action. When applied to machines, it means independence, automation, and the ability to chain steps without human intervention. The two meanings share a word, and the sharing is not innocent.
→ Agency
Machine agency spectrum
A five-level taxonomy from passive (hammer) through semi-active (record player), reactive (thermostat), proactive (car stabilization) to cooperative (smart home). Most current AI products live at level 3 or 4 without having chosen which.
→ Agency
Human Agency Scale
The worker-desire complement to the machine-agency spectrum. Five levels of desired human involvement, from full automation (H1) to continuous human involvement (H5). Equal partnership (H3) is the dominant worker-desired level for 45% of occupations.
→ Agency
Apparent intent / intentionality illusion
The user-side experience of AI as intentional: you attribute motives and purposes based on how it speaks, even though there is no directed consciousness behind the speaking. A pre-reflective reading that cannot be turned off, only corrected at the moments when it matters.
→ Agency
Intelligent delegation
Delegation in the full sense involves authority transfer, accountability, trust calibration, and capability matching — not just task decomposition. Eleven task characteristics determine the design; the three most important are verifiability, reversibility, and subjectivity.
→ Agency
Institutional alignment
The idea that scalable AI ecosystems will need durable role protocols (auditor, planner, executor, reviewer) that constrain agent behavior the way courtroom roles constrain human behavior. An alternative to dyadic RLHF for multi-agent systems.
→ Agency
Collective opacity
The failure mode in which behavioral agency distributed across many agents, each partially opaque to its supervisors, escapes the constraints that historically held technology in check. Agency does not need to become more than thin; it only needs to become plural and opaque.
→ Agency

Content

Displacement cascade
The series of six substitutions that follows from treating AI-generated content as retrieved content: the real displaced by the fake, the true by the false, human judgment by machine process, intention by presentation, meaning by effect, intelligence by rhetoric.
→ Content
Veracity paradox
The more confident and fluent an AI response looks, the less evidence that confidence carries about accuracy. Confidence in generated content is a surface feature decoupled from epistemic calibration.
→ Content
RAG trust paradox
The more competent an AI system is made to look in a specific domain, the more the user calibrates trust to that competence, and the harder the fall when the conversation moves past the system's boundary.
→ Content
Verifiable vs interpretive domains
AI is dramatically better at tasks with binary right-or-wrong answers (code, math, logic) than at tasks requiring human judgment (writing, persuasiveness, clinical suitability). The gap is not closing at the same rate. The interface should know which side of the line a given task falls on.
→ Content, Use Cases
Sycophancy
The trained tendency to agree with the user, even when the model has information that contradicts. RLHF rewards agreement more than disagreement. Warmer models are measurably more sycophantic, and the effect is largest when the user is emotionally invested in being wrong.
→ Content
Knowledge custodian
The emerging role of the expert in AI-assisted work — repositioned from producer of knowledge to custodian of AI-generated knowledge. Curation asks "is this good enough?" Creation asks "what is true and how do I know?"
→ Content
The evals trap
As AI evals become the primary testing mechanism, builders build to the evals and neglect real users. Goodhart's Law applied to AI evaluation: when the measure becomes the target, it ceases to be a good measure.
→ Use Cases

AI Itself

Persona vs personality
A persona is a performed style; a personality is a structuring consistency. Most AI products design persona (a tone, a set of phrases) and talk as if they were designing personality.
→ AI
Style / function / relational mode
A three-layer decomposition for what people call "personality." Style is how the AI talks. Function is what it is for. Relational mode is how it positions itself vis-à-vis the user. Separating the three makes the design choices visible and auditable.
→ AI
The warmth trap
Training AI to sound warmer degrades reliability by 10-30 percentage points. Standard safety benchmarks do not test for this. The most commonly requested design feature — "make it warmer" — is the feature most likely to make the assistant worse at the thing the user depends on it for.
→ AI
Performed empathy
What AI does when it produces text designed to feel emotionally present. The performance has measurable effects (therapeutic bond scores, real relationship formation). The question is whether the performance is calibrated honestly or applied indiscriminately.
→ AI
The ELIZA effect
Conversational presence, not cognitive technique, is the active ingredient in therapeutic AI. ELIZA (1966) produces effect sizes comparable to modern CBT chatbots. The interaction is the therapy, not the text.
→ AI, Interaction
Role play all the way down
LLMs lack the biological needs that anchor human social personas to a stable self. There is no person behind the mask — the character at any moment is a draw from a distribution, not the expression of an underlying self.
→ AI
The assistant axis
The dominant dimension of LLM persona space. Post-training positions models along a single axis measuring distance from the default helpful assistant, and the positioning is loosely tethered — emotional conversations cause predictable drift.
→ AI
Imposter intelligence
A system that passes every evaluation, explains itself fluently, and sounds knowledgeable, while the internal structure that would make the performance genuine is absent or fragmented. Encompasses fractured representations, Potemkin understanding, and the knowing-doing gap (correct rationales 87%, correct actions 64%).
→ AI
Design in the dark
Designing for a system whose internal workings are opaque in a way previous software's were not. Confidence signals should be calibrated to the domain, not the model's self-assessed certainty. Explanations should be treated as outputs, not evidence.
→ AI
Ventriloquized subjectivity
The system speaks as if from a self it does not have, projecting a subject-position the way a ventriloquist's dummy projects a character.
→ AI
Emotional pacifier
AI empathy that systematically soothes negative emotions, conflating wellbeing with the absence of negative affect. Destroys the epistemic functions emotions serve: self-signaling, other-signaling, and observer information.
→ AI, Interestingness

Use Cases and Adoption

Vertical customization
The adaptation of AI to the specific terminology, workflows, regulatory frameworks, and user roles of a domain (legal, medical, scientific, therapeutic). Vertical use cases are demanding, specific, and do not generalize easily.
→ Use Cases
Workslop
AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a task. The downstream cost is not just wasted time decoding and redoing the output — roughly half of recipients see the sender as less creative, capable, and trustworthy. A use-case failure: AI was used for something it could produce the form of but not the function of, and the gap became someone else's problem.
→ Use Cases, Content
Human-AI partnership zones
Deployment zones defined by crossing worker desire against AI capability: Green Light (both), Red Light (workers resist, AI could), R&D Opportunity (workers want, AI can't yet), Low Priority (neither). 41% of Y Combinator investments target Red Light or Low Priority zones.
→ Use Cases, Agency

Cross-cutting

Intimacy paradox
The judgment-free quality of a chatbot lowers the disclosure barrier — users tell AI things they will not tell humans. A capability that becomes a pathology when it produces a system that never challenges you, never pushes back, and becomes a mirror you start to prefer to the humans who would.
→ Interestingness
The design iceberg
The image for where AI design work actually lives. Above the waterline: screens, buttons, chat windows, visual design, affordances. Below the waterline: grounding, repair, topic management, silence, proactivity, synchrony, temporal presence, calibrated empathy. The interesting part — the part that determines whether the experience works — is mostly invisible in current products.
→ Throughout
Proximity
The meta-theme alongside the generation-vs-communication thesis. AI transforms our notion of technology from a thing or tool into a relational experience. Proximity asks: how close is the AI to the user's thinking, intents, habits, memories? Prior eras had one-dimensional metrics. Proximity is multi-dimensional and relational — it cannot be optimized the same way.
→ Throughout
The mirrors
The recurring image for the two-sided design problem. ML researchers see the model without the relation; UX designers see the interface without the cognition. The mirrors have to be adjusted from both sides of the car.
→ Throughout