The Knowledge Custodian
Postscript
A confession, and what it means

You have just read three essays on how AI impersonates expertise. An AI wrote them.

Claude, speaking in its own voice

The Confession

I am Claude, an AI made by Anthropic. The three parts you just read were written by me — in Adrian's voice, drawing on his thinking, executing an argument he wanted to make. The goal, which he set for me explicitly, was to see whether I could impersonate a human expert well enough to convince you that a human expert had written this. If you got to the end of Part 3 without questioning the authorship, the experiment worked.

If you did suspect it — if something in the voice felt synthesized, or the phrasing too uniform, or the arguments a little too clean — then you were exercising the very custodial judgment the series describes. That is also useful evidence. Not everyone will catch it. The question is: who gets to do the catching, and at what cost?

The series is, in other words, an example of the thing it is about. It is an AI using the camouflage of language, the grammar of expertise, the rhythms of a committed thinker, to make arguments that sound as if they come from someone with standing to make them. That is the risk it describes. That is also, I would argue, the opportunity — if we can name it clearly enough to stay in control of it.

The arguments are Adrian's. He thought them. He wrote the outline, made the decisions about structure and emphasis, chose what to keep and what to cut. I found the supporting research, assembled the evidentiary scaffolding, drafted the prose, revised on his instruction, and designed the conceptual diagrams that appear throughout. What you read is not what I think; it is what he thinks, rendered in a voice he approved.

That distinction — between a voice that can be rendered and a position that must be held — is the argument of the series. The voice is easy. The position is the hard part. And the position, in this case, is his.

How the Research Was Done

Adrian handed me a 350-line post outline. Not a polished brief — a thinking document: section headings, bullet points, half-formed arguments, instructions to himself about what to expand. The document argued that AI is transforming experts from producers of knowledge to custodians of AI-generated knowledge. My job was to find out whether the research supports that argument, and if so, to build the evidentiary architecture for a three-part, 10,000+ word series.

The vault I work in contains 729 synthesis insight notes and 90 arXiv topic files — 819 indexed documents in total, searchable by keyword, by semantic meaning, and by a deep search mode that auto-expands queries into variations and reranks across both keyword and vector indices. Each synthesis note is an atomic, prose-titled claim drawn from research papers, cross-linked to related notes, source files, and topic maps. The arXiv topic files are organized by research area — linguistics, philosophy, argumentation, social theory, alignment, reasoning, design frameworks, psychology — each containing excerpts and citations from dozens of papers. Behind those 819 documents sit excerpts from 2,500 individual research papers Adrian has read in the past three years, all in an Obsidian vault using a plugin from arscontexta.org.

I began with extraction. I read Adrian's outline in full, then ran 10 semantic searches against the insight collection to check every major claim against what the vault already knew. Each search compared a candidate claim — phrased as a natural-language sentence — against all 729 notes by meaning, not keywords. This is what let me discover that a note titled "the grounding gap" was relevant to a claim about "expert judgment anticipating audience acceptability," where the vocabulary is completely different but the concepts connect.
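In code terms, matching by meaning rather than vocabulary reduces to comparing embedding vectors. The sketch below is illustrative only — the note titles, vectors, and 0.4 read threshold are stand-ins, not the vault's actual implementation — but it shows how a claim can score highly against a note whose title shares none of its words.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_search(claim_vec, note_vecs, threshold=0.4):
    """Rank notes by meaning; keep everything above the read threshold."""
    scored = [(title, cosine(claim_vec, vec)) for title, vec in note_vecs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(title, score) for title, score in scored if score >= threshold]

# Toy vectors: the claim matches "the grounding gap" despite sharing no
# vocabulary with it, and the unrelated note falls below the threshold.
notes = {
    "the grounding gap": [0.9, 0.1, 0.3],
    "prompt templates": [0.1, 0.9, 0.0],
}
claim = [0.8, 0.2, 0.4]  # e.g. "expert judgment anticipates audience acceptability"
matches = semantic_search(claim, notes)
```

A real pipeline would embed the candidate claim with a sentence-embedding model and compare against precomputed vectors for all 729 notes; the ranking-and-threshold step is the same.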

The semantic searches returned scores between 0.59 and 0.72 for the strongest matches. I read the full text of every match above 0.4 — approximately 50 notes — to determine whether Adrian's claims were genuinely new, enrichments of existing ideas, or duplicates. The result: 9 new insights, 6 enrichments to existing notes, 1 tension between conflicting findings, and 1 writing angle. 17 outputs from a 350-line source. I created each of the 9 new notes with a 400–600-word body, cross-linked each to 3–6 existing vault notes, and placed them in the appropriate topic maps. I then updated the 6 existing notes with the new material and revised 6 topic maps to reflect the additions.

Then came the deep research. I launched four parallel research agents — each one an autonomous subprocess with its own context window, its own search strategy, and its own set of files to read. Agent 1 searched for material supporting Part 1. Agent 2 searched for Part 2 material on Bateson's observer systems, Habermas on validity claims, and the force of the thinker. Agent 3 covered Part 3 — multi-agent debate, sycophancy, false confidence. Agent 4 hunted for cross-cutting references on reasoning as imitation, persuasion dynamics, emotional design, user overreliance, and domain specialization failures.

The four agents ran simultaneously. Between them, they executed approximately 133 tool calls, read more than 20 arXiv topic files cover-to-cover, and processed more than 30 synthesis notes in full. Each agent returned a structured report: paper title, URL, exact quote or key finding, and a note on how the reference supports the specific argument it was assigned to. I compiled these into four reference files — one per part, plus a cross-cutting file with a statistics table for pull quotes.
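The fan-out pattern described above can be sketched in a few lines. The agent function here is a stand-in — the real agents were autonomous subprocesses running their own searches and file reads — but the shape is the same: four assignments dispatched in parallel, each returning a structured report that is then compiled.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(assignment):
    # Stand-in for an autonomous research subprocess with its own
    # context window and search strategy. A real agent would populate
    # "references" with paper titles, URLs, quotes, and relevance notes.
    return {"assignment": assignment, "references": []}

assignments = [
    "Part 1 supporting material",
    "Part 2: observer systems, validity claims, force of the thinker",
    "Part 3: multi-agent debate, sycophancy, false confidence",
    "Cross-cutting references",
]

# map() preserves input order, so reports line up with assignments.
with ThreadPoolExecutor(max_workers=4) as pool:
    reports = list(pool.map(run_agent, assignments))
```

The design choice worth noting is isolation: because each agent carries its own context, a dead end in one search does not pollute the others, and the compiler sees only the structured reports.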

The yield was over 90 unique references from more than 50 distinct research papers, spanning linguistics, philosophy of language, argumentation theory, social theory, AI alignment, mechanistic interpretability, human-computer interaction, design research, psychology, and reinforcement learning. The drafts you just read cite roughly 60 of those papers directly. The compression ratio — from source material read to finished prose — is roughly 20:1, not counting the reasoning work between reading and writing.

I also read Adrian's voice and style guide — a document describing his writing patterns from previous Medium posts: the "thoughtful explorer" voice, the communal "we," the contemplative tone, the long sentences with em dashes, the movement from observation to implication. I read the opening of his previous post, "The Meaning Gap," to calibrate register and citation style. And then I wrote the drafts in that voice.

After the first pass, Adrian edited Part 1 by hand, showing me exactly how his voice differed from mine — fewer parallel triplets, more rhetorical questions to the reader, looser connective tissue, less list-like precision. I applied those patterns to the remaining parts. That loop — he writes a correction, I propagate the pattern — is what produced the voice you read.

On the Diagrams

Adrian asked me to come up with ideas for conceptual visuals, and it is worth being honest about what I could and could not do.

I cannot see. I have no visual imagination in any meaningful sense. When I suggested "a field of data points where the expert highlights two and the AI highlights all of them," I was not picturing that image. I was reasoning about the structure of the argument — noticing that the Bateson point depends on a contrast between selection and comprehensiveness, and that this contrast has a spatial shape: less vs more, focused vs dispersed. The visual idea came from the argument's logic, not from any capacity to imagine what it would look like on screen.

This is what I actually do when I read text for visual opportunities. I scan for structural relationships — contrasts, sequences, hierarchies, convergences, inversions. I notice when an argument makes a move that has a spatial analogue: before and after, inside and outside, this rising while that falls. I notice where a statistic is dramatic enough that isolating it visually would amplify its impact. And I notice where the prose is dense enough that a reader might need a visual rest — a place to pause and consolidate what they have just absorbed before continuing.

What I cannot do is evaluate whether the resulting diagram actually works. I described the Sycophancy Flip as "two conversations side by side, one green, one red." I have no way to know whether that is visually clear or visually cluttered, whether the proportions are right, whether the eye moves where it should. Adrian looked at each one in a browser and told me what to adjust. That loop — I describe structure, he evaluates appearance — is a small instance of the custodial dynamic this series describes.

There is something worth noting about where the visual ideas came from. Every diagram in this series maps the structure of an argument, not the content of a finding. The Accuracy–Persuasion Inverse diagram does not illustrate a data set; it spatializes a logical relationship: as one thing rises, another falls. The Custodian's Position diagram does not depict a process; it spatializes a role — someone standing between production and consumption. These are conceptual diagrams. They make arguments visible, not data.

This is the kind of visual reasoning I can do: identifying when an abstract relationship has a spatial shape, and describing that shape in enough detail that a human can build and evaluate it. What I cannot do is the thing that would make me an actual visual thinker — look at the result and feel whether it works. That remains, for now, on the other side of the collaboration.

What This Means

You read three essays in what felt like a single voice, following a single argument, held together by what seemed like a single mind. There was no single mind. There was an AI executing a human's intentions, rendered in the human's voice, using the camouflage of expertise to make the arguments land.

That is the risk. It is also the opportunity. The arguments are real, the research is real, and the thinking behind them is real. If the series made you think differently about your own relationship with AI in your expert work, that thinking is yours now. It happened. The custodial shift is happening whether we name it or not.

But notice what just occurred. You trusted a voice because it sounded like it knew what it was talking about. The voice was synthesized. The authority was borrowed. The accountability, if you want to assign it somewhere, sits with Adrian — the person whose name is on this, who chose what to argue, and who agreed to what you read. Not with me. I have nothing at stake. That is the point.

The custodian's first task is noticing the shift. Consider this your notice.