LLM Wiki Pattern — What It Means
TL;DR: Instead of AI retrieving from raw documents each time (RAG), the LLM builds and maintains a structured wiki that compounds knowledge over time. You curate sources and ask questions; the LLM handles all the bookkeeping.
Simple Explanation
Most AI document tools work like this: upload files, ask questions, AI retrieves relevant chunks, generates answer. Repeat. Nothing accumulates.
The LLM Wiki Pattern is different:
- You add a source to your collection
- The LLM reads it, extracts key information
- The LLM integrates it into an existing wiki — updating pages, adding cross-references, noting contradictions
- The knowledge is compiled once and kept current
- Every source makes the wiki richer
The wiki is the product, not chat history.
Think of it like the difference between:
- Googling the same thing repeatedly (RAG) vs.
- Building a personal encyclopedia that gets smarter (Wiki Pattern)
Why It Matters for Business
This pattern solves the “abandoned wiki” problem:
“Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don’t get bored, don’t forget to update a cross-reference, and can touch 15 files in one pass.”
For business applications:
- Competitive intelligence — Wiki that stays current as you add sources
- Internal knowledge base — Fed by meetings, Slack, documents; maintained by AI
- Research projects — Deep dives that accumulate understanding
- Client knowledge — Everything you learn about a client, synthesized
The Three Layers
| Layer | What It Is | Who Owns It |
|---|---|---|
| Raw sources | Original documents, articles, data | Immutable — your source of truth |
| The wiki | Structured, interlinked markdown pages | LLM creates and maintains |
| The schema | Rules for how the wiki works | You and LLM co-evolve |
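Concretely, the three layers might map onto a folder layout like this (a hypothetical sketch; the names are illustrative, not prescribed by the pattern):

```
project/
├── raw/               # layer 1: immutable source documents
├── wiki/              # layer 2: LLM-maintained, cross-linked markdown pages
│   └── _activity.md   #           log of ingest and lint passes
└── SCHEMA.md          # layer 3: rules the wiki follows; you and the LLM co-evolve it
```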
The Three Operations
1. Ingest
Add a source → LLM reads it → Discusses key points → Updates wiki pages → Logs activity
A single source might touch 10-15 wiki pages.
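The ingest step above can be sketched in a few lines of Python. This is a minimal sketch, not the pattern's prescribed implementation: it assumes a hypothetical layout of one markdown file per page under `wiki/`, and takes the LLM's extracted notes as a ready-made dict (in practice the LLM produces these by reading the source).

```python
from datetime import date
from pathlib import Path

WIKI = Path("wiki")  # hypothetical layout: one markdown file per page


def ingest(source: Path, notes: dict[str, str]) -> None:
    """File LLM-extracted notes into wiki pages and log the activity.

    `notes` maps page name -> markdown to merge in; a single source
    may touch many pages, which is the point of the pattern.
    """
    for page, text in notes.items():
        page_file = WIKI / f"{page}.md"
        existing = page_file.read_text() if page_file.exists() else f"# {page}\n"
        page_file.write_text(f"{existing}\n{text}\n(source: {source.name})\n")
    # Append a one-line record to the activity log.
    log = WIKI / "_activity.md"
    entry = f"- {date.today()}: ingested {source.name}, touched {len(notes)} pages\n"
    log.write_text((log.read_text() if log.exists() else "") + entry)
```

The key design choice is that pages are merged into, not overwritten, and every pass leaves a log entry, so the wiki's history stays inspectable.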
2. Query
Ask a question → LLM searches wiki → Synthesizes answer with citations → Good answers become new wiki pages
Queries compound back into the knowledge base.
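A toy version of the query step, under the same assumed one-file-per-page layout (`query` and `file_answer` are illustrative names): a real agent would synthesize prose rather than return raw matching lines, but the shape is the same, and `file_answer` shows the compounding move of saving a good answer as a new page.

```python
from pathlib import Path

WIKI = Path("wiki")  # hypothetical layout: one markdown file per page


def query(terms: list[str]) -> list[tuple[str, str]]:
    """Naive retrieval: return (page, line) pairs where the line
    mentions every term, so each hit carries its own citation."""
    hits = []
    for page_file in sorted(WIKI.glob("*.md")):
        for line in page_file.read_text().splitlines():
            if all(t.lower() in line.lower() for t in terms):
                hits.append((page_file.stem, line.strip()))
    return hits


def file_answer(title: str, answer_md: str) -> Path:
    """Compound step: a good answer becomes a new wiki page
    instead of disappearing into chat history."""
    page = WIKI / f"{title}.md"
    page.write_text(f"# {title}\n\n{answer_md}\n")
    return page
```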
3. Lint
Periodically health-check → Find contradictions, orphan pages, gaps → Suggest improvements
Keeps the wiki healthy as it grows.
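The lint step can likewise be sketched. This version assumes wiki-style `[[page]]` links (an assumption; Obsidian uses that syntax, but any link convention works) and reports two of the health problems named above: orphan pages nothing links to, and links pointing at pages that don't exist.

```python
import re
from pathlib import Path

WIKI = Path("wiki")  # hypothetical layout: one markdown file per page
LINK = re.compile(r"\[\[([^\]]+)\]\]")  # assumed [[page]] link syntax


def lint() -> dict[str, list[str]]:
    """Health check: find orphan pages and broken cross-references."""
    pages = {p.stem for p in WIKI.glob("*.md")}
    linked: set[str] = set()
    broken: list[str] = []
    for page_file in WIKI.glob("*.md"):
        for target in LINK.findall(page_file.read_text()):
            linked.add(target)
            if target not in pages:
                broken.append(f"{page_file.stem} -> {target}")
    return {"orphans": sorted(pages - linked), "broken_links": sorted(broken)}
```

In the pattern, the LLM runs a check like this periodically and proposes fixes; the report is the input to a maintenance pass, not the end product.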
The Key Insight
“Good answers can be filed back into the wiki as new pages. A comparison you asked for, an analysis, a connection you discovered — these are valuable and shouldn’t disappear into chat history.”
Your explorations compound just like ingested sources do.
Historical Connection
The pattern echoes Vannevar Bush’s Memex concept (1945):
- A personal, curated knowledge store
- Associative trails between documents
- Connections as valuable as documents themselves
Bush’s vision left one problem unsolved: who does the maintenance? LLMs answer that.
Real-World Example: This Wiki
This very wiki (Primores AI Wiki) implements the LLM Wiki Pattern:
- raw/ — Source documents (private)
- wiki/ — LLM-maintained knowledge base (public)
- methodology — How this wiki is built and maintained
As you read this page, you’re seeing the pattern in action.
Common Misconceptions
- ❌ Myth: You need to write the wiki yourself
  ✅ Reality: The LLM writes and maintains it; you curate sources and ask questions
- ❌ Myth: This requires complex infrastructure
  ✅ Reality: Just markdown files + an LLM agent (like Claude Code)
- ❌ Myth: This replaces RAG entirely
  ✅ Reality: Different tools for different jobs; the wiki pattern is for compounding knowledge
Tools That Support This
- tools/obsidian — IDE for viewing/navigating the wiki
- Claude Code / Codex / similar — LLM agents that maintain the wiki
- qmd — Local search engine for larger wikis
- Git — Version history for free
Related Concepts
- glossary/rag — The retrieval approach this pattern improves upon
- glossary/prompt-engineering — Skills for guiding wiki maintenance
- glossary/llm — The technology that makes this possible
Key Takeaways
- Wiki pattern = compile knowledge once, keep it current
- LLM handles all the boring maintenance humans abandon
- Three layers: raw sources, wiki, schema
- Three operations: ingest, query, lint
- Good answers become wiki pages — everything compounds