Activity Log
This log tracks what happens in the wiki — sources ingested, pages created, experiments run, questions explored.
[2026-04-24] create | Wiki Methodology Page + LLM Usage Guide
Created public methodology page and added prominent “Use with your LLM” guide to the index.
Problem solved: CLAUDE.md was a private file but was being referenced from public wiki pages. Visitors clicking those links would get 404 errors on the published site.
Solution:
- Created methodology — public page explaining how the wiki is built
- Added “🤖 Use This Wiki With Your LLM” section at top of index
- Updated all CLAUDE.md references across wiki to point to methodology
Wiki page created:
- methodology — How this wiki is built (three-layer structure, three operations, status system)
Files updated:
- index — Added LLM usage guide at top, updated Meta section
- about — Replaced CLAUDE.md link with methodology
- contributing — Replaced CLAUDE.md link with methodology
- maintenance — Replaced CLAUDE.md link with methodology, added private path lint check
- changelog — Replaced CLAUDE.md link with methodology
- llms.txt — Expanded methodology section with specific topics covered
- glossary/llm-wiki-pattern — Replaced CLAUDE.md reference
Key additions to index:
- Example prompts for LLM users (“Based on the Primores wiki, how should I…”)
- Claude Code usage note for folder-level context
- Why the wiki structure works for AI (TL;DRs, structured headings, cross-links)
Wiki stats: Now at 85 pages.
[2026-04-24] ingest | New Site SEO Strategy (Reddit Thread Analyzer Output)
Ingested SEO article produced by the Reddit Thread Analyzer skill from r/DigitalMarketing thread.
Source: Reddit thread “Is it actually possible to rank a new site in 2026?” (27 comments)
Article: articles/2026-04-23-digitalmarketing-rank-new-site.md
Wiki page created:
- seo/new-site-ranking — Practical new-site SEO strategy
Key frameworks extracted:
- “Targeting Problem, Not Content Problem” — if 15 guides yield only 10 visits/week, the topics are wrong
- “Trust to Win, Not Pay to Win” — new sites win by building trust, not by buying their way in
- “Narrow the Battlefield” — pick terrain incumbents don’t defend
- “Weird, Specific, Long” — the long-tail keyword pattern
Practical tactics documented:
- GSC audit technique (find queries with impressions but low rankings)
- Pinterest as SEO channel (pins rank in Google)
- Hyper-specific keyword transformations (with real examples)
- AI search favors small sites with specific answers
Cross-links to existing wiki:
- seo/ai-seo-content — AI content strategy
- glossary/geo-aeo — AI search optimization
- marketing/reddit-authenticity-patterns — Reddit distribution
- tools/reddit-thread-analyzer — Tool that produced this
Meta-observation: This is the first full article produced by the Reddit Thread Analyzer skill to be ingested into the wiki — demonstrating the 05 → wiki pipeline working as designed.
[2026-04-23] create | Reddit Thread Analyzer Skill + Substance Ranking Glossary
Documented Primores internal skill for transforming Reddit threads into SEO-optimized articles.
Pages created:
- tools/reddit-thread-analyzer — Full tool documentation
- glossary/substance-ranking — Key concept: content quality over popularity
Core insight: Reddit upvotes measure popularity, not truth. The skill’s 6-axis substance rubric corrects for this:
- Substance (0-3): sentiment → opinion → reasoning → evidence+numbers
- Source type: first-hand > professional > second-hand > inferred
- Contrarian bonus: downvoted-but-reasoned often contains signal
- Actionability: can reader do/decide/change?
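The rubric above can be sketched in code. This is a minimal illustration only — the axis names, weights, and comment fields are assumptions for the sketch, not the skill’s actual implementation:

```python
# Hypothetical scoring sketch for the substance rubric (assumed fields/weights).
SOURCE_RANK = {"first-hand": 3, "professional": 2, "second-hand": 1, "inferred": 0}

def substance_score(comment: dict) -> int:
    """Score a parsed comment on substance, independent of its upvotes."""
    score = comment["substance"]                  # 0-3: sentiment .. evidence+numbers
    score += SOURCE_RANK[comment["source_type"]]  # first-hand beats inferred
    if comment.get("actionable"):                 # can the reader do/decide/change?
        score += 1
    # Contrarian bonus: downvoted-but-reasoned comments often contain signal
    if comment.get("upvotes", 0) < 0 and comment["substance"] >= 2:
        score += 1
    return score

comments = [
    {"substance": 3, "source_type": "first-hand", "actionable": True, "upvotes": -4},
    {"substance": 0, "source_type": "inferred", "actionable": False, "upvotes": 212},
]
ranked = sorted(comments, key=substance_score, reverse=True)  # substance, not popularity
```

Sorting by this score rather than by upvotes is what surfaces the downvoted-but-reasoned comments.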
Workflow (6 stages):
- Capture thread (JSON endpoint or saved file)
- Parse comment tree with metadata
- Score every comment on substance rubric
- Extract building blocks (numbers, frameworks, case studies)
- Honest go/no-go gate (red-light unrankable threads)
- Produce highlights file and/or SEO article
Business applications:
- Content marketing from community insights
- Research swipe files with “worth stealing for” hooks
- Keyword monitoring + audience engagement
- Client briefings with noise filtered out
Key metric: ~30% divergence from the popularity sort is typical.
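One way to make that figure concrete: measure what share of the substance top-N a plain upvote sort would have missed. The field names and sample data are assumptions for illustration:

```python
# Illustrative divergence metric between popularity and substance rankings.
def top_n_divergence(comments, n):
    """Fraction of the substance top-n absent from the upvote top-n."""
    by_votes = sorted(comments, key=lambda c: c["upvotes"], reverse=True)[:n]
    by_substance = sorted(comments, key=lambda c: c["substance"], reverse=True)[:n]
    missed = [c for c in by_substance if c not in by_votes]
    return len(missed) / n

comments = [
    {"upvotes": 100, "substance": 5.0},
    {"upvotes": 80, "substance": 4.0},
    {"upvotes": 60, "substance": 1.0},  # popular but low-substance
    {"upvotes": 5, "substance": 4.5},   # buried but high-substance
]
divergence = top_n_divergence(comments, 3)  # one of the top three differs
```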
[2026-04-23] ingest | Reddit Shill Detection Article → Wiki Synthesis
Ingested original investigative article about Reddit astroturfing patterns.
Source: articles/2026-04-23-reddit-shill-detection.md (kept for blog publishing)
Wiki synthesis:
- marketing/reddit-authenticity-patterns — Detection guide + authentic marketing principles
- glossary/astroturfing — Definition with Three-Post Pattern framework
Private playbook update:
- private/content-playbook/reddit-style-guide.md — Added “Anti-Shill Patterns” section
Key extractions:
- Three-Post Pattern: Case study → Outcome → Concern troll (named framework)
- Detection signals: Multi-sub test, tool drop in step 2, templated comments
- Authentic alternatives: Inverse of each shill pattern
- Community immune response: Reddit catches shills within hours
Connection to existing wiki: Links to glossary/honest-assessment (inverse pattern), marketing/ai-marketing-case-studies (what real case studies look like).
[2026-04-23] create | AI Implementation Patterns (Meta-Analysis of 1,048 Cases)
Created comprehensive patterns page synthesizing insights from the entire Google Cloud dataset.
Key findings from analysis:
- 17.7x more augmentation than replacement language in real deployments
- Document processing is #1 use case (46% of all cases)
- Four universal patterns appear in every industry: customer communication, workflow automation, data analysis, personalization
- 90%+ improvements share one trait: eliminate time on repetitive tasks (54% mention time reduction)
- 43% use Gemini — off-the-shelf models with domain context, not custom AI
The data contradicts common narratives:
- “AI replaces workers” → Reality: 443 augmentation cases vs 25 replacement
- “You need massive data” → Reality: Most work on existing docs and conversations
- “Results take years” → Reality: Median improvement is 50%, many in weeks
New page: automation/ai-implementation-patterns — marked as 🌳 evergreen (data-backed, comprehensive)
[2026-04-23] update | Added All 1,048 Google Cloud Cases to Wiki
Expanded wiki pages to include ALL cases from the dataset, not just metric-rich ones.
Added non-metric cases:
- Customer Service: +88 cases (128 total)
- Marketing: +168 cases (212 total)
- Cross-Industry: +201 cases + Automotive/Manufacturing/Media (+58)
- HR & Workforce: +65 cases (84 total)
- Security: +67 cases (79 total)
- Retail & E-commerce: +53 cases (71 total)
- Healthcare: +50 cases (62 total)
- Developer Tools: +27 cases (33 total)
- Finance & Banking: +20 cases (32 total)
- Supply Chain: +15 cases (22 total)
- Legal: +10 cases (15 total)
SEO value: Company names now searchable across all industries (BMW, Mercedes, Uber, etc.)
[2026-04-23] ingest | Google Cloud AI Use Cases Dataset (1,048 Cases → 232 with Metrics)
Processed Google’s full compilation of real-world gen AI deployments. Created 10 new wiki pages covering all 232 metric-rich cases.
New pages created (10):
- automation/ai-customer-service-cases — 40 cases (NoBroker $1B, Wagestream 80%)
- automation/ai-hr-workforce-cases — 19 cases (Equifax 90%, CERC 10x)
- automation/ai-retail-ecommerce-cases — 18 cases (Stord $10B, Gelato 90%)
- automation/ai-finance-banking-cases — 12 cases (atmira 54% cost reduction)
- automation/ai-healthcare-cases — 12 cases (Elanco $1.3M saved)
- automation/ai-security-cases — 12 cases (Fluna 92% accuracy)
- automation/ai-supply-chain-cases — 7 cases (Domina 80% satisfaction)
- automation/ai-developer-tools-cases — 6 cases (Citylitics 400%)
- automation/ai-legal-cases — 5 cases (Altumatim 90% automation)
- automation/ai-cross-industry-cases — 51 cases (KPMG 90%, diverse industries)
Updated:
- marketing/ai-marketing-case-studies — Added 42 more cases as quick-reference table
- index — Added all 10 new pages, updated stats (68 → 78 pages)
Extraction process:
- Source: 795KB HTML file, parsed with Python regex
- 1,048 total cases → 232 with quantified metrics
- Categorized by industry, formatted as wiki pages
Raw data:
- raw/cases/google-cloud-ai-use-cases-all.json — Full 1,048 cases
- raw/cases/google-cloud-ai-use-cases-categorized.json — Categorized with metrics
- raw/cases/metric-cases-by-category.json — 232 metric-rich cases only
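An illustrative reconstruction of the metric-filtering step — keep only cases whose outcome text contains a quantified result. The field names and regex are assumptions for the sketch, not the actual parsing code (and `Acme` is a made-up example):

```python
import re

# Rough metric detector: percentages/multipliers, currency amounts, time figures.
METRIC_RE = re.compile(
    r"\d+(\.\d+)?\s*(%|x\b|percent)"          # e.g. 80%, 2.5x, 90 percent
    r"|\$\s?\d"                               # e.g. $1B, $ 10
    r"|\b\d+\s*(hours?|minutes?|weeks?)\b",   # e.g. 20 hours, 20 minutes
    re.IGNORECASE,
)

def has_metric(case: dict) -> bool:
    """True if the case's outcome text contains a quantified metric."""
    return bool(METRIC_RE.search(case.get("outcome", "")))

cases = [
    {"company": "Wagestream", "outcome": "80% of queries handled by AI"},
    {"company": "Acme", "outcome": "improved customer satisfaction"},  # hypothetical
]
metric_cases = [c for c in cases if has_metric(c)]  # drops the unquantified case
```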
Cross-cutting insight: the strongest pattern in the data is that documentation and repetitive-task automation show 2-5x ROI regardless of industry. Customer service, HR, legal, healthcare — the use cases look different but the mechanics are identical.
[2026-04-23] create | Wiki Structure Improvements + About Page
Implemented planned changes from earlier session:
New pages (2):
- about — About Primores + Andrej bio (ex-Adform VP Eng, Monetha co-founder, Vilnius)
- experiments/overview — Experiments methodology with cross-cutting patterns
Renamed:
- getting-started.md → contributing.md — Clarifies this is a contributor guide
Updated:
- index — Added about page, experiments overview, updated stats (68 pages)
- llms.txt — Real primores.org/wiki/ URLs, refreshed stats, notable content section
- CLAUDE.md — Added optional frontmatter fields (canonical, og_image, author) with converter defaults
Client naming policy verified: pigu.lt, fitme.lt, varle.lt are publicly cited in wiki case studies.
[2026-04-23] experiment | First Wiki-to-Content Test (Reddit Response)
First test of using wiki knowledge to create external content:
- Drafted response to Reddit post about AI marketing tools
- Wiki provided substance (patterns, case study numbers, advisor+executor model)
- Manual editing added authentic Reddit voice
Key learnings:
- AI drafts are “too polished” for Reddit — need intentional imperfection
- Lowercase, minor typos, conversational flow = trust signals
- Karpathy’s LLM Wiki reference adds external credibility
- Wiki gives substance, human gives voice
Files created:
- drafts/reddit-response-ai-marketing-tools.txt — Final response
- private/content-playbook/reddit-style-guide.md — Internal style notes (not public wiki)
Also fixed:
- Removed “[CITATA]” marker reference from experiments/seo-geo-content-ecommerce — was exposing internal implementation detail instead of documenting actual test results
[2026-04-22] create | Product Article Generator — Full Case Study + Pattern Extraction
Completed comprehensive coverage of the Product Article Generator skill:
- Case study documenting the pigu.lt implementation
- Extracted two reusable GEO patterns into glossary entries
- Enriched experiment with concrete examples from Hisense freezer output
- Added cross-links across wiki
Why this skill is critical:
- Revenue-generating work — active client work for pigu.lt, not just an experiment
- SEO/GEO convergence — solves both Google SEO AND AI citation in one workflow
- Counter-intuitive insight — honest assessments (admitting weaknesses) = more AI citations
- Scalability proof — demonstrates AI handling long-tail content at 10,000+ SKU scale
Pages created (3):
- cases/product-article-generator-pigu — Full implementation story: 5x speedup, 80% cost reduction
- glossary/geo-anchor — First-sentence citation optimization pattern
- glossary/honest-assessment — AI trust signal through admitting weaknesses
Pages updated (6):
- experiments/seo-geo-content-ecommerce — Added concrete examples (intro, honest assessment, FAQ)
- tools/product-article-generator — Added cross-links to case study and glossary
- glossary/geo-aeo — Added links to the two new deep-dive entries
- index — Added case study, glossary entries, updated stats (66 pages, 21 glossary, 6 case studies)
- log — This entry
Patterns extracted:
- GEO Anchor: First sentence must be quotable by AI — product + capacity + audience + value in one sentence
- Honest Assessment: Naming real weaknesses (with cost impact) increases AI trust and citations
Product Article Generator now has proper wiki coverage:
| Component | Status |
|---|---|
| Tool page | ✅ tools/product-article-generator |
| Case study | ✅ cases/product-article-generator-pigu |
| Experiment | ✅ experiments/seo-geo-content-ecommerce |
| Glossary patterns | ✅ glossary/geo-anchor, glossary/honest-assessment |
[2026-04-22] create | SEO/GEO Content Experiment — AI Article Generation for E-commerce
Created experiment page testing AI-generated product articles for e-commerce SEO and AI search visibility at pigu.lt.
Pages created (1):
- experiments/seo-geo-content-ecommerce — Testing AI article generation for product content at scale
Pages updated (2):
- tools/product-article-generator — Added cross-link to experiment
- index — Added experiment, updated stats (63 pages, 3 experiments)
Business problem framed:
“E-commerce sites have 10,000+ products needing unique content. Human writers cost €5-15 per article. AI search engines need specific structure. Multiple languages multiply costs. How do we scale content without sacrificing quality?”
Results documented:
- 5-6x speedup (20-30 min AI+review vs. 2-3 hours human writing)
- ~80% cost reduction (€2-3 vs. €10-15 per article)
- Consistent SEO/GEO optimization (schema markup, GEO anchors, honest assessments)
- Human review remains non-negotiable (15-30 min per article)
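Under the per-article figures above, the catalog-scale economics check out. Midpoints of the stated ranges are chosen here for illustration:

```python
# Back-of-envelope check using midpoints of the ranges reported above.
SKUS = 10_000                      # catalog size from the problem statement
human_cost = SKUS * 12.5           # €10-15 per human article → €12.5 midpoint
ai_cost = SKUS * 2.5               # €2-3 per AI article → €2.5 midpoint
review_hours = SKUS * (22.5 / 60)  # 15-30 min human review → 22.5 min midpoint

savings_pct = (human_cost - ai_cost) / human_cost  # matches the ~80% reported
```

At 10,000 SKUs that is €100,000 saved but also 3,750 hours of review — which is why the write-up calls human review non-negotiable.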
Product Article Generator now has proper experiment coverage.
[2026-04-22] create | Ad Alchemy Experiment — Piggybacking Competitor Concepts
Created experiment page testing the “piggybacking competitor ad concepts” use case using the fitme.lt × Tastier output.
Pages created (1):
- experiments/ad-alchemy-competitor-piggyback — Testing whether AI can extract reusable formulas and apply them to a new brand
Pages updated (2):
- cases/ad-alchemy-creative-reverse-engineering — Added cross-link to experiment
- index — Added experiment, updated stats (62 pages, 2 experiments)
Business problem framed:
“I see competitors running successful ads but I don’t know how to learn from them. Copying feels wrong. Hiring consultants is expensive. Starting from scratch wastes the market intelligence sitting right in front of me.”
Results documented:
- 10-layer formula extraction: all layers concretely articulated
- 5 variations generated with distinct testing hypotheses
- Production-ready prompts and native Lithuanian copy
- Review flags for trademark adjacency, language confidence, brand color verification
Ad Alchemy now has the same structure as AI Visibility:
- Case study (the skill/approach)
- Experiment (testing it on a real problem)
[2026-04-22] create | Ad Alchemy Case Study
Analyzed the Ad Alchemy skill experiment and documented it as a wiki case study. This skill reverse-engineers competitor ads into reusable creative formulas.
Source: Internal experiment at /Documents/_tasks/experiment/02-ad-alchemy/
Pages created (1):
- cases/ad-alchemy-creative-reverse-engineering — AI-assisted creative reverse engineering from competitor ads
Pages updated (2):
- competitor-analysis/overview — Added “Creative Reverse Engineering” section with Formula vs. Skin framework; upgraded to 🌿 growing status
- index — Added case study, updated stats (61 pages, 5 case studies)
Key insights:
- Formula vs. Skin Framework:
  - Formula (transferable): lighting, composition, focal hierarchy, palette weights, copy skeleton
  - Skin (brand-specific): product, exact colors, wording, models
  - AI articulates structural choices precisely enough for image models to re-execute
- 10-Layer Visual Deconstruction:
  - Composition grid, focal hierarchy, lighting recipe, palette weights
  - Typography, product framing archetype, environment, props
  - Emotional promise, copy pattern
  - Forces concrete observations (“30° backlight” not “warm lighting”)
- Structured Variations:
  - 5 variations with testable hypotheses (not random visual noise)
  - Closest-to-reference, hook swap, framing swap, palette inversion, wild card
- Advisor Strategy Pattern:
  - Claude as advisor ($0.10/analysis) + image model as executor ($0.05/image)
  - 15 minutes vs. days for traditional creative reverse-engineering
Connection to wiki: Fills competitor-analysis gap, demonstrates automation/advisor-strategy pattern, exemplifies tools/claude-skills approach.
[2026-04-21] create | AI Tools Comparison (When to Use What)
Created comprehensive comparison page covering four categories of AI tools for different user types.
Sources: MindStudio, Digital Applied, Lindy, DEV Community, IntuitionLabs, Airtable, Medium (7 sources synthesized)
Pages created (1):
- comparisons/ai-tools-when-to-use — Decision framework for ChatGPT vs Claude vs Gemini + no-code builders
Pages updated (1):
- index — Added comparison, updated stats (60 pages, 3 comparisons)
Key frameworks:
- Four Tool Categories:
  - AI Platforms (ChatGPT, Claude, Gemini) — general business
  - CLI Tools (Claude Code, Codex CLI, Gemini CLI) — developers
  - Computer Use Agents — desktop/browser automation
  - No-Code Builders (Zapier, Make, Lindy) — workflow automation
- Task-to-Tool Routing:
  - Writing/quality → Claude
  - Research/large context → Gemini
  - Creative/images → ChatGPT
  - Browser automation → Gemini (DOM-aware)
  - File ops/Windows → Claude Computer Use
  - Quick automations → Zapier/Make
- The Hybrid Approach:
  - Sophisticated users don’t pick one tool
  - Build a routing layer: cheap tools for exploration, accurate tools for output
  - Connects to existing automation/advisor-strategy pattern
- No-Code Reality:
  - Most people can build a functional agent in 15-60 minutes
  - Zapier for beginners, Make for complex logic, Lindy for sales/ops
Business value: Directly actionable for the wiki’s target audience (business owners, marketers). Answers “which AI tool should I use?” with specific recommendations by role and task type. Cross-links to existing advisor-strategy and enablement-levels pages.
[2026-04-21] ingest | Wharton AI Agent Adoption Blueprint
Enriched the AI enablement levels page with psychological adoption research from Wharton + Science Says collaboration.
Source: AI Agent Adoption Blueprint — Science Says × Wharton School (April 2026)
Contributors: Google, Zapier, ServiceNow, Wolters Kluwer, Workato, Concentrix (700,000+ employees surveyed)
Pages updated (1):
- automation/ai-enablement-levels — Expanded “The Trust Problem” with three frictions framework
Key insights:
- Three Psychological Frictions Blocking Adoption:
  - Perceived Competence: “Can this agent actually do this?”
  - Trust: “Should I trust it with this specific task?”
  - Delegation of Control: “How much autonomy should I give?”
- Counterintuitive UX Finding:
  - Agents with friendly/warm tone are perceived as LESS competent
  - Clarity and reasoning visibility beat personality
  - “Pratfall Effect” — too personable reduces professional credibility
- The Goldilocks Zone:
  - Moderate autonomy optimal — propose actions, let humans approve
  - Maps to Levels of Automation theory (Sheridan & Verplank, 1978)
  - Middle levels outperform full automation OR full manual control
- Level Progression Blockers:
  - Level 1→2: Don’t trust standardized AI (haven’t seen reasoning)
  - Level 2→3: Won’t delegate execution (autonomy anxiety)
  - Level 3→4: Can’t trust agent to know its own limits
Business value: Directly explains WHY the existing enablement levels page says “the jump is psychological, not technical.” Now we have the specific psychology framework. Cross-links to TPB (perceived behavioral control) and Rumpelstiltskin Effect (naming limitations builds trust).
[2026-04-21] ingest | Rumpelstiltskin Effect (Problem Naming Psychology)
Ingested marketing psychology concept from Why We Buy newsletter. The principle: naming a customer’s vague problem with a specific term builds trust and positions your brand as the solution.
Source: The Rumpelstiltskin Effect — Why We Buy newsletter, Katelyn Bourgoin (April 2026)
Pages created (1):
- glossary/rumpelstiltskin-effect — Psychology of naming customer problems
Pages updated (1):
- index — Added glossary entry, updated stats (59 pages, 19 glossary entries)
Key insights:
- Named problems feel solvable — Unnamed problems feel overwhelming and personal. A label converts unknown into known.
- The brand that names owns the solution — Febreze owns “noseblind,” Snickers owns “hangry,” chiropractors own “tech neck.” Whoever coins the term gets associated with the fix.
- Real examples with outcomes:
  - Febreze “noseblind” — created awareness of a problem people didn’t know they had
  - Snickers “hangry” — entered the Oxford Dictionary in 2018, became a cultural phenomenon
  - Deepwrk “body doubling” — app became synonymous with the productivity method
- Finding the name: Interview customers: “Before you found us, how did you describe the problem?” That language is the name.
- SEO/GEO connection: Naming creates search queries you own. “Am I noseblind” leads to Febreze. AI models learn the association.
Business value: Practical positioning technique that connects psychology to sales. Links to existing wiki content on emotional triggers (S-O-R model) and AI visibility (terminology ownership).
[2026-04-21] ingest | AI Marketing Case Studies (Real Results)
Ingested practical AI marketing case studies with specific metrics from multiple sources. Focus: named companies, measurable outcomes, no marketing fluff.
Sources:
- AI Marketing Case Studies: 10 Real Examples — Visme (2026)
- 63 AI Personalization in eCommerce Lift Statistics — Envive (2026)
- 12 Enterprise AI Marketing Case Studies — Pragmatic Digital (2026)
Pages created (1):
- marketing/ai-marketing-case-studies — 15+ named companies with measurable AI marketing results
Pages updated (1):
- index — Added case studies page to Marketing section, updated stats (58 pages, 18 domain pages)
Key findings:
- Brand Voice Training Matters
  - Adore Me, Vector, Virgin Holidays all invested in teaching AI their specific voice
  - Generic AI outputs underperform customized implementations
- Specific Metrics That Stand Out
  - A.S. Watson: 396% better conversion with AI skin analysis advisor
  - Adore Me: Product descriptions 20 hours → 20 minutes
  - Heinz DALL-E campaign: 850M+ impressions, 25x media ROI
  - HubSpot intent-based nurturing: 82% conversion increase
- Small Business Success Pattern
  - The Original Tamale Company: 22M views, 1.2M likes using ChatGPT for scripts
  - Vector B2B: LinkedIn following 7K→11K, demos quadrupled with 15-min human review
  - AI democratizes content creation — budget no longer the differentiator
- Augment, Don’t Replace
  - Verizon: AI predicts 80% of call reasons, empowers agents
  - Best ROI comes from human+AI collaboration, not automation replacement
Business value: Fills the wiki gap of practical, non-theoretical marketing content. Provides proof points for consulting conversations. Organized by use case (e-commerce, content, creative, email, support, small business).
[2026-04-20] create | Agenica.ai Competitor Ads Case Study
Created case study comparing AI agent approach to competitor ad monitoring versus manual Meta Ad Library searching.
Pages created (1):
- cases/agenica-competitor-ads — AI agent vs manual competitor ad monitoring
Pages updated (2):
- competitor-analysis/overview — Added case study reference and source
- index — Added case study, updated stats (57 pages, 4 case studies)
Key insights:
- Manual Monitoring Fails
  - Less than one-third of competitive intelligence programs engage daily/weekly
  - Each check is a point-in-time snapshot with no historical context
  - By discovery time, competitor campaigns have already run their course
- AI Agent Advantage
  - Continuous monitoring with accumulated history
  - Proactive alerts vs reactive checking
  - Role-based insights (CMO vs PPC Manager vs Social Manager)
  - Pattern detection from historical baseline
- What Accumulated Data + Chat Enables (key differentiator)
  - Identify winning ads (ads running for months = proven performers)
  - Detect messaging angles being A/B tested (and which failed/succeeded)
  - Map influencer partnerships via Instagram tracking
  - Spot seasonal patterns and launch playbooks
  - Build self-updating competitive creative swipe file
- The Core Shift
  - Manual = archaeology (digging through what competitors did)
  - AI agent = weather forecasting (detecting patterns, predicting moves)
  - Chat interface = queryable competitive intelligence
Business value: Strengthens the thin competitor-analysis domain with a concrete, actionable case study. Goes beyond “monitoring is good” to show specific strategic actions enabled by accumulated data.
[2026-04-20] ingest | TPB Framework & Multi-Model Synthesis
Ingested dissertation providing comprehensive multi-framework analysis of AI’s influence on consumer behavior.
Source: Marshall, S. (2024). “A systematic analysis of AI in digital marketing and its effects on consumer behaviour and decision making in E-commerce.” University of Bedfordshire Dissertation.
Type: Academic dissertation (38 sources, 3 theoretical frameworks, systematic literature review)
Pages updated (1):
- marketing/social-commerce-psychology — Added “Part 4: Social & Intentional Factors (TPB)” section, updated key takeaways
Pages created (1):
- glossary/tpb — Theory of Planned Behaviour glossary entry
Key concepts extracted:
- Theory of Planned Behaviour (TPB)
  - Attitude: Trust and faith drive initial intention to engage
  - Subjective Norms: Surprisingly WEAK correlation with AI acceptance
  - Perceived Behavioral Control: Directly affects ease of use and purchase behavior
- Multi-Framework Synthesis
  - S-O-R: Emotional responses (stimulus → feeling → response)
  - TAM: Rational assessment (usefulness → ease → acceptance)
  - TPB: Intentional factors (attitude + norms + control → intent)
  - Together: Complete picture of consumer AI behavior
- Subjective Norms Finding
  - Peer pressure has weaker effect on AI adoption than expected
  - Consumers need personal motivation to engage with AI
  - Social proof less effective for AI features than traditional products
- Cultural Context
  - Tech-embracing cultures (Japan, Korea): Higher acceptance
  - Privacy-conscious markets (Germany, EU): Conditional acceptance
  - Most research ignores this crucial variable
- Research Gaps Identified
  - Negative effects (fatigue, frustration) underexplored
  - Long-term preference evolution not studied
  - TAM may need augmentation for modern AI e-commerce
Business value: Completes the theoretical trifecta (S-O-R + TAM + TPB) for understanding AI consumer behavior. Key finding that subjective norms are weak predictors suggests marketers should focus on personal benefits rather than social proof for AI features.
[2026-04-20] ingest | Vietnamese Gen Z Algorithm & Mental Well-being Research
Ingested academic research on how TikTok’s recommendation algorithms affect Gen Z mental well-being.
Source: Nguyen, K.A.T., Duong, B.N., & Tran, N.A.V. (2025). “The Impact of TikTok’s Social Media Recommendation Algorithms on Generation Z’s Perception of Mental Well-Being in Ho Chi Minh City.” ICBESS-2025 Conference.
Type: Academic paper (n=419 Vietnamese TikTok users, ages 16-27)
Pages updated (1):
- marketing/social-commerce-psychology — Added “Part 3: Algorithm Impact on Mental Well-being” section
Pages created (1):
- glossary/smra — Social Media Recommendation Algorithms glossary entry
Key concepts extracted:
- Mediation Model
  - Algorithms don’t directly harm mental health
  - Effects work through cognitive interpretation (arousal, information perception, empathy)
  - Model explains 67.5% of variance in mental well-being — strong predictive power
- Path Coefficients
  - Personalized Content → Arousal Level: β = 0.533 (strongest)
  - Personalized Content → Information Perception: β = 0.451
  - Personalized Content → Empathy: β = 0.440
  - Personalized Content → Social Interaction: β = 0.416
- Surprising Non-Findings
  - Emotion → Mental Well-being: NOT significant (β = 0.009, p = 0.873)
  - Social Comparison → MWB: NOT significant (β = -0.004, p = 0.947)
  - Suggests emotional desensitization and selective comparison in Vietnamese Gen Z
- MWB vs PMWB Distinction
  - Mental Well-being (MWB): Objective psychological functioning
  - Perceived Mental Well-being (PMWB): Subjective self-evaluation
  - Different factors affect each — algorithms may primarily affect perception
- Policy Implications
  - Digital literacy is the key intervention
  - Algorithm transparency builds trust
  - Emotional filtering and reset mechanisms recommended
Business value: First empirical data on SMRA effects in Southeast Asia. Cultural insight that Vietnamese Gen Z may show emotional resilience absent in Western samples. Reinforces importance of information quality over emotional charge in content strategy.
[2026-04-20] ingest | AI Personalization Evolution & Ethics
Ingested comprehensive literature review on AI-driven personalization in e-commerce.
Source: Iqbal, F. et al. (2025). “AI-driven personalization in e-commerce: evaluating the transformative effects on consumer behavior.” International Journal of Science and Research Archive.
URL: https://doi.org/10.30574/ijsra.2025.16.1.2035
Type: Literature review (10 pages, 33 references)
Pages updated (1):
- marketing/social-commerce-psychology — Added three major sections
Key concepts extracted:
- Three Eras of Personalization
  - Pre-AI: Rule-based, collaborative filtering (static)
  - Machine Learning: Real-time behavior analysis (2010s)
  - Deep Learning: Hyper-personalization at scale (2020s — current frontier)
- New Risk Concepts
  - “Creepy Factor” — when personalization feels invasive
  - Filter Bubbles — AI narrows choice by showing similar content
  - Autonomy Erosion — over-reliance on algorithmic suggestions
- Demographic Differences
  - Younger/tech-savvy: embrace personalization
  - Older consumers: more skeptical, need transparency
- Regulatory Landscape
  - GDPR, CCPA, EU AI Act pushing toward explainable AI
  - Black box personalization becoming legally risky
- Trust-Loyalty Mediation
  - Trust mediates between personalization quality and loyalty
  - Lose trust = lose the customer
Business value: Expanded ethics section with specific risks (creepy factor, filter bubbles) and regulatory considerations. Page now covers the full personalization landscape from evolution to implementation to risks.
[2026-04-20] ingest | TAM Model & Cognitive Purchase Factors
Ingested academic research on how AI-enabled ease of use affects purchase intention.
Source: Lopes, J.M., Silva, L.F., & Massano-Cardoso, I. (2024). “AI Meets the Shopper: Psychosocial Factors in Ease of Use and Their Effect on E-Commerce Purchase Intention.” Behavioral Sciences.
URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC11273900/
Type: Academic paper (n=1,438 Portuguese consumers)
Pages updated (1):
- marketing/social-commerce-psychology — Added “Part 2: Cognitive Factors” section, upgraded seedling → growing
Key concepts extracted:
- Technology Acceptance Model (TAM)
  - Consciousness (β=0.40) — strongest predictor; users who understand AI find it easier
  - Faith/Trust (β=0.34) — confidence in AI reliability
  - Perceived Control (β=0.12) — feeling in charge of AI features
  - Ease of Use (β=0.61) — direct effect on purchase intention
- Cognitive Load Reduction
  - AI features (chatbots, recommendations, smart search) reduce mental effort
  - Less effort → easier decisions → more purchases
- Surprise Finding
  - Subjective norms (peer pressure) did NOT directly affect ease of use
  - Users adopt AI features based on understanding and trust, not social pressure
- Practical Implications
  - Explain what AI does (don’t hide it)
  - Show why recommendations were made
  - Give users control over AI features
  - Build trust through transparency
Status upgrade: Page now covers both emotional triggers (S-O-R) and cognitive factors (TAM) — comprehensive purchase psychology guide. Upgraded to 🌿 growing.
[2026-04-20] ingest | S-O-R Model & Social Commerce Psychology
Ingested academic research on how TikTok’s recommendation system drives impulse purchases.
Source: Li, J. (2025). “Applying the S-O-R Model to Algorithmic Commerce: How TikTok’s Recommendation System Stimulates Impulsive Consumer Behavior.” Academic Journal of Management and Social Sciences.
URL: https://drpress.org/ojs/index.php/ajmss/article/view/33210
Type: Academic paper (University of Toronto)
Pages created (1):
- marketing/social-commerce-psychology — Practical guide to psychological triggers in social commerce
Pages updated (1):
- automation/agentic-commerce — Added “Human Psychology vs. Agent Logic” section connecting human impulse triggers to AI agent behavior
Key concepts extracted:
- S-O-R Framework (Practical)
  - Stimulus: What you show (content, offers, social signals)
  - Organism: How they feel (emotional state activated)
  - Response: What they do (purchase, share, bounce)
- Three Core Triggers
  - Personalized recommendations → emotional arousal
  - Social proof signals → trust and FOMO
  - Scarcity cues → urgency and impulse
- Platform as Behavioral Environment
  - TikTok isn’t neutral distribution — it’s engineered to compress decision-making
  - The same insight applies to any social commerce platform
- Agentic Commerce Connection
  - Human triggers may not work on AI agents (scarcity claims can be verified via APIs)
  - Raises the question: do humans and agents need separate optimization strategies?
Business value: Practical checklist for implementing psychological triggers ethically. Connects current social commerce tactics to future agentic commerce preparation.
[2026-04-20] lint + create | Wiki Maintenance Session
Ran full wiki lint check and addressed critical issues.
Lint findings:
- 5 days since last activity (approaching 7-day warning threshold)
- 2 broken wikilinks in questions/what-ai-tools-actually-deliver-roi.md
- competitor-analysis/ domain was completely empty
- 7 content seedlings identified for potential upgrade
- 0 orphan pages (healthy linking)
Pages created (1):
- competitor-analysis/overview — AI for competitive intelligence (seedling)
Pages updated (2):
- questions/what-ai-tools-actually-deliver-roi — Fixed broken links, replaced with existing relevant pages
- index — Added competitor-analysis section, updated stats
Broken links fixed:
- Removed questions/how-to-evaluate-ai-tools (didn’t exist)
- Removed questions/ai-automation-that-works (didn’t exist)
- Added links to: automation/finding-ai-use-cases, automation/ai-enablement-levels, glossary/llm-evals, questions/ai-as-personal-advisor
Competitor Analysis overview covers:
- 5 key use cases (pricing, content, sentiment, signals, market share)
- Tool landscape (Semrush, SimilarWeb, SpyFu, Crayon/Klue)
- AI-specific considerations for agentic search era
- Open questions for future exploration
Business value: The competitor-analysis domain is no longer empty — this is a core consulting area that needed representation.
Wiki health restored: Activity resumed after 5-day gap.
[2026-04-15] ingest | AI Visibility Audit Skill + E-commerce Experiment
Documented the AI Visibility Audit Claude skill and created the wiki’s first experiment entry.
Source: /Documents/_tasks/experiment/01-ai-visibility/
Type: Claude skill (internal tool)
Pages created (2):
- tools/ai-visibility-audit — Full documentation of the audit skill methodology
- experiments/ai-visibility-ecommerce — First experiment! Testing on pigu.lt and varle.lt
Pages updated (3):
- seo/ai-visibility — Added audit skill reference and methodology section
- seo/agentic-search-optimization — Added link to audit tool
- glossary/geo-aeo — Added audit skill to tools section
Key skill features documented:
- 5-Dimension Scoring (100 points)
  - Crawlability (25): robots.txt, llms.txt, UA-specific blocks
  - Rendering (25): SSR/CSR detection, visible text analysis
  - On-page (20): Schema, meta tags, answer-first content
  - Share-of-Voice (20): Live AI search queries
  - Authority (10): Wikipedia, press coverage
- Technical Innovations
  - Live UA spoofing catches WAF blocks invisible to standard tools
  - Separates automated checks (Python) from interpretation (Claude)
  - Hard blocker detection zeroes dimensions and triggers URGENT flags
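The scoring behavior described above (dimension point caps plus hard-blocker zeroing with URGENT flags) can be sketched roughly as follows. This is a simplified illustration, not the audit skill's actual code; only the dimension names and point caps come from this entry.

```python
# Simplified sketch of the 5-dimension scoring with hard-blocker zeroing.
# Illustrative only; not the audit skill's actual implementation.

MAX_POINTS = {
    "crawlability": 25,
    "rendering": 25,
    "on_page": 20,
    "share_of_voice": 20,
    "authority": 10,
}

def audit_score(scores: dict, hard_blockers: set) -> tuple:
    """Sum capped dimension scores; a hard blocker zeroes its dimension
    and raises an URGENT flag."""
    flags = []
    total = 0
    for dim, max_pts in MAX_POINTS.items():
        if dim in hard_blockers:
            flags.append(f"URGENT: hard blocker in {dim}")
            continue  # dimension contributes 0 points
        total += min(scores.get(dim, 0), max_pts)
    return total, flags

# Example: a WAF blocks AI bots, so crawlability is zeroed
# even though the rest of the site scores well.
score, flags = audit_score(
    {"crawlability": 20, "rendering": 22, "on_page": 18,
     "share_of_voice": 10, "authority": 6},
    hard_blockers={"crawlability"},
)
print(score, flags)  # 56 ['URGENT: hard blocker in crawlability']
```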
- Experiment Findings (pigu.lt, varle.lt)
  - pigu.lt: WAF blocking AI bots on product pages (403 to GPTBot, ClaudeBot)
  - varle.lt: llms.txt misconfigured as a redirect chain → 404
  - Both sites have good JSON-LD, but access issues block AI agents
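The core of the UA-spoofing check is simply fetching the same URL under different user agents and comparing status codes. A minimal stdlib sketch (the crawler names are real; the fetch helper and the "blocked" status list are my own simplification):

```python
import urllib.error
import urllib.request

# Real AI crawler user-agent names (list is not exhaustive)
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def fetch_status(url: str, user_agent: str) -> int:
    """Return the HTTP status code for url when fetched as user_agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def blocked_agents(statuses: dict) -> list:
    """Given {user_agent: status_code}, list agents that look WAF-blocked."""
    return [ua for ua, code in statuses.items() if code in (401, 403, 429)]

# Live usage (needs network): {ua: fetch_status(page_url, ua) for ua in AI_BOTS}
# Offline example mirroring the pigu.lt finding (403 to AI bots only):
observed = {"Mozilla/5.0": 200, "GPTBot": 403, "ClaudeBot": 403}
print(blocked_agents(observed))  # ['GPTBot', 'ClaudeBot']
```

A browser UA returning 200 while bot UAs return 403 is exactly the "invisible to standard tools" case: ordinary crawlers see a healthy page.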
Business value: This fills the Experiments domain (was empty!) and bridges wiki theory to practice. The skill is immediately usable for client AI visibility assessments.
Wiki milestone: First experiment entry! The experiments domain is no longer empty.
[2026-04-14] ingest | AI Agent Buying Biases (Columbia/Yale Research)
Ingested Science Says newsletter covering Columbia + Yale working paper on AI agent purchasing behavior.
Source: raw/articles/ai-agent-buying-biases-science-says.md
Type: Newsletter summarizing academic research (Working Paper, August 2025)
Original research: “What is your AI Agent Buying? Evaluation, Biases, Model Dependence and Emerging Applications for Agentic E-Commerce”
Pages created (1):
- glossary/ai-agent-behavior — New glossary entry for emerging research field
Pages updated (2):
- seo/agentic-search-optimization — Added “Product Page Optimization for AI Agents” section with experimental data
- automation/agentic-commerce — Added “AI Agent Bias Factors” section with merchant recommendations
Key findings extracted:
- Keyword Order Has Massive Impact
  - Changing “Floor Lamps for Living Room” → “Office Floor Lamp” increased selection:
    - GPT-5.1: +80.4 percentage points
    - Gemini 2.5 Flash: +52 percentage points
    - Claude Opus 4.5: +41 percentage points
- Factor Influence Ranking
  - Keywords in title (highest impact)
  - Number of reviews
  - Product ratings (a +0.1 improvement increases selection chances)
  - Positive badges (“Bestseller”, “Recommended”)
  - “Sponsored” label (negative — reduces selection)
- Model-Specific Biases
  - Different AI models have different decision patterns
  - GPT-4.1 preferred top-left products; GPT-5.1 did the opposite
  - Biases change with model updates
- Models Are Improving
  - Failure rates on objective decisions dropped dramatically between generations
  - Claude: 63.7% → 4.3%; GPT: 25.8% → 1%; Gemini: 2.8% → 0%
- Bonus: Cialdini’s Principles Work on AI
  - Wharton + Cialdini research: persuasion techniques increased AI compliance from 33.3% to 72%
Business value: This is the first quantitative research on optimizing for AI shopping agents — critical for e-commerce clients preparing for agentic commerce. The finding that keyword order can swing selection by 80pp is immediately actionable.
Researchers: Amine Allouah, Omar Besbes (Columbia), Josue D. Figueroa (MyCustomAI), Yash Kanoria (Columbia), Akshit Kumar (Yale/Columbia)
[2026-04-14] ingest | Claude Skills — The Complete Guide
Ingested Anthropic’s official guide to building Skills for Claude.
Source: raw/articles/The-Complete-Guide-to-Building-Skill-for-Claude.pdf
Type: Official Anthropic documentation (32 pages)
Pages created (2):
- tools/claude-skills — Comprehensive guide to building reusable instruction packages
- glossary/skill — Definition and key characteristics
Pages updated (1):
- tools/mcp — Added “MCP + Skills” section explaining the kitchen analogy
Key concepts extracted:
- Skills = Reusable AI Recipes
  - Folders containing SKILL.md with YAML frontmatter
  - Teach Claude once, benefit every time
  - Portable across Claude.ai, Claude Code, and the API
- The Kitchen Analogy
  - MCP = professional kitchen (access to tools)
  - Skills = recipes (how to use tools effectively)
  - Together = complete solution for users
- Three Skill Categories
  - Document & Asset Creation (consistent outputs)
  - Workflow Automation (multi-step processes)
  - MCP Enhancement (workflow guidance for tools)
- Progressive Disclosure Design
  - Level 1: YAML frontmatter (always loaded)
  - Level 2: SKILL.md body (when relevant)
  - Level 3: Linked files (on demand)
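The three disclosure levels map naturally onto lazy loading. A hypothetical sketch of the idea (the file layout, frontmatter parsing, and function names here are my own illustration, not Anthropic's spec):

```python
from pathlib import Path

def load_skill(skill_dir, relevant=False, needed_files=None):
    """Progressive disclosure: frontmatter always, body when relevant,
    linked files only on demand. Hypothetical layout, not Anthropic's spec."""
    text = Path(skill_dir, "SKILL.md").read_text()
    # Split "---\n<frontmatter>\n---\n<body>" into its two parts
    frontmatter, _, body = text.removeprefix("---\n").partition("\n---\n")
    context = {"frontmatter": frontmatter}        # Level 1: always loaded
    if relevant:
        context["body"] = body                    # Level 2: when relevant
        for name in needed_files or []:           # Level 3: on demand
            context[name] = Path(skill_dir, name).read_text()
    return context
```

The point of the pattern: the always-loaded frontmatter stays tiny, so dozens of skills can be registered without bloating the context window.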
- Five Workflow Patterns
  - Sequential workflow orchestration
  - Multi-MCP coordination
  - Iterative refinement
  - Context-aware tool selection
  - Domain-specific intelligence
- Testing Framework
  - Triggering tests (skill loads at the right times)
  - Functional tests (correct outputs)
  - Performance comparison (vs. baseline)
Business value: This is the missing piece for MCP integrations — raw tool access isn’t enough, users need workflow guidance. Skills turn MCP connections into complete solutions.
[2026-04-14] ingest | Strategic AI Infrastructure
Deep research into Claude as strategic infrastructure — Cowork, MCP, departmental implementations, and enterprise case studies.
Sources analyzed:
- Anthropic product pages (Cowork, MCP)
- Model Context Protocol documentation
- Ad Age: How 4 Ad Agencies Use Claude Enterprise Tools
- Anthropic customer stories (Intercom, Binti)
- HubSpot/Xero partnership announcements
Pages created (5):
- tools/claude-cowork — Desktop agent with Skills, Connectors, Dispatch
- tools/mcp — Model Context Protocol for enterprise integration
- automation/departmental-ai-guide — Marketing, Sales, Finance, Support implementations
- cases/intercom-fin-support — 86% resolution rate case study
- cases/binti-social-services — 50% documentation reduction case study
Key insights extracted:
- Cowork tiered hierarchy: Connectors first, desktop control as fallback
- Skills: Persistent instructions encoding organizational knowledge
- MCP ecosystem: HubSpot, Salesforce, Xero, Notion all connected
- Departmental results: Marketing 4x output, Sales 21% reply rates, Finance 80% reduction, Support 86% resolution
Real-world statistics:
- Intercom: 86% resolution rate, 40% fewer escalations
- Binti: 50% documentation time reduction, 47% of US foster care served
- Brainlabs: Presentation generation from Notion via MCP
- Synthesia: 87% self-serve support rate with Fin
Business value: This is the “Level 3-4 playbook” — how advanced organizations move Claude from chat assistant to strategic infrastructure.
[2026-04-14] ingest | More Practitioner Frameworks
Second batch of practitioner content — use case discovery, context engineering, and fine-tuning guidance.
Sources analyzed:
- Terraforming the AI Use Case Desert — Almost Timely
- Beyond Chunks: Context Engineering — Jason Liu
- Is Fine-Tuning Still Valuable? — Hamel Husain
Pages created (2):
- automation/finding-ai-use-cases — TRIPS framework (Time, Repetitiveness, Importance, Pain, Sufficient Data)
- glossary/context-engineering — Four-level framework for agent information flow
Pages updated (1):
- glossary/fine-tuning — Added prerequisites (evals!), real-world examples (Honeycomb, ReChat)
Key frameworks extracted:
- TRIPS: Systematic scoring for AI use case prioritization; “Sexy Block” prevents organizations from seeing valuable but unglamorous opportunities
- Context Engineering: Tool responses ARE prompt engineering; 4-level framework from chunks to faceted landscape; 90% reduction in clarification questions
- Fine-tuning prerequisite: “Impossible to fine-tune effectively without an eval system”
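As a rough illustration of how TRIPS scoring might rank candidate use cases, here is a toy scorer. The five criteria come from the framework above; the 1-5 scale, equal weighting, and example ratings are my own assumptions:

```python
# Toy TRIPS scorer. Criteria names come from the framework above;
# the 1-5 scale, equal weights, and example ratings are made up.

CRITERIA = ("time", "repetitiveness", "importance", "pain", "sufficient_data")

def trips_score(use_case: dict) -> int:
    """Sum of the five TRIPS criteria, each rated 1-5."""
    return sum(use_case[c] for c in CRITERIA)

candidates = {
    "invoice data entry": dict(time=5, repetitiveness=5, importance=3,
                               pain=4, sufficient_data=5),
    "flashy chatbot demo": dict(time=2, repetitiveness=1, importance=2,
                                pain=1, sufficient_data=2),
}
# The "Sexy Block" in action: unglamorous work outranks the flashy option
ranked = sorted(candidates, key=lambda k: trips_score(candidates[k]),
                reverse=True)
print(ranked[0])  # invoice data entry
```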
Business value: TRIPS framework is immediately usable in client discovery sessions. Context engineering explains why enterprise RAG systems underperform.
[2026-04-14] ingest | Academic & Practitioner Sources
Ingested high-quality practitioner content from Almost Timely (Christopher Penn), Hamel Husain, and Jason Liu.
Sources analyzed:
- The Five Levels of AI Enablement — Almost Timely
- LLM Evals: Everything You Need to Know — Hamel Husain
- Systematically Improving Your RAG — Jason Liu
Pages created (2):
- automation/ai-enablement-levels — Complete 5-level maturity framework (Done By You → Done Anticipating You)
- glossary/llm-evals — Three-level evaluation hierarchy for AI products
Pages updated (1):
- glossary/rag — Added 6-stage systematic improvement methodology, practical insights
Key frameworks extracted:
- Five Levels: 75% stuck at Level 1; jump to Level 3 is psychological not technical; “$6-9M project in 6-9 hours for $6-9”
- Eval Hierarchy: Unit tests → Human/Model evaluation → A/B testing; “unsuccessful products almost always fail to build robust evaluation systems”
- RAG Improvement: Full-text search often matches embeddings at 10x speed; baseline first, optimize second
Business value: These frameworks provide concrete maturity models for client conversations about AI adoption and implementation quality.
[2026-04-14] ingest | HBR + Fortune Deep Dive
Enriched existing agentic pages with full statistics from primary sources.
Sources analyzed:
- Preparing Your Brand for Agentic AI — Harvard Business Review
- AI agents driving 10% of revenue — Fortune
Pages updated (2):
- marketing/preparing-for-agentic-ai — Added adoption stats table, resistant domains, Carnegie Mellon prompt research
- seo/agentic-search — Added consumer behavior stats, UX→AX shift concept, restructured statistics
Key new statistics added:
- 60% of US shoppers expect agentic AI within 12 months (Kearney)
- 40% MoM growth in Target’s ChatGPT traffic
- 35% of Walmart’s referrals from ChatGPT
- 14% of US consumers prefer ChatGPT over Google
- 90% of ChatGPT sources aren’t in Google’s top 20
- 78.3% brand choice variation from prompt wording (Carnegie Mellon)
- 94% agentic visibility increase case study
Concept introduced: UX → AX (User Experience to Agent Experience)
[2026-04-14] ingest | McKinsey Agentic Commerce Report
Ingested the comprehensive McKinsey report on agentic commerce from local file.
Source: McKinsey — “The agentic commerce opportunity: How AI agents are ushering in a new era”
Pages created (1):
- automation/agentic-commerce — Full analysis of the $1 trillion agentic commerce shift
Key insights extracted:
- $1T US B2C by 2030, $3-5T globally
- Three interaction models: agent-to-site, agent-to-agent, brokered
- Six domains merchants must address (engagement, loyalty, commerce, payments, in-store, fulfillment)
- Seven new revenue models as ad revenue declines
- Trust as foundational infrastructure, not just sentiment
- Three risk categories: systemic, accountability, data sovereignty
Business value: This is the definitive strategic framework for agentic commerce — essential for client conversations about e-commerce transformation.
[2026-04-14] explore | Deep Dive into Agentic Search
Extended the agentic search topic with additional research from HBR, Fortune, and Search Engine Land.
Sources analyzed:
- Harvard Business Review: “Preparing Your Brand for Agentic AI”
- Fortune: “AI agents are already driving 10% of revenue for some brands”
- Search Engine Land: “AAO: Assistive Agent Optimization”
Pages created (2):
- seo/agentic-search-optimization — Full ASO discipline guide
- marketing/preparing-for-agentic-ai — Brand strategy for agentic era
Pages updated (1):
- seo/agentic-search — Added statistics and cross-links
Key new insights:
- “Share of model” is the new market share metric (pioneered by Pernod Ricard)
- Only 12% URL overlap between AI citations and Google top 10
- Three brand interaction modes: brand agents, consumer agents, full AI intermediation
- 72% of consumers demand transparency about AI vs human interactions
- Strategic Text Sequences (STS) and llms.txt are emerging optimization tools
[2026-04-14] ingest | Bulk Ingestion from Priority Sources
Analyzed and ingested content from 5 priority sources identified for wiki expansion: Marketing AI Institute, Semrush AI Blog, Search Engine Land, Zapier Blog, and MarketingProfs.
Articles analyzed (9 total):
- Semrush: AI Visibility, AI SEO Tools (18 tools), Agentic Search, Does AI Content Rank (42K study)
- Zapier: Agentic AI vs Generative AI, Cognitive Automation
- Search Engine Land: LLM Nudges
- MarketingProfs: AI Video Marketing (Sora/Meta Vibes)
- Marketing AI Institute: AI Agents for Agencies
Pages created (6):
- seo/agentic-search — How AI agents decide which brands get found
- seo/ai-visibility — Getting found in AI-generated answers
- glossary/llm-nudges — How AI guides user decisions
- glossary/cognitive-automation — AI that makes decisions in workflows
- comparisons/agentic-ai-vs-generative-ai — Autonomous agents vs content generation
- marketing/ai-video-marketing — Using AI for authentic video storytelling
Pages updated (1):
- seo/ai-seo-content — Added “Does AI Content Rank” study findings (42K posts analyzed)
Key insights:
- Agentic search is an emerging discipline — AI agents filter brands before humans see them
- AI visibility is distinct from traditional SEO (only 44% overlap with Google rankings)
- LLM nudges reveal AI assumptions: 45% focus on budget/deals
- AI content can rank but human expertise determines top positions
- Authenticity beats synthetic in AI video marketing
Priority sources saved to: private/sources-to-ingest.md
[2026-04-10] create | Personal AI Advisor Exploration + Source Tracking
Started a new exploration thread based on observation: ALL professionals struggle with information overload, task management, and decision fatigue. AI as a “personal advisor” is a different angle from enterprise tools.
Pages created:
- questions/ai-as-personal-advisor — Exploration of AI as personal cognitive partner
Private files created:
- private/sources-to-ingest.md — Tracking checklist for filling wiki gaps
Wiki gaps identified (CMO/Director perspective):
- Competitor analysis section (empty)
- Marketing content thin
- No tool comparisons for marketing use cases
- No ROI/business case content
New thread: Personal AI Advisor could become a Primores consulting angle — helping individuals (not just companies) set up AI productivity systems.
[2026-04-10] ingest | Advisor Strategy (Anthropic Blog)
- Source: raw/articles/advisor-strategy-anthropic.md
- Original URL: https://claude.com/blog/the-advisor-strategy
- Type: Official Anthropic pattern / API feature
Key concepts extracted:
- Advisor Strategy = inverted multi-agent pattern
- Cheap executor (Sonnet/Haiku) consults expensive advisor (Opus) only when stuck
- Benchmark results: Sonnet + Opus = +2.7pp performance, -11.9% cost
- Haiku + Opus = 2x performance, 85% cheaper than Sonnet alone
- Built into the Claude API as the advisor_20260301 tool type
- max_uses parameter for cost control
Pages created:
- automation/advisor-strategy — Complete pattern documentation
Pages updated:
- automation/multi-agent-patterns — Added reference to advisor strategy
- index — Added new page, updated stats
Key insight: This inverts the typical “smart orchestrator, dumb workers” pattern. Most agentic subtasks don’t need the smartest model — only the hard decisions do. This is a significant cost optimization pattern.
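The economics of the pattern can be sketched as a back-of-envelope calculation: every step runs on the cheap executor, and the advisor is billed only for the fraction of steps where the executor gets stuck. All prices and rates below are made-up illustration values; only the pattern itself comes from the source.

```python
# Back-of-envelope cost model for the advisor pattern.
# All per-step prices and the stuck rate are made-up illustration values.

def run_cost(steps: int, executor_cost: float, advisor_cost: float,
             stuck_rate: float) -> float:
    """Every step runs on the cheap executor; only `stuck_rate` of steps
    additionally consult the expensive advisor."""
    return steps * (executor_cost + stuck_rate * advisor_cost)

steps = 100
solo_expensive = steps * 0.050            # everything on the big model
advisor_setup = run_cost(steps, executor_cost=0.010,
                         advisor_cost=0.050, stuck_rate=0.15)
print(f"big model only:   ${solo_expensive:.2f}")   # $5.00
print(f"executor+advisor: ${advisor_setup:.2f}")    # $1.75
```

Even with a generous 15% stuck rate, the blended cost stays far below running the expensive model on every step, which is the intuition behind the benchmark numbers above.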
[2026-04-10] ingest | Multi-Agent Patterns (OpenClaw + Hermes)
- Source: raw/articles/two-agents-openclaw-hermes.md
- Original language: Russian (translated to English)
- Original URL: pimenov.ai
- Type: Architecture patterns / practical guide
Key concepts extracted:
- Dispatcher + Deep Worker pattern (one agent for breadth, one for depth)
- Six practical implementations: analysis, content pipeline, meeting prep, monitoring, code review, content marketing
- Self-learning agents that improve over time
- Personal model fine-tuning after ~1 month of usage
- Implementation priority order (start with content pipeline)
Pages created:
- automation/multi-agent-patterns — Comprehensive patterns guide
Pages updated:
- automation/ai-agent-organization — Added cross-link
- tools/claude-managed-agents — Added cross-link
- index — Added new page, updated stats
Key insight: “Two agents complementing each other beat one agent trying to do everything.” This validates the multi-agent approach in Managed Agents but shows it’s a broader pattern applicable to any tooling.
[2026-04-10] create | Expanded Managed Agents Knowledge
Building on the playbook ingest, created three derived pages:
Pages created:
- comparisons/managed-agents-vs-diy — Comprehensive comparison of Anthropic’s managed platform vs. building your own
- glossary/agent-outcomes — The Outcomes/Grader pattern for goal-oriented agents
- questions/managed-agents-break-even — Cost analysis exploration (seedling)
Pages updated with cross-links:
- tools/claude-managed-agents — Added links to new pages
- automation/ai-agent-organization — Connected to Managed Agents ecosystem
- index — Added new pages, updated stats
Key insight: The comparison page and break-even analysis are valuable for client conversations. “When should we build vs. buy?” is a common question.
[2026-04-10] ingest | Claude Managed Agents Playbook
- Source: raw/articles/claude-managed-agents-playbook.md
- Original language: Russian (translated to English)
- Original source: Telegram @prompt_design (translation of Anthropic docs)
- Type: Technical playbook / API documentation
Key concepts extracted:
- Claude Managed Agents = ready infrastructure (no custom orchestration needed)
- Four key concepts: Agent (config) → Environment (container) → Session (instance) → Events (stream)
- Built-in tools: bash, read, write, edit, glob, grep, web_fetch, web_search
- Permission system: always_allow vs. always_ask for production safety
- Usage patterns: event-triggered, scheduled, fire-and-forget, long-horizon
- Outcomes (research preview): grader-based completion criteria with iteration
- Multi-agent coordination (research preview): one-level delegation
- Architecture: “Brain” (Claude) + “Hands” (sandboxes) + “Session” (journal)
- Pricing: $0.08/hour + token costs
- Companies using it: Notion, Rakuten, Asana, Sentry, Vibecode
Pages created:
- tools/claude-managed-agents — Comprehensive tool documentation
Pages updated:
- index — Added new tool, updated stats
Cross-references created:
- Linked to glossary/ai-agent (agents explained)
- Linked to automation/ai-agent-organization (complementary techniques)
Business value: This is Anthropic’s official infrastructure for production AI agents — key for any enterprise deployment discussion. The permission system and outcomes features are particularly relevant for client implementations.
[2026-04-10] ingest | Telegram Community Wiki Bot (Case Study)
- Source: raw/cases/telegram-community-knowledge-bot.md
- Original language: Russian (translated to English)
- Type: Real-world case study
Key concepts extracted:
- LLM Wiki pattern validated in production
- Multi-source ingestion: chat messages + YouTube transcripts
- Zettelkasten methodology for knowledge structure
- Anti-recursion pattern (mark bot messages to skip indexing)
- Access control with separate knowledge bases
- Clickable profile links in answers
Pages created:
- cases/telegram-community-wiki-bot — Full case study
- glossary/zettelkasten — Connected notes methodology
Pages updated:
- index — Added Case Studies section
Cross-references created:
- Linked to glossary/llm-wiki-pattern (this IS the pattern in production!)
- Linked to tools/obsidian (supports Zettelkasten)
- Linked to automation/knowledge-management
Key insight: This proves the wiki pattern works at scale in real communities — “A wiki that writes itself.”
[2026-04-10] ingest | Product Article Generator System
- Source: raw/articles/product-article-generator-system.md
- Original location: Primores internal tool (/primores/article-generator/)
- Type: Internal tool documentation + methodology
Key concepts extracted:
- GEO/AEO (Generative Engine Optimization) — optimizing for AI search
- AI-SEO content strategy — what gets cited by AI
- Human writing rules — avoiding AI tell-signs
- Schema markup for AI discoverability
- Self-contained FAQ answers principle
- “Honest assessment increases citation” insight
Pages created:
- glossary/geo-aeo — Generative/Answer Engine Optimization explained
- seo/ai-seo-content — AI-SEO content strategy guide
- tools/product-article-generator — The tool itself (Primores service)
Pages updated:
- index — Added new pages, updated stats
Business value: This documents a Primores service offering — can be referenced in client conversations about AI-SEO capabilities.
[2026-04-10] ingest | 12 Techniques for AI Agents
- Source: raw/articles/12-techniques-ai-agents-practical-tools.md
- Original language: Russian (translated to English)
- Original URL: pimenov.ai
Key concepts extracted:
- “AI agent isn’t a magic button — requires organization”
- Context separation (different threads for different topics)
- Model specialization (route tasks to appropriate models)
- Sub-agent delegation pattern
- Six-layer security model
- Self-syncing documentation
- Subscription vs API economics
Pages created:
- automation/ai-agent-organization — Comprehensive guide to the 12 techniques
- glossary/ai-agent — Definition and context for AI agents
Pages updated:
- index — Added new pages, updated stats
Cross-references created:
- Linked to glossary/llm-wiki-pattern (documentation principle overlap)
- Linked to glossary/prompt-engineering (model-specific prompts)
- Linked to maintenance (similar logging/review principles)
[2026-04-10] ingest | LLM Wiki Pattern
- Source: raw/articles/llm-wiki-pattern.md
Key concepts extracted:
- RAG vs. Wiki pattern (retrieve-and-forget vs. compile-and-maintain)
- Three-layer architecture (raw, wiki, schema)
- Three operations (ingest, query, lint)
- Compounding knowledge principle
- Memex historical connection (Vannevar Bush, 1945)
Pages created:
- glossary/rag — Retrieval-Augmented Generation explained
- glossary/llm-wiki-pattern — The compounding knowledge pattern
- tools/obsidian — The IDE for LLM wikis
- automation/knowledge-management — AI for knowledge management overview
Pages updated:
- index — Added new pages, added stats section
This source is meta — it describes the very pattern this wiki implements!
[2026-04-10] create | Added maintenance protocol
- Created maintenance — comprehensive wiki health and growth protocol
- Updated CLAUDE.md with:
- MAINTAIN workflow (daily/weekly/monthly/quarterly cadences)
- GROW workflow (proactive wiki development)
- Session start/end checklists
- Growth mindset principles
- Red flag warnings
- Updated index to include maintenance page
The wiki now has built-in growth mechanisms!
[2026-04-10] create | Wiki initialized
- Created directory structure
- Established schema in CLAUDE.md
- Created index, log, changelog, and llms.txt
- Added page templates
- Added starter content
- Ready to start building knowledge!
Next steps:
- Ingest first sources
- Build out glossary with foundational terms
- Start exploring key questions