AI Visibility Audit — Claude Skill for GEO Assessment
TL;DR: A Claude skill that produces a defensible, scored audit (0-100 points) evaluating how well a domain is exposed to AI crawlers and generative search engines. Answers: “Can AI engines find us, parse us, and cite us?”
What It Does
This skill turns the emerging discipline of GEO/AEO (Generative Engine Optimization / Answer Engine Optimization) into a repeatable audit methodology. It:
- Checks crawlability — robots.txt, llms.txt, sitemaps, live UA-spoofed fetches
- Assesses rendering — SSR vs CSR detection, visible text in raw HTML
- Evaluates on-page signals — meta tags, Open Graph, JSON-LD, answer-first content
- Tests share-of-voice — Live queries to see if you appear in AI answers
- Measures authority — Wikipedia mentions, press coverage
The output is a markdown report written for marketing/brand owners, not developers.
Who It’s For
- Marketing leaders wanting to know if they’re visible in AI search
- SEO professionals expanding into GEO/AEO
- E-commerce teams preparing for agentic commerce
- Consultants offering AI visibility assessments
The Scoring System
| Dimension | Points | What It Measures |
|---|---|---|
| Crawlability | 25 | Can AI bots access your content? |
| Rendering | 25 | Is content visible without JavaScript? |
| On-page Signals | 20 | Schema, meta tags, answer-first structure |
| Share-of-Voice | 20 | Do you appear in live AI answers? |
| Authority | 10 | Wikipedia, press, domain trust signals |
Total: 100 points
Score Interpretation
| Score | Verdict |
|---|---|
| 80-100 | Excellent — Well-optimized for AI visibility |
| 60-79 | Good — Minor gaps to address |
| 40-59 | Fair — Significant optimization needed |
| 20-39 | Poor — Major blockers present |
| 0-19 | Critical — Likely invisible to AI |
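The roll-up from dimension scores to a verdict band can be sketched in a few lines. This is a hypothetical illustration using the point caps and bands from the tables above; the skill's actual point values live in its `scoring_rubric.md`.

```python
# Hypothetical roll-up of the five dimension scores into a total and a
# verdict band, mirroring the two tables above.
MAX_POINTS = {
    "crawlability": 25,
    "rendering": 25,
    "onpage_signals": 20,
    "share_of_voice": 20,
    "authority": 10,
}

def verdict(scores: dict[str, int]) -> tuple[int, str]:
    """Clamp each dimension to its cap, sum, and map to a verdict band."""
    total = sum(min(scores.get(dim, 0), cap) for dim, cap in MAX_POINTS.items())
    if total >= 80:
        return total, "Excellent"
    if total >= 60:
        return total, "Good"
    if total >= 40:
        return total, "Fair"
    if total >= 20:
        return total, "Poor"
    return total, "Critical"

print(verdict({"crawlability": 25, "rendering": 20, "onpage_signals": 15,
               "share_of_voice": 12, "authority": 7}))  # (79, 'Good')
```

Clamping each dimension to its cap keeps a single over-scored check from inflating the total past what the rubric allows.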
How It Works
Technical Architecture
```
┌──────────────────────────────────────────┐
│ Claude Skill                             │
├──────────────────────────────────────────┤
│ SKILL.md (workflow orchestration)        │
│ │                                        │
│ ├── scripts/audit.py (orchestrator)      │
│ │   ├── check_crawlers.py                │
│ │   ├── check_rendering.py               │
│ │   ├── check_onpage.py                  │
│ │   └── generate_prompts.py              │
│ │                                        │
│ ├── references/ (knowledge base)         │
│ │   ├── ai_crawlers.md (~100 bot UAs)    │
│ │   ├── scoring_rubric.md (point values) │
│ │   ├── llms_txt_spec.md                 │
│ │   └── remediation_playbook.md          │
│ │                                        │
│ └── report_template.md                   │
└──────────────────────────────────────────┘
```

The 5-Step Workflow
- Workspace Setup — Creates structured directories for raw data and parsed results
- Technical Audit — Python scripts fetch and analyze robots.txt, HTML, meta tags
- Interpretation — Claude reads JSON findings + scoring rubric, computes scores
- Live Share-of-Voice — User-approved queries run via WebSearch
- Final Report — Markdown report with evidence, scores, and prioritized fixes
Key Technical Innovations
Live User-Agent Spoofing
The skill fetches the same URL with different bot identities:
- GPTBot (OpenAI)
- ClaudeBot (Anthropic)
- PerplexityBot
- Googlebot
This catches CDN/WAF blocks that are invisible in robots.txt but silently block AI crawlers.
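The check above can be sketched with the standard library alone. This is an assumed implementation, not the skill's actual `check_crawlers.py`: the user-agent strings are abbreviated placeholders (the real bot UAs live in the skill's `ai_crawlers.md`), and the function simply records the HTTP status each identity receives.

```python
# Minimal sketch of the UA-spoofing check: fetch one URL under several
# bot identities and compare the HTTP status codes. A WAF that returns
# 403 only to AI bot UAs shows up here even if robots.txt allows them.
import urllib.error
import urllib.request

# Placeholder UA strings for illustration; real bot UAs are longer.
BOT_UAS = {
    "GPTBot": "Mozilla/5.0 AppleWebKit/537.36 (compatible; GPTBot/1.0)",
    "ClaudeBot": "Mozilla/5.0 (compatible; ClaudeBot/1.0)",
    "PerplexityBot": "Mozilla/5.0 (compatible; PerplexityBot/1.0)",
    "Googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1)",
}

def fetch_as_bots(url: str, timeout: float = 10.0) -> dict[str, int]:
    """Return the HTTP status each bot identity receives for the URL."""
    results = {}
    for bot, ua in BOT_UAS.items():
        req = urllib.request.Request(url, headers={"User-Agent": ua})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[bot] = resp.status
        except urllib.error.HTTPError as exc:
            results[bot] = exc.code  # e.g. a WAF serving 403 only to AI bots
        except urllib.error.URLError:
            results[bot] = 0  # connection-level block (DNS, TLS, firewall)
    return results
```

Comparing the Googlebot status against the AI-bot statuses is the tell: a page that returns 200 to Googlebot but 403 to GPTBot and ClaudeBot is being selectively blocked.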
SSR/CSR Detection
Measures visible text characters in raw HTML and checks for SPA markers (React, Angular, Vue). A page with under 500 characters of visible text and a `data-reactroot` attribute is flagged as CSR-dependent.
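One way to sketch this heuristic: strip tags from the raw HTML, count what remains, and look for SPA scaffolding. The threshold and marker list here are illustrative assumptions; the skill's `check_rendering.py` may use different values.

```python
# Heuristic CSR detection: little visible text in the raw HTML combined
# with SPA framework markers suggests the content only exists after
# client-side JavaScript runs.
import re

# Illustrative marker list (React, Angular, Vue, Next.js).
SPA_MARKERS = ("data-reactroot", "ng-version", "data-v-app", "__NEXT_DATA__")

def looks_csr_dependent(raw_html: str, min_visible_chars: int = 500) -> bool:
    """Flag pages whose raw HTML carries little text but SPA scaffolding."""
    # Drop script/style bodies, then strip all remaining tags.
    text = re.sub(r"(?s)<(script|style).*?</\1>", " ", raw_html)
    text = re.sub(r"<[^>]+>", " ", text)
    visible = len(" ".join(text.split()))
    has_spa_marker = any(marker in raw_html for marker in SPA_MARKERS)
    return visible < min_visible_chars and has_spa_marker

empty_spa = '<html><body><div id="root" data-reactroot></div></body></html>'
print(looks_csr_dependent(empty_spa))  # True
```

Because it is a heuristic, both conditions must hold: a short page with no SPA markers (a plain landing page) is not flagged, and a text-rich server-rendered React page is not flagged either.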
Hard Blocker Detection
Certain findings zero-out entire dimensions:
- Global `Disallow: /` in robots.txt
- `noindex` on homepage
- CDN returning 403 to all AI bots
These trigger URGENT flags regardless of total score.
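The three blockers can be checked mechanically. The sketch below is assumed logic, not the skill's actual code: the robots.txt parsing is deliberately simplified (one rule per user-agent group), and the `noindex` check is a naive substring match.

```python
# Illustrative hard-blocker detection: any one of these findings
# zeroes out its dimension regardless of other points earned.
def find_hard_blockers(robots_txt: str, homepage_html: str,
                       bot_statuses: dict[str, int]) -> list[str]:
    blockers = []
    # 1. Global "Disallow: /" under a wildcard user-agent group.
    #    (Simplified parsing: agent list resets after each rule line.)
    current_agents, wildcard_blocked = [], False
    for line in robots_txt.splitlines():
        line = line.split("#")[0].strip()
        if line.lower().startswith("user-agent:"):
            current_agents.append(line.split(":", 1)[1].strip())
        elif line.lower().startswith("disallow:"):
            if line.split(":", 1)[1].strip() == "/" and "*" in current_agents:
                wildcard_blocked = True
            current_agents = []
    if wildcard_blocked:
        blockers.append("Global Disallow: / in robots.txt")
    # 2. noindex on the homepage (naive substring check).
    if "noindex" in homepage_html.lower():
        blockers.append("noindex on homepage")
    # 3. CDN/WAF returning 403 to every AI bot identity tested.
    if bot_statuses and all(code == 403 for code in bot_statuses.values()):
        blockers.append("CDN returning 403 to all AI bots")
    return blockers
```

Keeping blocker detection separate from point scoring is what lets a site with an otherwise decent page score still receive an URGENT flag.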
What I Tested
Tested on two Lithuanian e-commerce sites, plus two reference domains for comparison:
| Site | Score | Key Findings |
|---|---|---|
| pigu.lt (home) | 79/100 | WAF blocking all deep pages for AI bots |
| varle.lt (blog) | 80/100 | llms.txt misconfigured as redirect |
| example.com | 38/100 | Baseline — no AI optimization |
| anthropic.com | 52/100 | Good but SSR issues on some pages |
See AI Visibility: E-commerce Audit for detailed findings.
What Works Well
- ✅ Repeatable methodology — Run again in 3 months to measure improvement
- ✅ Evidence-based — Every finding includes exact robots.txt lines, HTTP status codes
- ✅ Catches hidden issues — UA spoofing reveals blocks invisible to other tools
- ✅ Non-technical output — Report written for marketing, not DevOps
- ✅ Prioritized actions — Top 5 fixes sorted by impact/effort ratio
Limitations
- ⚠️ Cannot confirm training data inclusion — No way to know if you’re in GPT’s training set
- ⚠️ Single-language audit — Doesn’t check hreflang or multi-language versions
- ⚠️ SSR detection is heuristic — Not 100% accurate for edge cases
- ⚠️ Share-of-voice is directional — 3-5 queries ≠ comprehensive coverage
- ⚠️ Requires Claude — Not a standalone tool
How to Use
In Claude Desktop / Claude Code
- Install the skill folder to your skills directory
- Trigger with: “Run an AI visibility audit on [domain]”
- Approve workspace creation
- Review and approve share-of-voice queries
- Receive markdown report
Trigger Phrases
The skill activates on:
- “AI visibility audit”
- “GEO audit”
- “AEO assessment”
- “Check if AI can find [domain]”
- “LLM SEO audit”
- “Are we visible to ChatGPT?”
Report Structure
The output report includes:
- Executive Summary — 1-2 paragraph overview for leadership
- Score Breakdown — Table with all 5 dimensions
- Dimension Details — Narrative + collapsible evidence blocks
- Top 5 Actions — Prioritized fixes with impact/effort ratings
- What I Couldn’t Check — Honest limitations
- Re-run Instructions — How to track progress over time
Best Use Cases
- Pre-launch assessment — Check AI visibility before major site changes
- Competitive benchmarking — Compare your score against competitors
- Quarterly reviews — Track improvement over time
- Client deliverables — Professional audit reports for consulting
Key Takeaways
- Produces defensible 0-100 scores across 5 dimensions
- Catches WAF/CDN blocks invisible to standard tools
- Separates automated checks (Python) from interpretation (Claude)
- Output is written for business stakeholders, not developers
- Repeatable — run again to measure improvement
Related
- seo/ai-visibility — The concept this skill measures
- seo/agentic-search-optimization — Optimization tactics
- glossary/geo-aeo — What GEO/AEO means
- glossary/skill — What Claude skills are
- experiments/ai-visibility-ecommerce — Test results from e-commerce audits
Sources
- Internal skill: `/Documents/_tasks/experiment/01-ai-visibility/`
- llms.txt specification
- Model Context Protocol
Last tested: 2026-04-15