
AI Visibility Audit — Claude Skill for GEO Assessment


TL;DR: A Claude skill that produces a defensible, scored audit (0-100 points) evaluating how well a domain is exposed to AI crawlers and generative search engines. Answers: “Can AI engines find us, parse us, and cite us?”

What It Does

This skill turns the emerging discipline of GEO/AEO (Generative Engine Optimization) into a repeatable audit methodology. It:

  1. Checks crawlability — robots.txt, llms.txt, sitemaps, live UA-spoofed fetches
  2. Assesses rendering — SSR vs CSR detection, visible text in raw HTML
  3. Evaluates on-page signals — Meta tags, Open Graph, JSON-LD, answer-first content
  4. Tests share-of-voice — Live queries to see if you appear in AI answers
  5. Measures authority — Wikipedia mentions, press coverage

The output is a markdown report written for marketing/brand owners, not developers.
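The crawlability step can be pictured as a small robots.txt parser. This is a minimal sketch, not the skill's actual `check_crawlers.py`: it handles only root-level `Disallow` rules, ignores `Allow` directives, and uses a four-bot list standing in for the skill's ~100-entry reference.

```python
# Minimal sketch: which known AI bots does a robots.txt body disallow
# from the site root? Illustrative bot list, simplified grammar.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_bots(robots_txt: str, bots=tuple(AI_BOTS)) -> list:
    blocked = set()
    agents = []          # user-agents named in the current record
    in_rules = False     # have we seen a rule line for this record yet?
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line:
            agents, in_rules = [], False   # blank line ends the record
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if in_rules:                   # a new record starts here
                agents, in_rules = [], False
            agents.append(value)
        elif field == "disallow":
            in_rules = True
            if value == "/":               # root-level block
                blocked.update(b for b in bots if b in agents or "*" in agents)
    return sorted(blocked)
```

A wildcard `User-agent: *` group with `Disallow: /` blocks every bot on the list; a bot-specific group blocks only that bot.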

Who It’s For

  • Marketing leaders wanting to know if they’re visible in AI search
  • SEO professionals expanding into GEO/AEO
  • E-commerce teams preparing for agentic commerce
  • Consultants offering AI visibility assessments

The Scoring System

| Dimension | Points | What It Measures |
| --- | --- | --- |
| Crawlability | 25 | Can AI bots access your content? |
| Rendering | 25 | Is content visible without JavaScript? |
| On-page Signals | 20 | Schema, meta tags, answer-first structure |
| Share-of-Voice | 20 | Do you appear in live AI answers? |
| Authority | 10 | Wikipedia, press, domain trust signals |

Total: 100 points
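The weighting above reduces to a simple weighted sum. A sketch, with dimension keys and the 0.0-1.0 "fraction earned" input being my assumptions about the rubric's shape, not the skill's actual data model:

```python
# Dimension weights from the scoring table; keys are illustrative.
WEIGHTS = {
    "crawlability": 25,
    "rendering": 25,
    "onpage_signals": 20,
    "share_of_voice": 20,
    "authority": 10,
}

def total_score(earned: dict) -> int:
    """earned maps each dimension to the fraction of its points won (0.0-1.0)."""
    missing = set(WEIGHTS) - set(earned)
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return round(sum(WEIGHTS[d] * earned[d] for d in WEIGHTS))
```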

Score Interpretation

| Score | Verdict |
| --- | --- |
| 80-100 | Excellent — Well-optimized for AI visibility |
| 60-79 | Good — Minor gaps to address |
| 40-59 | Fair — Significant optimization needed |
| 20-39 | Poor — Major blockers present |
| 0-19 | Critical — Likely invisible to AI |
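These score bands map to verdicts via a first-match lookup; a small sketch:

```python
# Score bands from the interpretation table, highest floor first.
BANDS = [
    (80, "Excellent"),
    (60, "Good"),
    (40, "Fair"),
    (20, "Poor"),
    (0, "Critical"),
]

def verdict(score: int) -> str:
    for floor, label in BANDS:
        if score >= floor:
            return label
    raise ValueError("score must be in 0-100")
```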

How It Works

Technical Architecture

┌─────────────────────────────────────────────────────────┐
│ Claude Skill │
├─────────────────────────────────────────────────────────┤
│ SKILL.md (workflow orchestration) │
│ │ │
│ ├── scripts/audit.py (orchestrator) │
│ │ ├── check_crawlers.py │
│ │ ├── check_rendering.py │
│ │ ├── check_onpage.py │
│ │ └── generate_prompts.py │
│ │ │
│ ├── references/ (knowledge base) │
│ │ ├── ai_crawlers.md (~100 bot UAs) │
│ │ ├── scoring_rubric.md (point values) │
│ │ ├── llms_txt_spec.md │
│ │ └── remediation_playbook.md │
│ │ │
│ └── report_template.md │
└─────────────────────────────────────────────────────────┘

The 5-Step Workflow

  1. Workspace Setup — Creates structured directories for raw data and parsed results
  2. Technical Audit — Python scripts fetch and analyze robots.txt, HTML, meta tags
  3. Interpretation — Claude reads JSON findings + scoring rubric, computes scores
  4. Live Share-of-Voice — User-approved queries run via WebSearch
  5. Final Report — Markdown report with evidence, scores, and prioritized fixes
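The steps above could be wired together roughly like this. A hypothetical skeleton only: the `run_audit` name, directory layout, and report stub are my stand-ins, and the real `check_*` scripts are not reproduced here.

```python
import json
import tempfile
from pathlib import Path

def run_audit(domain: str, workdir: str = "") -> Path:
    """Hypothetical orchestrator skeleton for steps 1, 2, and 5."""
    root = Path(workdir or tempfile.mkdtemp()) / domain
    (root / "raw").mkdir(parents=True, exist_ok=True)     # step 1: workspace
    (root / "parsed").mkdir(exist_ok=True)
    findings = {"domain": domain, "checks": []}           # step 2 would fill this
    (root / "parsed" / "findings.json").write_text(json.dumps(findings, indent=2))
    report = f"# AI Visibility Audit: {domain}\n"         # step 5: report stub
    (root / "report.md").write_text(report)
    return root
```

Steps 3 and 4 are intentionally absent: interpretation and live share-of-voice queries happen in Claude, not in Python.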

Key Technical Innovations

Live User-Agent Spoofing

The skill fetches the same URL with different bot identities:

  • GPTBot (OpenAI)
  • ClaudeBot (Anthropic)
  • PerplexityBot
  • Googlebot

This catches CDN/WAF rules that silently block AI crawlers despite a permissive robots.txt.
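A minimal version of the spoofed-fetch check, using only the standard library. The User-Agent strings are illustrative tokens, not the vendors' exact current strings; check each vendor's documentation before relying on them.

```python
import urllib.error
import urllib.request

# Illustrative bot tokens only; real UA strings differ per vendor.
BOT_UAS = {
    "GPTBot": "GPTBot/1.0 (+https://openai.com/gptbot)",
    "ClaudeBot": "ClaudeBot/1.0 (+claudebot@anthropic.com)",
    "PerplexityBot": "PerplexityBot/1.0 (+https://perplexity.ai/perplexitybot)",
    "Googlebot": "Googlebot/2.1 (+http://www.google.com/bot.html)",
}

def status_per_bot(url: str, timeout: float = 10.0) -> dict:
    """Fetch url once per bot identity; record HTTP status, or None on network error."""
    results = {}
    for bot, ua in BOT_UAS.items():
        req = urllib.request.Request(url, headers={"User-Agent": ua})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[bot] = resp.status
        except urllib.error.HTTPError as err:   # e.g. a 403 from a WAF lands here
            results[bot] = err.code
        except urllib.error.URLError:
            results[bot] = None
    return results

def divergent(results: dict) -> bool:
    """True when bots receive different responses: a sign of UA-based blocking."""
    return len(set(results.values())) > 1
```

If `divergent(status_per_bot(url))` is true, something in front of the site treats bot identities differently, which is exactly the class of block robots.txt never reveals.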

SSR/CSR Detection

Measures visible text characters in raw HTML and checks for SPA markers (React, Angular, Vue). A page with 500 characters of visible text and a `data-reactroot` attribute is flagged as CSR-dependent.
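The heuristic can be sketched in a few lines: strip scripts, styles, and tags, count what remains, and look for framework markers. The threshold and marker list here are illustrative, not the skill's exact values.

```python
import re

# Illustrative SPA root markers for React, Angular, and Vue 3.
SPA_MARKERS = ("data-reactroot", "ng-version", "data-v-app")

def looks_csr_dependent(html: str, min_visible_chars: int = 1500) -> bool:
    """Heuristic: little visible text plus a SPA marker suggests client-side rendering."""
    text = re.sub(r"(?is)<(script|style)[^>]*>.*?</\1>", " ", html)  # drop inline code
    text = re.sub(r"(?s)<[^>]+>", " ", text)                          # drop remaining tags
    visible = sum(len(word) for word in text.split())
    return visible < min_visible_chars and any(m in html for m in SPA_MARKERS)
```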

Hard Blocker Detection

Certain findings zero-out entire dimensions:

  • A global `Disallow: /` in robots.txt
  • `noindex` on the homepage
  • A CDN returning 403 to all AI bots

These trigger URGENT flags regardless of total score.
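A crude sketch of the three zero-out checks. The real skill works from parsed JSON findings rather than raw strings, and this robots.txt scan is deliberately simplistic:

```python
import re

def hard_blockers(robots_txt: str, homepage_html: str, bot_statuses: dict) -> list:
    """Return URGENT flags for the three hard blockers; inputs are raw strings here."""
    flags = []
    group = []
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()
        low = line.lower()
        if low.startswith("user-agent:"):
            group.append(line.split(":", 1)[1].strip())
        elif low.startswith("disallow:"):
            if line.split(":", 1)[1].strip() == "/" and "*" in group:
                flags.append("URGENT: robots.txt disallows '/' for all user agents")
        elif not line:
            group = []
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', homepage_html, re.I):
        flags.append("URGENT: noindex meta tag on homepage")
    if bot_statuses and all(code == 403 for code in bot_statuses.values()):
        flags.append("URGENT: every AI bot tested received HTTP 403")
    return flags
```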

What I Tested

Tested on Lithuanian e-commerce sites, alongside two baseline domains:

| Site | Score | Key Findings |
| --- | --- | --- |
| pigu.lt (home) | 79/100 | WAF blocking all deep pages for AI bots |
| varle.lt (blog) | 80/100 | `llms.txt` misconfigured as a redirect |
| example.com | 38/100 | Baseline — no AI optimization |
| anthropic.com | 52/100 | Good, but SSR issues on some pages |

See AI Visibility: E-commerce Audit for detailed findings.

What Works Well

  • Repeatable methodology — Run again in 3 months to measure improvement
  • Evidence-based — Every finding includes exact robots.txt lines, HTTP status codes
  • Catches hidden issues — UA spoofing reveals blocks invisible to other tools
  • Non-technical output — Report written for marketing, not DevOps
  • Prioritized actions — Top 5 fixes sorted by impact/effort ratio

Limitations

  • ⚠️ Cannot confirm training data inclusion — No way to know if you’re in GPT’s training set
  • ⚠️ Single-language audit — Doesn’t check hreflang or multi-language versions
  • ⚠️ SSR detection is heuristic — Not 100% accurate for edge cases
  • ⚠️ Share-of-voice is directional — 3-5 queries ≠ comprehensive coverage
  • ⚠️ Requires Claude — Not a standalone tool

How to Use

In Claude Desktop / Claude Code

  1. Install the skill folder to your skills directory
  2. Trigger with: “Run an AI visibility audit on [domain]”
  3. Approve workspace creation
  4. Review and approve share-of-voice queries
  5. Receive markdown report

Trigger Phrases

The skill activates on:

  • “AI visibility audit”
  • “GEO audit”
  • “AEO assessment”
  • “Check if AI can find [domain]”
  • “LLM SEO audit”
  • “Are we visible to ChatGPT?”

Report Structure

The output report includes:

  1. Executive Summary — 1-2 paragraph overview for leadership
  2. Score Breakdown — Table with all 5 dimensions
  3. Dimension Details — Narrative + collapsible evidence blocks
  4. Top 5 Actions — Prioritized fixes with impact/effort ratings
  5. What I Couldn’t Check — Honest limitations
  6. Re-run Instructions — How to track progress over time

Best Use Cases

  1. Pre-launch assessment — Check AI visibility before major site changes
  2. Competitive benchmarking — Compare your score against competitors
  3. Quarterly reviews — Track improvement over time
  4. Client deliverables — Professional audit reports for consulting

Key Takeaways

  • Produces defensible 0-100 scores across 5 dimensions
  • Catches WAF/CDN blocks invisible to standard tools
  • Separates automated checks (Python) from interpretation (Claude)
  • Output is written for business stakeholders, not developers
  • Repeatable — run again to measure improvement

Last tested: 2026-04-15