AI Tells in Sales Copy — Operator-Grade Audit Checklist
TL;DR: Effective sales copy turns on two disciplines, not one. (1) Don’t sound like AI — the eleven-pattern operator-grade catalog below catches the surface failure modes that operator and CMO prospects detect and bounce on. (2) Model the reader’s motivation before drafting — the structural pain of an iGaming brand owner is not the structural pain of a DTC CMO, and no amount of voice polish recovers from getting the argument-level frame wrong. The audience-mode review beat operationalizes the first; reader-motivation modeling operationalizes the second; both are required.
Two principles for client-facing copy
The catalog and review discipline below are necessary but not sufficient. Sales-page work converges on two distinct principles that catch different categories of failure:
- Don’t sound like AI. Surface failure modes — rhythm, jargon, abstraction, factual overreach — that signal to operators “this writer can’t distinguish operator-language from consultant-language, so probably can’t run the engagement either.” The eleven-pattern catalog below is the field-tested checklist.
- Model the reader’s motivation before drafting. Argument-level structural pain — what does the actual reader at the actual moment of reading actually face every quarter? Different verticals have structurally different pains; the same copy adapted across verticals will fail if the structural pain isn’t re-modeled. This is the proactive discipline, applied at the writing stage rather than the editing stage.
The user articulation that named both principles together (strategist session 2026-05-13):
“The key is the ability not to sound like AI and also look from the perspective of the reader — understand the reader’s motivation.”
The two principles operationalize two distinct review postures:
| Principle | When it activates | What it catches |
|---|---|---|
| Don’t sound like AI | Drafting + editing (especially the audience-mode review pass) | Surface tells: rhythm overdensity, jargon leakage, factual overreach, voice problems |
| Model the reader’s motivation | Before drafting starts | Argument-level mismatches: wrong structural pain, wrong vertical assumption, wrong urgency framing |
Both are non-optional for client-facing copy. The two-principle frame is what keeps the audit cycle moving the believability score; working only one of them plateaus it.
Model the reader before drafting
Who’s actually reading this page? Not the abstract persona (“DTC CMO”, “iGaming brand owner”) but the concrete reader at a specific moment: time-poor, skeptical of cold outreach, scanning for reasons to stop reading. What’s their structural pain — the recurring quarterly reality that shapes how they evaluate any new pitch?
Two examples from 2026-05-13 sales-package work make the principle concrete:
| Audience | Structural pain (the load-bearing reality, not the surface complaint) |
|---|---|
| DTC CMO at $5–50M brand | Rising paid CAC, attribution decay post-iOS 14.5, creative fatigue compressing campaign half-life, funnel-fragility from paid-only acquisition — every dollar of growth depends on Meta/Google not raising prices, and they always do |
| iGaming brand owner | Reach scarcity — paid bans on Meta / Google / Telegram official Ads collapse the addressable surface to a fraction of what DTC operators take for granted. Plus affiliate revshare drag (the largest acquisition channel is also the most expensive), single-account ban risk on 30–90 day cycles, owned-audience absence as a category-level structural deficit |
These differ structurally, not just by vocabulary. A DTC CMO’s pain is unit-economics fragility on a working channel. An iGaming brand owner’s pain is that the channel itself doesn’t exist for them at platform scale. Reframing the iGaming pitch as “improve your DTC-style funnel economics” lands in the wrong universe; reframing it as “build reach where you’ve been structurally locked out” lands in the right one.
The same point applies even within a category. The marketing/telegram-marketing-channel page documents this at the channel-fit layer: the Telegram pitch lands differently for Russian/CIS/Iranian fashion DTC (Telegram is the only commerce surface that hasn’t been banned) vs. Tier-1 US apparel DTC (Telegram is a poor-fit add-on channel for an audience that’s not there). The argument-level frame has to be re-modeled for each.
Reference case: the iGaming reach-scarcity reframe (2026-05-13)
A concrete instance from the audit cycle that produced this page. The first-pass copy for an iGaming TT-content sales page used DTC-style framing — “rented FTDs vs. owned audience”, the kind of paid-traffic-economics argument that resonates with a DTC CMO who already has working paid channels and wants to reduce their dependency on them.
The structural pain was wrong for the vertical. iGaming operators don’t have working paid channels to reduce dependency on — they have reach scarcity because the major paid surfaces (Meta Ads, Google Ads, official Telegram Ads) ban gambling outright. The “rented vs. owned” frame implicitly assumes there’s a rented option that’s currently working; for iGaming, that option largely doesn’t exist.
The fix was argument-level, not voice-level. Re-framing the entire page around “build reach where you’ve been structurally locked out of the major paid surfaces” turned the same content into a different pitch — one that addressed the actual structural reality of the audience.
No amount of audience-mode review would have caught this. Audience-mode catches tells, jargon, factual overreach, rhythm problems. It doesn’t catch “the argument is correctly written for the wrong vertical.” That requires modeling the structural pain before drafting, then writing into the frame that matches.
How to activate proactive reader-motivation modeling
The discipline is small but consequential. Before drafting, write a half-page answering the following (a minimal structured sketch follows the list):
- Who is the concrete reader, at the moment of reading? Job title is not enough. The role’s structural reality this quarter is the unit of analysis.
- What is their structural pain? Not what you can solve — what they actually face whether or not your offering exists. The structural pain is the load-bearing reality the offer has to land into.
- What’s the load-bearing reframe? Given their structural pain, what claim about their situation would make them read the rest of the page? The reframe is the test — if you can’t articulate it in one sentence, the draft is going to fight you the whole way down.
- What pain are you NOT addressing? Naming what’s out of scope keeps the draft from drifting into adjacent-but-wrong pitches. The clarifying-by-exclusion move.
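One way to keep the brief from being skipped is to give it a fixed shape that refuses to proceed with empty fields. A minimal Python sketch, not part of the original audit workflow: the `ReaderModel` name and its fields are hypothetical, just one encoding of the four questions above, with example values drawn from the iGaming case on this page.

```python
from dataclasses import dataclass

@dataclass
class ReaderModel:
    """Pre-draft brief: every field must be filled before drafting starts."""
    concrete_reader: str       # role + moment of reading, not a persona label
    structural_pain: str       # what they face whether or not the offer exists
    load_bearing_reframe: str  # one sentence; if it won't fit in one, re-model
    out_of_scope_pain: str     # the adjacent pain the page deliberately ignores

    def __post_init__(self):
        # Refuse to proceed with an incomplete brief.
        for name, value in vars(self).items():
            if not value.strip():
                raise ValueError(f"pre-draft brief incomplete: {name!r} is empty")

# Example, shaped after the iGaming reach-scarcity case described below:
brief = ReaderModel(
    concrete_reader="iGaming brand owner reading cold outreach between ban cycles",
    structural_pain="reach scarcity: Meta, Google, official Telegram Ads ban gambling",
    load_bearing_reframe="build reach where you've been structurally locked out",
    out_of_scope_pain="DTC-style paid-funnel economics (no working paid channel exists)",
)
```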
Skipping this stage and going straight to drafting is the most common cause of “the prose is fine but the page doesn’t land” outcomes. Voice-polishing a misaligned argument doesn’t realign it.
The eleven-pattern catalog (operator-grade audit checklist)
This is the operationalization of Principle 1 — the editing-pass discipline. Each row names the tell, why it reads AI to operators, and the operator-language fix.
Why this catalog exists
A sales-page audit on 2026-05-13 (an iGaming-DTC LP for cold CMO outreach) needed to move from roughly 6.5/10 CMO-believability to 9/10. The structural argument was solid; the prose kept catching on AI-shaped tells. Each tell that got caught and replaced with an operator-language equivalent moved the believability score meaningfully. The patterns generalize — every future sales-copy audit benefits from running them as a checklist.
CMOs and operators are the audience class most sensitive to these tells, for two reasons: they read a lot of pitch copy (the noise floor for “AI slop” is high in their inbox), and they evaluate the writer’s judgment through the prose (if the writer can’t distinguish operator-language from consultant-language, the writer probably can’t run the engagement either).
The honest framing: AI tells are not just “AI did it” markers. They’re tells the writer hasn’t read the page as the audience would. Some show up in human-only writing too. The reason they’re more common in AI-assisted writing is that AI optimizes for rhythm and parallelism in a way human writers don’t unless they specifically try.
The eleven tells
| # | Tell | Why it reads AI to operators | Operator fix |
|---|---|---|---|
| 1 | Math notation in prose — “cadence × format mismatch”, “pattern × niche testing” | Engineer-thinking shape; operators don’t speak in product-set notation | Plain prose comparison: “the cadence and the format don’t match” |
| 2 | Internal jargon leaking out — “Discovery-Before-Scale operationalized”, “the LP case” | Reader has zero context for an internal framework name or an abstract handle. The shorthand was for the writer, not the reader | Concrete reference: “test in low volume, scale what works”, “the case study on our site” |
| 3 | Abstractions where concrete would land — “€0 incremental to reactivate”, “operator-felt rather than formula-derivable” | Consultant-speak; the reader can’t visualize the thing. Operators read for picture, not for abstraction | Concrete operator-language: “free to message them again”, “you read it in blended CAC and cohort revenue” |
| 4 | Parallel-construction overdensity — 4+ “X; Y” semicolon pairs in one section | AI-template feel; too neat. Operators recognize the rhythm even if they can’t name it | Vary syntactic shape across comparisons (some semicolons, some periods, some em-dashes, some colon+phrase, some flowing-clause) |
| 5 | Adjective stacking — 4+ adjectives in a row, e.g. “fast, native, varied, high-frequency content” | Rhetorical excess; loses punch. Operators discount intensity-stacked claims | Trim to 3 max; drop the one most redundant with context |
| 6 | Coined-term over-use — “funnel-fragile”, “ads evaporate”, “uncapped tail” in one page | Trying-too-hard tone; signals the writer is performing cleverness | One coined term per page, max — earn the line |
| 7 | “X, not Y” pattern repeated 3+ times in close proximity | Rhythm becomes detectable as device rather than thought | Vary or cut some; keep the strongest |
| 8 | Verbal tics — same load-bearing word 5+ times (e.g. “compound” 6× in one page) | Reader notices the repetition; signals AI generation or undisciplined editing | Pick 2 strongest, replace others with synonyms or restructure |
| 9 | Factual overreach in service of rhythm — “Paid CPMs climb every quarter” (Q1 is typically cheaper); “creators ship 5×/day” (actually 1–3×); “ports straight into Meta” (requires re-edits) | Sharp readers catch one wrong claim and discount the entire doc | Honest claims even when less rhythmic. The rhythm hit is worth less than the trust hit |
| 10 | Em-dash split overuse | Em-dashes are a well-recognized AI-output marker in 2026. Operators specifically pattern-match on them | Mix periods, em-dashes, colons, conjunctions across the doc. Em-dashes are powerful when rare |
| 11 | “Strategist-memo voice” — meta-commentary about the argument instead of stating substance (“To understand why this matters, we should first establish…”) | Reader feels lectured-at rather than sold-to. Operators want the substance, not the architecture | State substance directly; never narrate the structure of the substance. Cut every sentence that talks about what the next paragraph will do |
The catalog is field-tested but not exhaustive. The audit that produced it caught these eleven; longer audits would surface others. Treat it as a starting checklist, not a closed list.
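Several of the tells are mechanically countable before any human pass. A minimal Python sketch under that assumption: it covers only the countable tells (#4 semicolon pairs, #7 “X, not Y” repetition, #8 verbal tics, #10 em-dash density), with thresholds mirroring the table above. The threshold values and stopword list here are illustrative, not from the audit.

```python
import re
import sys
from collections import Counter

# Thresholds mirror the catalog above; tune per audience. Stopwords are illustrative.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "that", "for", "is", "it", "with"}

def lint_tells(text: str) -> list[str]:
    """Flag the mechanically countable tells (#4, #7, #8, #10)."""
    flags = []
    words = re.findall(r"[a-zA-Z']+", text.lower())

    # #10: em-dash (U+2014) density above roughly 1 per 100 words reads as device.
    dashes = text.count("\u2014")
    if words and dashes / len(words) > 0.01:
        flags.append(f"#10 em-dash overuse: {dashes} in {len(words)} words")

    # #4: 4+ "X; Y" semicolon pairs in one piece of text.
    semicolon_pairs = len(re.findall(r"\w+;\s+\w+", text))
    if semicolon_pairs >= 4:
        flags.append(f"#4 semicolon-pair overdensity: {semicolon_pairs} pairs")

    # #7: "X, not Y" pattern repeated 3+ times.
    x_not_y = len(re.findall(r",\s+not\s+\w+", text, re.IGNORECASE))
    if x_not_y >= 3:
        flags.append(f"#7 'X, not Y' repetition: {x_not_y} instances")

    # #8: the same load-bearing word 5+ times.
    counts = Counter(w for w in words if len(w) > 4 and w not in STOPWORDS)
    for word, n in counts.most_common(5):
        if n >= 5:
            flags.append(f"#8 verbal tic: '{word}' appears {n} times")

    return flags

if __name__ == "__main__":
    for flag in lint_tells(sys.stdin.read()):
        print(flag)
```

Treat the output as a pre-pass that queues candidates for the audience-mode read, not as a gate: a clean lint run says nothing about the judgment-dependent tells (#1, #2, #3, #9, #11).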
The read-as-audience review beat (operationalizing Principle 1 at the editing layer)
The single most consequential editing discipline for client-facing copy is re-reading the draft as the target audience would read it, not as the writer who just wrote it. This is the review-stage companion to proactive reader-motivation modeling — the proactive discipline gets the argument frame right; the review discipline catches the surface tells that would otherwise undercut a correctly-framed argument.
The two modes catch different things:
| Mode | What it catches | What it misses |
|---|---|---|
| Writer-mode | Argument flaws, missing claims, weak proof, logic gaps, structural problems | Tells, jargon, factual overreach, trust erosion, comprehension friction, what would get skipped |
| Audience-mode | Tells, jargon, factual overreach, trust erosion, comprehension friction, what would get skipped | Argument flaws, missing claims, weak proof, logic gaps, structural problems |
Both passes are required. Writer-mode validates that the substance holds together. Audience-mode validates that the reading experience delivers that substance to the actual target audience.
How to activate audience-mode
Name the audience in concrete detail before re-reading. “Imagine yourself as a busy e-com CMO at a $5–50M DTC brand reading this from a cold email.” The specificity matters — a generic “potential customer” doesn’t activate the heuristics a specific persona does.
Then read the draft cold, in one pass, the way the prospect would. The questions to hold:
- Where does the eye skip?
- Where would the prospect close the tab?
- What sentence feels like a setup rather than information?
- What claim, if false, would discredit the whole doc?
- Which paragraph is the writer talking to themselves?
Audience-mode is uncomfortable because it forces the writer to read their own work as if it might be bad. That discomfort is the signal the pass is working.
The CMO-believability score
A useful companion heuristic: after each audience-mode pass, give the doc a rough 1–10 score on how a target prospect would rate its believability. This is self-pressure, not measurement — the number is a forcing function for honesty, not a metric.
- ≤5/10 — Don’t send. Audit is producing real signal; the doc has multiple unresolved tells or a fundamental positioning problem.
- 6–7/10 — Argument is solid; the prose is in the way. Most audit cycles live here. Each tell caught and fixed moves the score 0.5–1 point.
- 8–9/10 — Sendable. The remaining gap is usually structural (one weak proof point, one positioning beat that doesn’t land) rather than prose-level.
- 10/10 — Suspect the score. No live cold-outreach piece ever reads as 10/10 to its writer; if it does, audience-mode hasn’t activated.
The score is most useful as a delta tool: track it before and after each editing pass. A pass that doesn’t move the score is a pass that worked at the wrong layer.
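The delta discipline is small enough to track in a few lines. A hypothetical sketch: the `EditPass` record, the pass labels, and the scores are illustrative, loosely shaped after the 6.5-to-9.0 trajectory of the 2026-05-13 audit rather than taken from it.

```python
from dataclasses import dataclass

@dataclass
class EditPass:
    label: str    # what the pass targeted, e.g. "tell sweep" or "proof-point rework"
    score: float  # CMO-believability after the pass, rough 1-10 self-assessment

def review_deltas(baseline: float, passes: list[EditPass]) -> None:
    """Print per-pass deltas; a flat delta flags a pass that worked at the wrong layer."""
    prev = baseline
    for p in passes:
        delta = p.score - prev
        note = "" if delta > 0 else "  <- no movement: wrong layer?"
        print(f"{p.label}: {prev:.1f} -> {p.score:.1f} ({delta:+.1f}){note}")
        prev = p.score

# Illustrative numbers only:
review_deltas(6.5, [
    EditPass("argument-frame recheck", 7.0),
    EditPass("tell sweep (catalog pass)", 8.0),
    EditPass("rhythm-only polish", 8.0),  # flat: should have worked at another layer
    EditPass("proof-point rework", 9.0),
])
```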
When this catalog applies (and when it doesn’t)
Applies: sales-page copy, landing pages, cold outreach, pitch decks, proposals — anything where the reader is a prospect evaluating the writer’s judgment along with the offer.
Doesn’t apply (or applies less): internal documentation, technical reference material, post-engagement client communication, peer-to-peer industry writing. The audience changes; the tells that bounce an operator don’t bounce another writer or an internal stakeholder.
Edge case — long-form thought leadership (essays, posts, articles): the catalog mostly applies, with one exception. Em-dash overuse (#10) is more forgivable in long-form because the rhythm cost compounds less when reading time is longer.
Honest limits
- The catalog is from a single audit cycle. It’s the patterns this audit surfaced, not a literature review of every AI-tell pattern. Other audits will turn up others.
- The CMO-believability score is self-assessment. It correlates with audience reception in practice but is not measured against actual response rates. Treat it as forcing function, not metric.
- Some tells are stylistic preferences, not universal defects. Em-dash overuse reads as AI to one operator and reads as Cormac McCarthy to another. The catalog tracks what the target audience class flags, not what’s objectively bad writing.
- The catalog is a moving target. Models keep getting better at avoiding the most obvious tells; the catalog will need re-testing on a rolling basis. The tells of mid-2026 are not the tells of 2025; they probably won’t be the tells of 2027 either.
- The fix is craft, not avoidance. “Avoid all em-dashes” produces stilted prose. The fix is variety — mix em-dashes with periods, colons, conjunctions, so no rhythm element dominates.
Connection to wiki frameworks
- glossary/honest-assessment — same family of “what makes content trustworthy to AI engines and human readers” thinking, flipped. Honest assessment is a positive trust signal (acknowledging weaknesses builds credibility); AI tells are negative trust signals (rhythm and abstraction excess erode credibility). Both depend on the same underlying mechanism: the audience reads the writer’s judgment through the prose.
- marketing/brand-voice-skills-guide — the LLM-instruction side of the same problem. The brand-voice skill defines what to sound like (in the SKILL.md and example library); this page defines what not to sound like (in the editing pass). The two are complementary disciplines — instructions plus audits.
- marketing/discovery-before-scale — the operational analog. Don’t scale un-validated patterns; don’t ship un-audited copy. Both are validation-before-volume disciplines applied to different layers of marketing work.
- glossary/recognition-primed-decision — explains why operators pattern-match on tells so quickly. Klein-Kahneman: pattern-matching judgment is reliable in high-validity environments with rapid feedback. Operators read pitch copy daily; their feedback loop on “good copy vs. AI slop” is fast and validity-rich. The tells trigger their pattern recognition before any conscious analysis happens.
Related
- glossary/honest-assessment — Positive trust signal counterpart
- marketing/brand-voice-skills-guide — The LLM-instruction-side discipline
- marketing/discovery-before-scale — Validation-before-volume at the operations layer
- glossary/recognition-primed-decision — Why operators detect tells in seconds rather than minutes
- glossary/vibe-coding — The code-side analog: vibe-coded output that works on the happy path can fail on edge cases the writer didn’t read for. Same posture problem, different artifact.
- strategist-pattern — The CMO-believability score is a strategist-pattern self-pressure heuristic at the artifact-review layer
- marketing/ai-human-voice-prompting — Sister page on the prompting/generation side. That page covers six techniques for producing human voice; this page covers the eleven-pattern audit catalog and the two-principle frame for editing and reviewing it. Generation-side and editing-side are complementary disciplines; both are required.
Key Takeaways
- Sales-page work converges on two principles, not one: (1) don’t sound like AI, and (2) model the reader’s motivation before drafting. The two catch different categories of failure and both are non-optional.
- Principle 2 catches what Principle 1 cannot. Argument-level mismatches (wrong structural pain for the vertical) survive any amount of voice polish. The iGaming reach-scarcity reframe case is the canonical example: DTC-style “rented vs. owned” framing was correctly written for the wrong vertical, and no audience-mode review pass would have caught it.
- The eleven-pattern catalog is the field-tested checklist for Principle 1. The two highest-leverage tells (by impact in the audit cycle that produced this list) are factual overreach in service of rhythm (#9) — sharp readers discount the whole doc on one false claim — and strategist-memo voice (#11) — meta-commentary about the argument feels like the writer talking to themselves rather than selling to the reader.
- Audience-mode review is non-optional for client-facing copy. Writer-mode catches argument flaws; audience-mode catches tells. Both passes are required because they catch different categories of failure.
- Name the audience in concrete detail before audience-mode activates. “Busy e-com CMO at a $5–50M DTC brand reading from cold email” works; “potential customer” doesn’t.
- The CMO-believability score (rough 1–10 self-assessment) is a useful forcing function for honesty, especially as a delta tool — track it before and after each editing pass. A pass that doesn’t move the score worked at the wrong layer.
- AI tells are not just “AI did it” markers; they’re tells the writer hasn’t read the page as the audience would. The fix is the audience-mode discipline plus the proactive reader-motivation modeling — not “avoid AI.”
Sources
- Field audit, 2026-05-13: `primores/2026-05-07-package-igaming-dtc-offer/tt-content-dtc.html` — sales page evolved over the day from ~6.5/10 to 9/10 CMO-believability. Each tell in the catalog above corresponds to specific before/after edits in the git history of that page.
- The iGaming reach-scarcity reframe (same audit cycle) is the reference case for Principle 2 — the first-pass copy used a DTC-style “rented vs. owned” frame; the corrected frame surfaced reach scarcity as the actual structural pain for iGaming brand owners (paid bans on Meta / Google / Telegram official Ads). The fix was argument-level, not voice-level.
- Strategist session 2026-05-13 — `session-signals.md` entry. The catalog was produced during the page-by-page review beat; the audience-mode methodology and the two-principle frame were filed the same day to the strategist `voice.md` (“Writing client-facing copy — two principles” section) as a generalizable discipline.
- User articulation (verbatim, strategist session 2026-05-13 close): “The key is the ability not to sound like AI and also look from the perspective of the reader — understand the reader’s motivation.” This is the trigger that named both principles together rather than treating one as a sub-discipline of the other.
- glossary/recognition-primed-decision — Klein-Kahneman theoretical foundation for why the audience pattern-matches on tells faster than conscious analysis.