AI Human Voice for Social Posts and Outreach — Six Techniques + Platform Tactics
TL;DR: In 2026, every major social platform algorithmically demotes content that reads as AI-generated, and cold-email deliverability has bifurcated (AI: 71% inbox / 8% spam-flag; human: 86% inbox / 3% spam-flag). Voice is now a reach constraint, not a taste preference. Six prompting techniques produce human voice when applied together: (1) a voice-profile document grounded in what you reject, (2) 2–5 high-quality few-shot examples, (3) a 2026-consensus banned-word list, (4) lived-experience anchors (the most powerful, least-prompted technique), (5) voice-from-audio dictation (SuperWhisper-style; the voice is yours by construction), and (6) audience-mode review (per marketing/ai-tells-in-sales-copy). Every successful implementation at scale follows the 80–95 / 5–20 hybrid ratio: AI drafts 80–95% of the volume, humans edit the remaining 5–20%. There are zero documented case studies of 100% AI-unreviewed content working at scale.
Why this matters more in 2026 than it did in 2024
The 2026 reality is qualitatively different from earlier “AI content is bad” handwringing. Platforms have moved from passive disapproval to active algorithmic demotion:
LinkedIn — 360Brew is a 150-billion-parameter LLM that became the production ranking model on March 12, 2026 (Hristo Danchev’s announcement on the LinkedIn Engineering Blog). It reads every post semantically. It identifies AI-text patterns directly — uniform sentence rhythm, predictable transitions, vocabulary characteristic of LLM outputs — and reduces distribution accordingly. Posts that pulled thousands of impressions in 2024 now pull “a few hundred” if they trigger the AI-pattern detection. Saves outweigh likes as ranking signals: 50 saves beats 500 likes for algorithmic push. Comments outweigh likes. The algorithm rewards topic consistency (2–3 core professional topics stuck with over time) and what LinkedIn engineers call a “contextual content ecosystem” — published content connected to overall profile presence.
X / Twitter — Grok-powered ranking reads every post and watches every video. AI-generated text reliably produces likes but fails to produce replies or bookmarks (the higher-weighted signals). X is testing pre-share AI alerts. The EU AI Act becomes fully applicable August 2, 2026 — AI-generated content must be labeled in a machine-readable way “where technically feasible.” Text-only posts outperform video by ~30% on X (the only major platform where text wins), so written voice carries more weight here than elsewhere.
TikTok — C2PA Content Credentials detect synthetic media automatically; the platform has labeled 1.3 billion AI-generated videos. The good news for writers: AI-assisted script writing, hooks, and captions are explicitly exempt from labeling requirements. Only synthetic faces, voices, and realistic-scene visuals require labels. High-quality AI-scripted content with completion rates above 65% performs within 5–8% of human-equivalent reach; low-quality AI content underperforms by 30–45%. The Creator Rewards Program excludes AI-dominated content from monetization.
Cold email — deliverability is the silent killer. AI-written emails get spam-flagged at 8% vs 3% for human and achieve 71% inbox placement vs 86% for human (Prospectory 2026, n=10,000 emails; Instantly Cold Email Benchmark 2026). Reply rate gap: 4.1% AI vs 5.2% human, closing from a 2.0pp gap in 2024 to 1.1pp in 2026 — but the deliverability gap is structural and compounds across sequences. AI emails average 1.6 exchanges before booking a meeting vs 3.7 for human-written — AI fails in the follow-up, not the opener.
The takeaway is operational: writing in human voice is now a distribution problem, not a taste problem. The same AI text that passed internal aesthetics review in 2024 now loses ~70% of its reach on LinkedIn, gets spam-flagged 2.7× as often in cold email, and triggers warning labels on X. The cost of “good-enough AI” went up because the platforms changed.
The detection science — what actually gives AI away
Stylometry research (Nature 2025, plus the 2025–2026 wave of LLM-detection papers) converges on a small set of high-signal markers. Combined feature sets reach F1 ≈ 0.94 on human-vs-AI text classification. The top distinguishers:
Lexical and structural markers (the academic anchors):
| Marker | What AI does | What humans do |
|---|---|---|
| Lexical diversity | Lower — AI uses statistically “safe” words | Higher — uncommon phrasings, high perplexity |
| Syntactic structure | Common patterns dominate | High burstiness (variance) — short sentences next to longer ones |
| Sentence length | Tighter distribution around the mean | Wider distribution; outliers in both directions |
| Function-word bigrams | Recognizable signature shared across LLMs | Idiosyncratic across humans |
| Transitions | Rigid, formulaic (“In fact”, “Moreover”, “Furthermore”) | Varied or absent |
| Hedging | Frequent and patterned | Sparse or context-driven |
| Lived experience | Absent — abstract, generalizable | Present — named places, dated incidents, specific people |
The single most important entry is the last one — absent lived experience. AI text reliably gravitates toward abstraction; human writing references concrete particulars. This is the load-bearing distinction, and it’s also the most leverageable in prompting.
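These markers are checkable on your own drafts. Below is a minimal Python sketch of the sentence-length rows of the table (the sentence splitter and the burstiness proxy are rough heuristics, not the published feature set):

```python
import re
import statistics

def sentence_stats(text):
    """Report sentence-length spread for a draft.

    High spread relative to the mean is the 'burstiness' the table
    associates with human writing; a tight distribution is the
    AI-typical signature. Splitting on terminal punctuation is a
    rough heuristic, not a full sentence tokenizer.
    """
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    mean = statistics.mean(lengths)
    spread = statistics.pstdev(lengths)
    return {"sentences": len(lengths), "mean_len": mean,
            "spread": spread, "burstiness": spread / mean if mean else 0.0}

uniform = "We grow fast. We move fast. We ship fast. We learn fast."
varied = ("Shipped it. Then we spent three weeks untangling what the "
          "migration had quietly broken in the billing pipeline.")
```

Run it on a draft and compare: the uniform sample scores zero spread, the varied one scores high burstiness. The numbers are directional, not a verdict.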
The em-dash signal: GPT-4.1 produces em dashes at 3.28× the human rate in standard prose. McGill University’s Office for Science and Society documented the pattern; multiple practitioner sources have confirmed it. Em-dash density is one of the most reliable single-marker signals; even untrained human readers register it unconsciously. 2026 best practice: maximum one em dash per response. In social posts and cold outreach, zero is better.
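The one-per-response rule is trivially automatable as a pre-publish check. A minimal sketch; counting the `--` stand-in is an assumption about how drafts encode dashes:

```python
def em_dash_check(text, limit=1):
    """Count em dashes (U+2014) plus the '--' stand-in; flag overuse.

    limit=1 matches the guidance above; pass limit=0 for tweets
    and email subject lines.
    """
    count = text.count("\u2014") + text.count("--")
    return {"em_dashes": count, "ok": count <= limit}

draft = "Voice is a reach constraint \u2014 not a taste preference \u2014 in 2026."
```

Against the sample draft, the default limit fails (two em dashes) and only `limit=2` passes.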
Detection accuracy limits: stylometric detection runs 80–95% accurate when authors stay in a single tone/genre and produce enough material. Accuracy drops significantly when authors change tone, change genre, or use AI assistance. This is the leverage point — voice-edited AI text that mixes human and AI patterns is much harder to detect than raw AI output. The detectors don’t have a clean signal to lock onto. This is why the 80/20 hybrid ratio works: it intentionally produces mixed signal.
The six techniques that produce human voice
1. The voice-profile document (.md file)
A persistent system-instruction document that loads before every session. The non-obvious finding from the 2026 practitioner literature: most of a good voice profile is about what you REJECT, not what you like.
The asymmetry is structural. AI knows hundreds of “professional, conversational, casual, warm” voices by default. Telling it “I like conversational writing” hits a generic average of all conversational voices it’s seen. Telling it “I never use semicolons because they sound like a college essay” gives it a specific constraint that narrows the output space to the part you actually inhabit.
The voice profile should cover:
- Professional context — what you do, who you serve, what you’re known for
- Audience — concretely named (not “potential customers” but “founder at a $5–50M Series-B SaaS reading on the train”)
- Banned words and phrases — the consensus 2026 list below, plus your idiosyncratic bans
- Structural rules — “no bullet-point lists in posts under 200 words”, “no questions in the first sentence”, “every paragraph needs a specific named anchor”
- 3–5 actual writing samples — your best work in this channel
- Anti-examples — pieces of your writing that missed the voice, and why
Ruben Hassid’s “Taste Interviewer” prompt (May 2026 release) operationalizes this — a 100-question structured interview across four categories: Beliefs & Contrarian Takes, Writing Mechanics, Aesthetic Crimes, Voice & Personality. Output: a single .md file that captures the writer’s DNA in a form any AI can load. The deeper insight from his work: prompts can’t capture what you reject, can’t be transferred between AIs, can’t be shared with a team, can’t persist across chats. A document can.
2. Few-shot examples — two to five, quality over quantity
The academic literature converges: strong accuracy gains from one to two examples; diminishing returns beyond four to five (IBM, PromptHub, mem0 — multiple 2026 sources). For brand voice specifically, the practitioner consensus is 2–3 high-quality samples of exactly what you want — pasted before the request, not described.
Two examples of your actual writing tell the model more about your voice than any description can. The model picks up: cadence, sentence-length pattern, vocabulary distribution, structural moves, paragraph weighting. Token costs scale linearly with examples; accuracy plateaus. So:
- Right approach: 2 perfect samples that exemplify the voice in this channel
- Wrong approach: 7 mediocre samples mixing channels and quality levels
The samples should match the channel you’re generating for. LinkedIn posts for LinkedIn tasks. Cold emails for cold email tasks. Not generic prose. The model picks up channel conventions from the samples, not from instructions.
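The assembly step (paste the samples, then the request) can be scripted so the sample count and ordering stay disciplined. A sketch; the `### Example` delimiters are illustrative, not a required format:

```python
def build_few_shot_prompt(task, samples):
    """Assemble a channel-matched few-shot prompt.

    Enforces the 2-3 sample guidance above and pastes the samples
    before the request, not described in prose.
    """
    if not 2 <= len(samples) <= 3:
        raise ValueError("use 2-3 high-quality, channel-matched samples")
    blocks = [f"### Example {i}\n{s.strip()}"
              for i, s in enumerate(samples, 1)]
    return "\n\n".join(blocks) + f"\n\n### Task\n{task.strip()}"
```

The hard failure on sample count is deliberate: one sample under-specifies the voice, and more than three adds token cost without accuracy.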
3. The banned-word list (the 2026 consensus)
Across the practitioner literature (Will Francis, Jodie Cook, Ruben Hassid, Conor Bronsdon’s avoid-ai-writing skill), the same patterns surface. The current consensus list:
Maximum frequency rules:
- Em dashes: maximum one per response (zero in tweets and email subject lines)
- The word just (overused as an apology softener)
- The word really (intensity without specificity)
Generic transitions — never use:
- “In fact,” “Indeed,” “Absolutely,” “Clearly”
- “First and foremost,” “As a result,” “Therefore”
- “For example,” “In other words,” “Notably,” “Importantly”
Summary clichés — never use:
- “In summary,” “To sum up,” “In conclusion”
- “All in all,” “At the end of the day”
- “It’s worth noting that…”
Structural patterns — never use:
- “It’s not just X — it’s Y”
- “Not only X, but Y”
- “No X. No Y. Just Z.”
- “Bold term: explanation sentence” bullet lists (the single most recognizable AI pattern, per multiple sources)
False-directness markers — never use:
- “Honestly?”, “Here’s the breakdown”
- “Here’s the part most people miss,” “Let me be direct”
- “The truth is…”
AI-vocabulary words — replace or cut: landscape, robust, seamless, paradigm, streamline, empower, foster, utilize, ascertain, endeavor, vibrant, nestled, thriving, delve, dive into, navigate, harness, leverage.
If a writer adopts only one item from this page, ban the “Bold term: explanation sentence” list format. It’s the single most identifiable AI pattern at the algorithmic level, and removing it changes the readability of an AI-drafted page more than any other single edit.
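Both the vocabulary bans and the bold-term format lend themselves to a pre-publish lint pass. A sketch using a subset of the list above; the regex assumes markdown-style `**bold**` bullets:

```python
import re

# A subset of the consensus list above; extend with your own bans.
BANNED = ["in fact", "moreover", "furthermore", "delve", "seamless",
          "robust", "leverage", "navigate", "at the end of the day",
          "it's worth noting"]
# "- **Bold term**: explanation" bullets, the most recognizable pattern
BOLD_LIST = re.compile(r"^\s*[-*]\s*\*\*[^*]+\*\*\s*:", re.MULTILINE)

def lint_draft(text):
    """Return the banned patterns found in a draft."""
    lower = text.lower()
    hits = [w for w in BANNED if w in lower]
    if BOLD_LIST.search(text):
        hits.append("bold-term-colon list format")
    return hits
```

An empty result is necessary but not sufficient: the list catches vocabulary, not absent lived experience.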
4. Lived-experience anchors (the under-prompted technique)
This is the most powerful and least-discussed technique. The instruction:
“Every paragraph must include at least one specific concrete anchor — a named person, dated moment, place, number, or sensory detail. If you can’t make it specific, cut the paragraph.”
This single rule moves AI prose toward human voice faster than any banned-word list because it addresses the underlying cause (absent lived experience) rather than surface symptoms (vocabulary). The mechanism: forced specificity prevents the model from defaulting to abstractions, and abstraction-density is the single biggest fingerprint stylometric detectors lock onto.
There’s a forcing-function side-effect too: writers who require concrete anchors get exposed when they’re asking AI to generate content they don’t actually know. If you can’t provide a specific named example, the post probably shouldn’t exist — and audience-mode review would have caught that anyway. The anchor requirement surfaces it at the drafting stage instead.
Examples of the anchor pattern in practice:
| Abstract (AI-default) | Concrete (anchor-driven) |
|---|---|
| “Many companies struggle with attribution” | “A $20M DTC brand I worked with last quarter spent 6 weeks rebuilding their attribution after iOS 14.5 broke their Facebook data” |
| “AI is transforming customer service” | “Intercom Fin reaches 86% resolution rate on inbound support tickets at production scale” |
| “Cold email is harder in 2026” | “AI-written emails get 71% inbox placement vs 86% for hand-written — that 15-point gap compounds across a 5-touch sequence” |
The anchor pattern is what makes case-study content read as human even when AI-drafted. The anchors are real; the model is just composing around them.
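The anchor rule can be roughly self-enforced before the human edit pass. A heuristic sketch: it treats any digit or a likely mid-sentence proper noun as an anchor, which is a crude proxy for the table's abstract/concrete distinction, not a stylometric detector:

```python
def has_anchor(paragraph):
    """Rough check that a paragraph contains a concrete anchor.

    Passes on any digit (dates, dollar amounts, percentages) or a
    capitalized word that does not start a sentence (a likely proper
    noun, e.g. 'Intercom Fin'). Self-review heuristic only; acronyms
    mid-sentence produce false positives.
    """
    if any(ch.isdigit() for ch in paragraph):
        return True
    words = paragraph.split()
    for prev, word in zip(words, words[1:]):
        if (word[:1].isupper() and word != "I"
                and not prev.endswith((".", "!", "?"))):
            return True
    return False

def flag_missing_anchors(text):
    """Return the paragraphs the anchor rule says to rewrite or cut."""
    return [p for p in text.split("\n\n") if p.strip() and not has_anchor(p)]
```

A flagged paragraph is exactly the forcing-function moment described above: either supply the specific example or cut the paragraph.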
5. Voice-from-audio (the dictation route)
A different path entirely. Instead of teaching the AI to imitate your written voice, speak the draft and have AI transcribe. SuperWhisper, Wispr Flow, and Apple Dictation in 2026 all support this workflow:
- You speak the draft (or speak bullet points and key claims)
- AI transcribes with grammar/punctuation correction
- You optionally pass the transcript through a light editing prompt to clean filler (“um”, “like”, etc.) without changing voice
The voice is yours by construction. There’s no imitation step. The cadence is your spoken cadence; the word choice is your active vocabulary; the rhythm is your natural rhythm.
Karpathy specifically mentioned SuperWhisper in the originating “vibe coding” tweet — he uses it because typing is the bottleneck, not the AI. The same logic applies to writing: typing is the bottleneck on voice production. Speaking unlocks vocabulary and pacing that don’t surface when you’re typing.
When this works best:
- Social posts where conversational voice is the goal anyway (LinkedIn, X, TikTok scripts)
- First drafts of cold-email openers
- Quick replies that need to sound like you
- Long-form drafting where the dictation produces 2,000 words you then edit
When this works less well:
- Highly structured content (proposals, decks, technical specs)
- Content where the writer needs to do real thinking on the page rather than transcribing existing thinking
- Channels where the standard voice is not conversational (academic, legal, compliance)
The hybrid: dictate the core claims and the voice-bearing paragraphs; let AI structure the rest around them.
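The filler-cleanup step need not be an LLM pass at all; a plain script preserves the spoken cadence exactly. A sketch with an illustrative filler list:

```python
import re

# Illustrative filler list; tune it to your own speech patterns.
FILLER = re.compile(r"\b(um+|uh+|you know|i mean|sort of|kind of)\b,?\s*",
                    re.IGNORECASE)

def clean_transcript(text):
    """Strip spoken filler from a dictation transcript.

    Removes fillers without reordering words or sentences, so the
    spoken cadence survives. Review the output; phrases like
    'sort of' are occasionally load-bearing.
    """
    cleaned = FILLER.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()
```

Because nothing is rephrased, the result stays your voice by construction, which is the whole point of the dictation route.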
6. Audience-mode review (the post-generation discipline)
Documented in detail in marketing/ai-tells-in-sales-copy as the audience-mode review beat. Same discipline, same mechanics. The summary: after drafting, re-read the post as the concrete target audience would read it. Two modes catch different things — writer-mode catches argument flaws; audience-mode catches tells, em-dash overuse, rhythm overdensity, and absent specifics.
For social and outreach specifically, the activation prompt is channel-specific:
- “Read this as a founder scrolling LinkedIn between two meetings” — for LinkedIn
- “Read this as a CMO reading their cold-email inbox at 7am before standup” — for outreach
- “Read this as a TikTok viewer in the first 3 seconds, thumb hovering over the swipe gesture” — for short-form video scripts
The named-audience specificity matters. “A potential customer” doesn’t activate the right heuristics. “A founder between meetings” does.
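The activation prompts can be templated so the persona is never left generic. A sketch; the wrapper wording is an illustrative template, not a canonical prompt:

```python
def audience_mode_prompt(draft, persona, channel):
    """Wrap a draft in a channel-specific audience-mode review prompt.

    The personas mirror the examples above; the review checklist in
    the wrapper is illustrative.
    """
    return (f"Read the {channel} draft below as {persona}. Flag AI tells, "
            "em dashes, uniform rhythm, and any paragraph without a "
            "specific named anchor.\n\n---\n" + draft.strip())
```

Keeping personas in a named dictionary per channel prevents the drift back to “a potential customer.”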
The 80–95 / 5–20 hybrid ratio (the empirical pattern)
The single most consistent finding across the 2026 case-study literature: every successful implementation at scale uses AI for 80–95% of content generation and humans for 5–20%. Zero documented case studies of 100% AI-unreviewed content working at scale.
Adore Me (DTC lingerie, internal voice: “spirited wing woman” — light, fun, Gen Z–tuned): AI cut stylist note-writing time by 36% and reduced product description generation from 20 hours to 20 minutes per batch. Brand voice maintained through hybrid review. (Source: Atom Writer brand voice case studies.)
Unilever (multi-brand, multi-market): Built a multi-layered AI content stack — “Alex” (GPT API, email drafting) and “Homer” (proprietary neural network, Amazon product listings). Each branded layer is domain-specific; human-edit step is preserved at every layer.
The structural reason the 80/20 pattern works: the human edit step intentionally produces mixed-signal text — neither pure AI (detectable) nor pure human (slow). The mix is what stylometric detectors can’t lock onto. Per the academic literature, detection accuracy drops significantly when authors change tone, genre, or use AI assistance. Hybrid is the production-scale exploit of that limitation.
Practitioner heuristic: AI drafts; human edits the openers, the anchors, and the closes. The middle of long-form content is the safest territory for AI to handle without edit. Openers (first 1–3 sentences) and closes (last 1–2 sentences) carry the most voice weight and the most distribution weight — those are where the human edit pass concentrates.
Platform-specific applications
LinkedIn (under 360Brew)
What works:
- Saves > likes in the ranking signal. Engineer for savability: frameworks, checklists, named patterns, concrete process descriptions. A 50-save post outperforms a 500-like post for algorithmic push.
- Quality comments > volume comments. 360Brew weights substantive replies more than reactions. Posts that spark thoughtful disagreement get pushed harder than posts that get “great post 🔥” comments.
- Topic consistency. Pick 2–3 core professional topics and stay in them across 4–8 weeks minimum. 360Brew connects content to profile context; topic-scattered profiles get less push than topic-consistent ones.
- Specific numbers in the first 1–2 sentences. Concrete anchors (per technique #4) activate the “this is real” signal before the algorithm or the human reader makes a stop/continue decision.
What demotes:
- Generic openers (“In today’s fast-paced world…”, “I want to share a quick story…”)
- Uniform paragraph length (telltale AI structure)
- “Bold term: explanation sentence” lists (the single most demoted format)
- “Engagement bait” formats (laundry-list-of-tips, “what would you add?”, “agree?”)
- Em-dash density above ~1 per post
The 4–5 hour weekly minimum: real LinkedIn growth in 2026 requires sustained time investment. Half-effort AI-driven posting at scale loses to focused, anchor-driven, 3-posts-per-week from a writer with topic consistency.
X / Twitter (under Grok ranking)
What works:
- Sharp specific opinion — Grok-powered ranking reads sentiment; opinion-led posts outperform neutral information posts in engagement weight
- One concrete number or named entity in the post (the anchor pattern in 280 characters)
- Concision — text-only posts beat video by ~30%, and shorter text outperforms longer in pure reach
- Threads where each post adds something specific — the algorithm rewards thread depth when each post is substantive
- No em dashes in tweets (visible immediately as AI marker in such short form)
- No external links in the main tweet — 30–50% reach penalty; put links in replies if needed
What demotes:
- Generic transitions in threads
- Repeated structural patterns across posts (parallel construction across tweets in a thread)
- “Engagement bait” — likes don’t move the needle anymore; bookmarks and replies do
TikTok and Instagram (script + caption)
The exemption is huge: AI-assisted script writing, hook generation, and caption writing are explicitly exempt from TikTok’s 2026 AI-labeling requirements. Only synthetic visuals and synthetic voices require labels. This means written-side AI use is unconstrained — but engagement is still gated by quality.
What works:
- The hook in the first 3 seconds. Completion rate above 65% is the threshold where high-quality AI content performs within 5–8% of human-equivalent reach.
- The script must read aloud naturally. Read every draft aloud before recording. Sentences that scan on the page but stumble in speech die in the first 3 seconds.
- Specific anchored claims — same anchor pattern as written content, voiced. “Last Tuesday I watched a client lose $40K in 6 hours” outperforms “clients lose money fast.”
- Native TikTok cadence — short sentences, conversational pace, dropped articles. Few-shot examples from actually-performing TikToks in your niche.
What demotes:
- Low-quality AI content underperforms human-equivalent by 30–45% on reach
- AI-generated scripts that don’t read aloud naturally — the cadence is wrong, the hook lands flat
- Generic visuals paired with AI-generated voiceovers (these do trigger the C2PA label)
Cold email outreach
The deliverability split is the single most actionable finding. Strategies:
For the opener:
- Lead with a verified intent signal — a recent, specific, named action by the recipient (a post they shared, a hiring move, a product launch). This is the anchor at the email scale. Generic “hi, I noticed you work in marketing” triggers spam filters and reader rejection simultaneously.
- Keep the body to 50–80 words, max. AI drafts run long; tightening on edit is the universal fix.
- One specific ask. Not “let me know if you’d be open to chatting” — “would 15 min Tuesday or Wednesday work to walk you through how Brand X cut their CAC 22% in 60 days?”
- A signature that sounds like a person — first name only, no stacked titles, no email-template separators
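The 50–80-word rule and the one-ask rule are both checkable before send. A sketch; counting question marks is a crude proxy for “one specific ask”:

```python
def check_email_body(body, min_words=50, max_words=80):
    """Check a cold-email body against the length and single-ask rules.

    Word count covers the body only (no subject, no signature).
    """
    words = len(body.split())
    return {"words": words,
            "length_ok": min_words <= words <= max_words,
            "single_ask": body.count("?") == 1}
```

A failing `length_ok` on an AI draft almost always means cut, not pad.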
For the deliverability problem:
- AI emails get spam-flagged at 8% vs 3% human. Inbox placement: 71% vs 86%. The fix is at the infrastructure layer (domain warm-up, IP rotation, sender-reputation management) as well as the content layer.
- Verified intent signals push AI campaigns to 8% reply rate (vs. the 4.1% AI baseline). Hybrid AI+human-edit reaches 4.4%+; full personalization with verification can hit 8–20%.
For the follow-up:
- AI averages 1.6 exchanges; human averages 3.7. The follow-up is where AI dies, not the opener.
- The fix: hand-write replies even when the opener was AI-drafted. The opener is the part that scales; the reply is the part that converts. Don’t AI the conversion step.
The two-principle frame applies here too
Per marketing/ai-tells-in-sales-copy, client-facing copy converges on two principles. They apply identically here:
- Don’t sound like AI. The six techniques above plus the eleven-pattern audit catalog from the sister page operationalize this.
- Model the reader’s motivation before drafting. A LinkedIn post for marketing operators is structurally different from one for founders, even on the same topic. A cold email to an iGaming operator (reach scarcity is their pain) is structurally different from one to a DTC CMO (paid CAC fragility is their pain) — even when both are about “marketing ROI.” (The iGaming reach-scarcity reframe is documented in marketing/ai-tells-in-sales-copy as the canonical example.)
No amount of voice polish recovers from an argument-level mismatch with the audience’s structural reality. Reader-motivation modeling is the upstream discipline; voice-techniques are the downstream tactics.
Common misconceptions
- ❌ Myth: “If I just edit the AI output enough, it stops sounding like AI.” ✅ Reality: Heavy editing of bad AI output produces neither AI text nor your voice — it produces stilted hybrid prose. Better: spec the voice up front (techniques 1 + 2) and edit lightly, OR use voice-from-audio (technique 5) and edit for structure.
- ❌ Myth: “AI-detection tools are unreliable, so detection doesn’t matter.” ✅ Reality: The detection layer that matters in 2026 isn’t third-party AI-detector SaaS. It’s the platforms themselves (LinkedIn 360Brew, X Grok ranking, Gmail spam filters). These run continuously, are tuned to their own data, and act on detection without notifying the publisher.
- ❌ Myth: “If I write the prompt carefully enough, I won’t need to edit.” ✅ Reality: Zero documented case studies of 100% AI-unreviewed content working at scale. The 80–95 / 5–20 hybrid is universal. Plan for the edit step; design the workflow around it.
- ❌ Myth: “I need to ban em dashes specifically — that’s the AI tell.” ✅ Reality: Em-dash overuse is a tell. The deeper tell is absent lived experience — abstraction density, not punctuation. Em-dash bans help; specific named anchors help much more.
- ❌ Myth: “Few-shot examples are for technical prompting, not creative writing.” ✅ Reality: Brand voice is exactly where 2–3 high-quality examples beat hours of description-writing. Examples carry vocabulary, cadence, structure, and channel conventions simultaneously. Descriptions carry only abstractions.
Honest limits
- The techniques work in combination, not in isolation. A voice-profile document without lived-experience anchors produces consistent but abstract output. Anchors without a voice profile produce specific but voice-inconsistent output. The 80/20 hybrid requires all techniques applied; partial implementations get partial results.
- Model drift over time. Banned-word lists from 2024 are partially obsolete — models have updated their defaults. The 2026 consensus list above will need maintenance every 6–12 months.
- Platform algorithms are moving targets. LinkedIn 360Brew was deployed March 12, 2026. X’s ranking changes with each Grok update. TikTok’s labeling rules expanded twice in 2025–2026. The tactics in this page are calibrated to mid-2026; expect them to shift.
- Voice authenticity doesn’t substitute for substance. Perfect human voice on bad arguments still loses. The two-principle frame (sound like a human + model the audience’s structural pain) requires both; either alone fails.
- The 80/20 ratio is empirical, not prescriptive. Some channels and some writers tolerate more AI; some less. The principle is non-zero human edit at scale, not the specific 80/20 split.
Related
- marketing/ai-tells-in-sales-copy — Sister page. That one is the editing/audit side (11-pattern catalog + two-principle frame + CMO-believability score); this page is the prompting/generation side (six techniques + platform-specific tactics)
- marketing/brand-voice-skills-guide — The LLM-instruction implementation side (SKILL.md architecture, 5-Example Method); this page is the techniques-and-empirics layer that sits underneath
- glossary/honest-assessment — Positive trust signal counterpart at the content-trust layer
- glossary/recognition-primed-decision — Klein-Kahneman explains why operators and platform algorithms detect AI patterns faster than conscious analysis (high-validity environment + rapid feedback = reliable pattern-matching)
- glossary/vibe-coding — Karpathy’s term for the same posture-vs-stakes problem in software. The voice-from-audio technique (#5) is the writing-side analog of vibe coding’s “talk to the AI” workflow
- marketing/social-commerce-psychology — Why specific anchors trigger trust at the cognitive level
- marketing/discovery-before-scale — Validation-before-volume discipline at the channel-pattern layer; this page is the same shape at the voice-prompting layer
- strategist-pattern — The wiki-as-substrate pattern uses the voice-profile-document approach (technique 1) at the strategist-discipline layer
Key Takeaways
- AI-detection is a distribution constraint in 2026, not a stylistic concern. Every major platform algorithmically demotes AI-pattern content; cold-email deliverability has bifurcated (71% AI vs 86% human inbox placement). Voice = reach.
- The 80–95 / 5–20 hybrid ratio is empirically universal. Zero documented case studies of 100% AI-unreviewed content working at scale. Plan the edit step into the workflow.
- Six techniques produce human voice when applied together: voice-profile document grounded in what you reject, 2–5 few-shot examples (quality over quantity), banned-word list, lived-experience anchors (most powerful, least-prompted), voice-from-audio dictation (voice is yours by construction), audience-mode review.
- The single most leverageable technique is lived-experience anchors — every paragraph requires a specific named anchor or it gets cut. This addresses the underlying detection mechanism (absent lived experience) rather than surface symptoms (vocabulary).
- The single most demoted AI pattern is the “Bold term: explanation sentence” bullet list. If you only ban one structure, ban this.
- Em dashes max 1 per response. GPT-4.1 produces them at 3.28× the human rate.
- Platform-specific tactics matter: LinkedIn rewards saves over likes (50 saves beats 500 likes), X rewards replies and bookmarks (likes are vanity), TikTok exempts AI-script writing from labeling but penalizes low-quality AI scripts 30–45% on reach, cold email needs hand-written follow-ups (AI averages 1.6 exchanges vs 3.7 for human).
- The two-principle frame from marketing/ai-tells-in-sales-copy applies identically here: don’t sound like AI + model the reader’s motivation before drafting. Voice polish doesn’t recover from argument-level mismatch.
Sources
Detection science & stylometry:
- Stylometric comparisons of human versus AI-generated creative writing (Nature 2025)
- Feature-Based Detection of AI-Generated Text
- StyloAI: Distinguishing AI-Generated Content with Stylometric Analysis
- Stylometry recognizes human and LLM-generated texts in short samples (ScienceDirect)
- AI Detection in 2026 (UndetectedGPT)
- Why Did LLMs Steal Our Em-Dashes? (McGill OSS)
- The Em-Dash Myth: What Actually Gives Away AI Writing (Duey.ai)
Empirical performance studies:
- AI vs. Human Email Writing: 10,000 Emails Study (Prospectory)
- Cold Email Benchmark Report 2026 (Instantly)
- AI SDR Real Performance: 100K Email Analysis 2026 (Digital Applied)
- Brand Voice AI Case Studies: Adore Me, Unilever, others (Atom Writer)
Platform-specific (LinkedIn / X / TikTok):
- Optimize LinkedIn 2026 Algorithm — 360Brew (upGrowth)
- What Is LinkedIn 360Brew? Strategy, Signals, and What’s Confirmed (Essey Marketing)
- LinkedIn 360Brew and the New Physics of Visibility
- How the Twitter/X Algorithm Works in 2026 (Source Code)
- X (Twitter) AI Content Alerts: Pre-Share Warning Explained (2026)
- TikTok AI Generated Content Policy and Labeling Requirements in 2026
Prompting techniques & practitioner sources:
- Few-Shot Prompting: Everything You Need to Know in 2026 (mem0)
- Few Shot Prompting Guide (PromptHub)
- How to Make Claude Sound Like You: Complete Guide for 2026 (My Writing Twin)
- How to Stop Claude Writing Like an AI (Will Francis)
- The Claude Skills That Finally Made AI Write Like Me
- Ruben Hassid — “I am just a text file”
- Ruben Hassid — Taste Interviewer prompt
- Jodie Cook — How to write with ChatGPT without it sounding like ChatGPT
- How to Stop AI Em Dashes (Briskly)
- avoid-ai-writing skill (Conor Bronsdon, GitHub)
- SuperWhisper — voice dictation for macOS, Windows, iOS
- Karpathy’s originating vibe-coding tweet (mentions SuperWhisper)
Wiki cluster connections:
- The two-principle frame is documented in marketing/ai-tells-in-sales-copy and applies identically to social and outreach
- The voice-profile document approach in technique #1 is the writing-side analog of the strategist-pattern’s CLAUDE.md-as-substrate approach (strategist-pattern)
- The voice-from-audio technique (#5) connects to glossary/vibe-coding — Karpathy’s same “talk to the AI” workflow shape applied to writing rather than software