
Experiments — Testing What Actually Works


TL;DR: We test things before recommending them. This section documents our experiments — what we tried, what worked, what didn’t, and what we learned. No theory without practice.

Why We Experiment

AI moves fast. What worked six months ago might be obsolete. What sounds good in theory might fail in practice. We run experiments to:

  1. Validate approaches before recommending them to clients
  2. Generate real metrics — not hypothetical projections
  3. Document failures — knowing what doesn’t work is valuable
  4. Build case studies — experiments that work become client offerings

Our Method

Every experiment follows this structure:

| Phase | What We Do |
| --- | --- |
| Hypothesis | Clear statement of what we’re testing |
| Setup | Tools, data, and constraints documented |
| Execution | Actually run the test |
| Results | Metrics, outputs, observations |
| Analysis | What worked, what didn’t, and why |
| Verdict | Recommendation: adopt, adapt, or abandon |

Cross-Cutting Patterns

Patterns that emerged across multiple experiments:

Context > Tools

Every experiment confirms: AI tools without your business context produce generic results. The differentiator is always the data you feed it, not the tool itself.

Honesty Signals

Counter-intuitive finding from SEO/GEO experiments: content that admits weaknesses gets cited more by AI engines. The glossary/honest-assessment pattern emerged from testing, not theory.

Human-in-the-Loop Required

No experiment produced “set-and-forget” results. AI generates drafts; humans verify and polish. The speedup is real (typically 5-6x), but reducing human time to zero is not realistic.

Native > Translated

For non-English markets: generating directly in the target language beats translating. AI can match local idioms when prompted correctly.
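A minimal sketch of the difference between the two approaches, using hypothetical prompt templates (no specific model or API is assumed):

```python
def translate_prompt(english_draft: str, language: str) -> str:
    # Two-step approach: write in English, then translate.
    # Tends to carry English phrasing into the target language.
    return f"Translate the following article into {language}:\n\n{english_draft}"

def native_prompt(brief: str, language: str) -> str:
    # Single-step approach: generate directly in the target language,
    # with an explicit instruction to use local idioms.
    return (
        f"Write the article in {language}, for a local audience. "
        f"Use natural, idiomatic {language} rather than translated phrasing.\n\n"
        f"Brief: {brief}"
    )

prompt = native_prompt("Compare delivery options for Lithuanian shoppers", "Lithuanian")
```

The key design choice is that the native prompt never shows the model an English draft to anchor on, so there is no source phrasing to leak through.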

Active Experiments

| Experiment | Status | Key Finding |
| --- | --- | --- |
| experiments/seo-geo-content-ecommerce | 🌿 Complete | AI articles are publish-ready with 15-30 min review |
| experiments/ad-alchemy-competitor-piggyback | 🌿 Complete | Competitor creative analysis accelerates ad development |
| experiments/ai-visibility-ecommerce | 🌱 In progress | Lithuanian e-commerce AI citation rates vary widely |

Experiments → Case Studies

When an experiment proves successful and we implement it for a client, it becomes a case study.

Propose an Experiment

Have something you want us to test? We’re always looking for:

  • New AI tools to evaluate
  • Workflow automation hypotheses
  • Content generation approaches
  • Competitor intelligence methods

Contact us: primores.org/contact