Vibe Coding — Building Software by Describing What You Want
TL;DR: Vibe coding is a style of software development where you describe what you want to an AI (Cursor, Claude Code, Cline, etc.) and accept whatever code it produces — without deeply reading or understanding it. The term was coined by Andrej Karpathy in February 2025. It works well for prototypes, personal tools, and learning. It works badly for production systems, security-sensitive code, and anything you’ll need to maintain at scale.
Origin
The phrase comes from a tweet by Andrej Karpathy (February 2, 2025):
“There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard.”
The framing caught on quickly. By mid-2025 it was a recognized industry term, with both serious adoption (especially among non-engineers building real software) and serious pushback (from engineers concerned about the long-term consequences of code nobody understands).
Karpathy’s May 2026 update: the floor-vs-ceiling distinction
At Sequoia Capital’s AI Ascent in May 2026, Karpathy returned to the term with a substantial elaboration. The talk — From Vibe Coding to Agentic Engineering — clarified what vibe coding is and (importantly) what it isn’t:
“Vibe coding raises the floor. Agentic engineering raises the ceiling.”
The reframing matters because the 2025 discourse had collapsed two distinct disciplines into one phrase. Karpathy’s update separates them:
- Vibe coding is the accessibility story. The entry barrier to shipping working software has dropped to “can you articulate what you want clearly.” This is the part most non-engineers experience first — and it’s real, and it’s already changing who builds software.
- Agent engineering is the production discipline story. It’s the operational skill of coordinating multiple agents reliably and safely, bounding autonomy, designing for unpredictability, and verifying outputs the agent itself cannot reliably verify. See glossary/agent-engineering for the full treatment.
Vibe coding compresses the path from “I want X” to “I have a working prototype of X.” Agent engineering is what’s required when that prototype needs to handle ten parallel agent threads processing real client data with zero permitted errors. Both are Software 3.0 disciplines (LLM-as-runtime); they sit at different points on the floor-to-ceiling axis.
Karpathy’s Software 1.0/2.0/3.0 context
The talk situated vibe coding inside a three-era framing of software:
| Era | What’s programmed | Vibe coding’s role |
|---|---|---|
| Software 1.0 | Explicit code, written line by line | Doesn’t apply |
| Software 2.0 | Neural network weights, learned from data | Doesn’t apply |
| Software 3.0 | LLM behavior, programmed through prompts and context | Vibe coding is the accessibility-side instance of 3.0 |
Karpathy’s compact framing: “LLM became the computer, and prompt became the program.” In this view, vibe coding is what happens when “anyone who can articulate intent” becomes “anyone who can write a program” — because the program is a natural-language description.
The “you can outsource your thinking, but you can’t outsource your understanding” constraint
Karpathy’s other load-bearing quote from the talk frames the long-tail constraint vibe coding never escapes:
“You can outsource your thinking, but you can’t outsource your understanding.”
Thinking is generating possibilities — drafting code, exploring approaches, proposing fixes. The LLM can do that. Understanding is recognizing which possibilities matter and why — which is what makes “code that works on the happy path but leaks secrets in edge cases” different from “production-ready code.” Vibe coding is fine until the operator’s understanding is the constraint that fails. Then it isn’t.
This is the same constraint the wiki’s glossary/jagged-frontier empirically documents at the consultant level: AI helps inside the operator’s frontier and hurts outside it. For software engineering, the frontier is bounded by what the operator understands well enough to verify.
What it actually is
The defining feature isn’t the AI tool — it’s the posture of the human:
- You describe intent, often in natural language (“build me a dashboard that shows my Stripe revenue grouped by week”).
- The AI generates the code across whatever files and dependencies it judges necessary.
- You accept the output without reading it carefully. If it works, you keep going; if it doesn’t, you describe the problem and let the AI fix it.
- You stay at the level of the goal, not the implementation. The code is a means to an end you don’t need to understand.
This is a fundamentally different posture from earlier AI coding workflows like autocomplete (Copilot’s first generation), which still required the developer to read each suggestion and decide whether to accept it. Vibe coding pushes the human further from the code itself.
Why business professionals should care
For non-engineers, vibe coding is the most consequential shift in software accessibility since the spreadsheet. A marketer who can describe “build me a tool that scrapes my competitor’s pricing page weekly and emails me a summary” can now have that tool inside an afternoon, without learning Python.
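That pricing-page brief is the sort of thing a vibe coder hands to an agent whole, without ever reading what comes back. As a rough sketch of the kind of artifact that comes back (every name and the regex here are illustrative assumptions, not code from any actual tool; fetching the page and emailing the summary are left out, and a real run would add an HTTP client, SMTP, and a weekly cron entry):

```python
# Hypothetical sketch of a vibe-coded pricing summarizer: pull dollar
# amounts out of raw pricing-page HTML and format a one-line summary.
import re

def extract_prices(html: str) -> list[float]:
    """Find dollar amounts like $49 or $1,299.00 in raw HTML."""
    return [float(m.replace(",", ""))
            for m in re.findall(r"\$([\d,]+(?:\.\d{2})?)", html)]

def summarize(html: str) -> str:
    prices = extract_prices(html)
    if not prices:
        return "No prices found this week."
    return (f"{len(prices)} plans, low ${min(prices):,.2f}, "
            f"high ${max(prices):,.2f}")

# Inline HTML stands in for the fetched page:
page = '<div>Basic $49</div><div>Pro $1,299.00</div>'
print(summarize(page))  # -> 2 plans, low $49.00, high $1,299.00
```

The point is what the operator does next: nothing. The code works on the demo page, so it ships unread; the regex quietly mis-reads, for example, locale-formatted prices, which is exactly the unread-edge-case risk the rest of this article turns on.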
The category of people who can build working software has expanded from “people who can program” to “people who can articulate what they want clearly enough.” That second category is much larger, and it includes most marketing operators, analysts, founders, and consultants reading this wiki.
The practical implications:
- Internal tools become trivial to build. Custom dashboards, automation scripts, ad-hoc data tools, browser extensions for personal workflows.
- The minimum viable startup gets smaller. A founder can prototype a product without hiring an engineer or learning to code.
- The cost of trying an idea collapses. The friction between “I wonder if X would work” and “I have a working prototype of X” used to be weeks; vibe coding makes it hours.
Where it works, where it breaks
The honest assessment splits cleanly:
Vibe coding works well for:
- Personal tools and one-offs — scripts, dashboards, scrapers, browser extensions. If only you use it and it’s easy to throw away, the costs of unread code don’t compound.
- Prototypes — proving an idea works before investing in a proper build. The vibe-coded prototype gets discarded or re-implemented properly later.
- Learning — using the AI’s code as a worked example of how to do something, then reading and understanding it after the fact.
- Internal tools with limited blast radius — marketing automations, data-pipeline glue, internal dashboards. The downside of a bug is usually inconvenience, not catastrophe.
Vibe coding breaks down for:
- Production software with users — code you don’t understand has bugs you can’t diagnose. When the system breaks at 2am, you have no mental model to debug from.
- Security-sensitive code — payment handling, authentication, data handling. AI-generated code can introduce vulnerabilities (SQL injection, exposed secrets, broken auth flows) that look fine to someone who isn’t reading carefully.
- Code you’ll maintain for years — code debt compounds. A vibe-coded codebase at month 1 is a different beast at month 12, when the AI’s earlier choices have shaped a structure nobody understands.
- Compliance-regulated software — finance, healthcare, anything with audit requirements. “I don’t know exactly what this code does” is not an acceptable answer to a regulator.
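The security bullet above can be made concrete. A minimal sketch, assuming a hypothetical SQLite users table: both lookups below return the right rows for normal names, which is exactly why the unsafe one survives a happy-path demo and ships.

```python
# Illustrative only: why "it works" is a weak claim. Table and column
# names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def find_user_unsafe(name: str):
    # String interpolation: looks fine to someone who isn't reading
    # carefully, and works for every name tried in the demo.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: input is treated as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("alice"))         # [('alice',)] -- works
print(find_user_unsafe("x' OR '1'='1"))  # leaks every row in the table
print(find_user_safe("x' OR '1'='1"))    # [] -- the injection is inert
```

Nothing about the unsafe version fails until an attacker supplies the input, which is why unread AI-generated code in auth or payment paths is the canonical place vibe coding breaks down.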
The honest risks
Three failure modes worth naming explicitly:
- Confidence asymmetry. AI-generated code looks confident even when it’s wrong. A vibe-coded function can have subtle bugs the AI doesn’t notice, the user doesn’t read, and tests don’t cover. The bug ships.
- Compounding architectural mess. Each vibe-coded change is locally reasonable; the cumulative architecture is often a tangled web. Debugging time grows non-linearly with codebase size for code nobody understands.
- Skill atrophy and dependency. A non-engineer using vibe coding to build their first tool is gaining capability they didn’t have. An engineer using vibe coding to skip understanding is losing capability they used to have. Both are real; the second is the more concerning one for long-term software quality.
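Confidence asymmetry is easy to show in miniature. A hypothetical helper (not from any real codebase) that is correct on the demo input and wrong at an edge nobody thought to describe:

```python
# A plausible-looking vibe-coded helper: correct for typical inputs,
# untested at the edge case the brief never mentioned.
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100  # blows up when old == 0

print(pct_change(100, 150))  # -> 50.0, looks right, ships
# pct_change(0, 150) raises ZeroDivisionError in production, not in the demo
```

The function reads as confidently as correct code does; only an operator who understands the math asks what happens at zero.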
The defensible position isn’t “never vibe code” — that loses the genuine accessibility win. It’s “match the posture to the stakes.” Personal scraper? Vibe code freely. Customer-facing payment flow? Read every line, write tests, and treat AI output as a draft for review.
Tools that enable it
The 2026 vibe-coding stack is mostly:
- Claude Code (Anthropic) — agentic CLI that can read files, run commands, and edit a whole codebase from a high-level brief.
- Cursor — IDE forked from VS Code with deep AI integration, including Composer for multi-file edits and agent mode.
- Cline / Continue / Aider — open-source agentic coding tools, varying in how aggressively they push toward the full vibe-coding posture.
- Replit Agent / Bolt / V0 — platform-level vibe-coding environments aimed at non-developers; the AI builds and deploys the app end-to-end.
The trajectory is clear: tools are converging on the high-agency posture, where the human briefs and reviews rather than reads and edits.
Common misconceptions
- ❌ Myth: “Vibe coding is just lazy programming.” ✅ Reality: It’s a different skill — articulating intent precisely is hard, and judging AI output for correctness without reading it requires its own discipline. The lazy version produces broken software; the skilled version produces working software the operator doesn’t deeply understand.
- ❌ Myth: “Vibe coding will replace engineers.” ✅ Reality: It expands who can build software, but maintaining production-grade systems still requires people who understand the code. The work shifts (more review, more architecture, less typing) rather than disappears.
- ❌ Myth: “If the code works, the vibe-coded result is fine.” ✅ Reality: Code that works on the happy path can fail on edge cases, leak secrets, scale badly, or carry security holes. “It works” is a much weaker statement than it sounds.
Related concepts
- glossary/agent-engineering — Karpathy’s complement: the ceiling-raising discipline for production AI-agent work. Vibe coding is the floor; agent engineering is the ceiling.
- glossary/jagged-frontier — The empirical anchor (Dell’Acqua n=758) for the operator-understanding constraint. Karpathy’s “jagged intelligence” is the same insight from the model side.
- glossary/creative-is-new-targeting — the same automation-eats-execution shift, in performance marketing rather than software
- comparisons/strategy-vs-execution-ai — the cross-domain pattern (vibe coding is execution-layer automation; software architecture and product strategy stay human-leveraged)
- automation/ai-enablement-levels — vibe coding is “Level 4” autonomy in the maturity model
- tools/claude-cowork — the agentic Claude Code environment
- automation/ai-developer-tools-cases — empirical results from companies using AI coding at scale (Valeo: 35% of code AI-generated, etc.)
- marketing/ai-tells-in-sales-copy — the analogous posture-vs-stakes problem in client-facing writing: copy that works on the happy path can fail on edge cases the writer didn’t audit-read. Same trade-off shape, different artifact
- marketing/ai-human-voice-prompting — the writing-side voice-from-audio technique (SuperWhisper dictation as input) is the direct analog of Karpathy’s vibe-coding workflow at the prose layer. Talk to the AI; let the AI transcribe and structure. Voice is yours by construction
Key takeaways
- Vibe coding is AI-assisted software development where you describe intent and accept output without deeply reading the code. Coined by Andrej Karpathy, February 2025.
- The shift expands who can build working software — most marketing operators, analysts, and founders are now in that category.
- It works well for prototypes, personal tools, and one-offs. It breaks down for production systems, security-sensitive code, and long-lived codebases.
- Three honest risks: confidence asymmetry (AI looks confident when wrong), compounding architectural mess, and skill atrophy.
- The defensible posture: match the level of human review to the stakes of the code. Vibe-code freely on low-stakes work; read every line on high-stakes work.
Sources
- Andrej Karpathy on X (February 2, 2025) — the originating tweet that coined the term.
- Karpathy, A. (May 2026). From Vibe Coding to Agentic Engineering. Sequoia AI Ascent 2026 talk. YouTube. The May 2026 update where Karpathy explicitly separated vibe coding (floor) from agent engineering (ceiling) and situated both inside the Software 1.0/2.0/3.0 framing.
- Industry adoption since 2025 — Cursor, Claude Code, Replit Agent, V0 are the canonical tools that match the posture.
- Primores observation — the accessibility story is real for non-engineers; the production-risk story is real for engineers. Both are simultaneously true at different layers.