

Some people call it prompting. Some call it vibe coding. Some call it AI pair programming (though these people are thankfully few and far between).
Whatever name you give it, the practice is the same: You write some kind of instruction, and the LLM writes some kind of code.
But here’s the difference between a one-off code snippet and a working product:
Context.
Context engineering is how we guide LLMs to produce consistent, useful, and safe code, on demand, across sessions, in the real world. Just like coding itself, context engineering can be casual, structured, or fully industrialised.
In this issue, we’ll walk through three modes of vibe coding - from scrappy to surgical - so you can choose the right one for your next project.
Please note none of what you read below is an endorsement of any particular platform or vendor, and we’re not paid by any of them for discussing them here.
1. App-Based: Pure Vibe (e.g. Lovable)
Welcome to the golden age of no-setup creativity.
Tools like Lovable and Notion AI are where vibe coding shines. You’re not worrying about file trees, config files, or constraints. You’re just:
Typing what you want
Seeing what comes back
Copy/pasting into wherever you’re building
There’s no structured context, no markdown files, and no pinned prompts. Claude or GPT is responding purely based on recent chat history and conversational tone.
Why it’s great:
Zero friction
Great for non-technical founders, indie hackers, or designers
Amazing for first drafts, user flows, and idea validation
Why it breaks:
No version control for your prompts
Drift over time
No ability to reuse or scale output style
Solution architecture is extremely limited - DON’T SHIP TO PROD
It’s like jamming on a whiteboard—fluid, expressive, and quick—but don’t expect anyone to deploy from it.
Use it when: you’re prototyping solo, chasing ideas fast, or just need to “see something working.”
2. IDE-Based: Structured Vibes (e.g. Cursor, Windsurf)
Now we’re writing real code and keeping it in version control.
Tools like Cursor and Windsurf integrate models such as Claude and GPT directly into your IDE. The context now includes:
The file you’re editing
The surrounding project tree
Any pinned prompts or injected .md instructions
Chat history from your dev sessions
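To make “pinned prompts or injected .md instructions” concrete, here’s a minimal sketch of what such a file might contain. The `.cursorrules` filename follows Cursor’s convention for project rules; every project detail below is an invented placeholder, not a recommendation:

```text
# .cursorrules - pinned instructions the IDE injects into every request
You are assisting on a TypeScript/React codebase.
- Use functional components and hooks; no class components.
- Follow the existing ESLint config; never disable rules inline.
- Prefer small, named helper functions over inline lambdas in JSX.
- When editing a file, preserve its existing import ordering.
```

A few lines like these, checked into the repo, travel with the project rather than living in one developer’s chat history.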
This is where most developers live: coding with LLM assistance, inside real projects.
Why it’s great:
Context-aware output based on file structure
Inline editing, refactoring, and code completion
Good balance between speed and structure
Partial support for reusable prompt patterns (via pinned instructions)
Why it breaks:
Prompt quality still varies
No long-term memory across features
Difficult to reproduce work across different projects or devs
Limited operational reusability
It’s like having a junior engineer sitting beside you: quick, useful, but still occasionally lost.
Use it when: you’re building features, writing tests, or refactoring inside a repo that already has some shape to it.
3. CLI-Based: Professional Context Engineering (e.g. Claude Code)
This is where context engineering stops being a vibe and becomes a discipline.
With Claude Code (or Anthropic Console + Claude 3), you’re explicitly structuring context in .md files that define:
What you’re building
The stack you’re using
Naming conventions
Output formatting
Constraints (e.g. no emojis, use British English, match ESLint rules)
Golden examples of prompts + completions
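As a concrete sketch: Claude Code reads a CLAUDE.md file from the project root. Something like the following covers the categories above - all project details here are invented placeholders:

```markdown
# CLAUDE.md

## What we're building
A REST API for invoice processing (Node 20, Express, PostgreSQL).

## Conventions
- File names: kebab-case; exported symbols: camelCase.
- British English in all comments and docs; no emojis.
- Match the repo's ESLint and Prettier configs exactly.

## Output formatting
- Return complete files, not fragments.
- Every new endpoint ships with a test spec.

## Golden example
Prompt: "Add a GET /invoices/:id endpoint."
Completion: see prompts/examples/get-invoice.md
```

Because it’s just a markdown file, it can be versioned, diffed, and reviewed like any other piece of the codebase.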
Your prompts become reusable building blocks. Your context becomes a source of truth. And the LLM? It becomes a modular assistant you can orchestrate across workflows.
Why it’s great:
High reproducibility and clarity
Ideal for teams, systems, and long-running projects
Enables URPs (Ultra-Rapid Prototypes)
Easy to version, diff, and review in Git
Why it breaks:
Requires setup time
Higher learning curve
Overkill for lightweight tasks or solo experiments
This is software-defined prompting. You’re not just jamming—you’re engineering.
Use it when: you’re building full-stack features, delegating to agents, or shipping production-adjacent prototypes with predictable output.
Bonus: You Can Mix and Match
The best context engineers don’t pick just one mode. They move fluidly between them.
Start with Lovable to sketch the idea
Switch to Cursor to scaffold components
Use Claude Code + context stack to build the backend and deploy
Each mode serves a purpose. Each one meets you where you are.
Caveat: some offerings (e.g. Codex from OpenAI) are available in web-app and IDE-based versions.
Structure the Vibe
The difference between hacking and engineering is intent.
Vibe coding is fun. It’s fast and creative. But when you add structure, when you bring in context, you get reusability, reliability, and real value.
So whether you're scribbling prompts in an app or orchestrating agents in a terminal: You’re not just coding with AI. You’re engineering context.
And that’s how real work gets done.
Want a ready-to-use Claude context stack template to start engineering your own projects? Reply with STARTER-STACK and I’ll send it over.
It’s only a vibe until it’s a workflow.