
5 Context Engineering Commandments

For using LLMs to generate code that works

When you’re using large language models to generate production-grade code, the difference between useful and useless often comes down to context.

This isn’t about magic prompt phrasing or clever hacks. It’s about building a repeatable system. One that teaches the model to behave like a reliable pair programmer — with less guesswork and more control.

Here are the five context engineering principles I use every day when coding with .md-based workflows for LLMs (right now that's Claude Code).

  1. Change inputs, not outputs

  2. You wrote the code, not the machine

  3. Markdown to rise up

  4. Avoid adjectives and ambiguity

  5. Examples are everything

1. Change inputs, not outputs

When the model gets something wrong, the fix isn’t in the code block — it’s in the context that produced it.

  • Don’t edit the output by hand unless you’re creating a better example to feed back into the context.

  • Instead, fix the structure of your prompt file, your examples, or your constraints.

  • Treat each failure as a test case for your .md file — then go back and make it more explicit.
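As a hypothetical illustration of treating a failure as a test case: suppose the model keeps emitting functions that silently swallow errors. The fix isn't in the generated code, it's in tightening the rule that allowed it (both snippets below are invented examples, not rules from any real project):

```markdown
<!-- Before: the vague rule that failed -->
- Handle errors where appropriate.

<!-- After: the failure rewritten as an explicit constraint -->
## Error Handling
- Every function that performs I/O must propagate failures to the caller.
- Never catch an exception without logging it and re-raising.
```

Each failure makes the contract stricter; over time the .md file accumulates the project's hard-won rules the same way a test suite accumulates regressions.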

2. You wrote the code, not the machine

Always own your code. Read and understand it before you push; that way, fewer mistakes reach production. The LLM didn't "get it wrong". It did what it was told.

  • You’re the one who wrote the context, chose the examples, and set the expectations.

  • The model has no agency. If the output is off, it’s on you.

  • This mindset avoids magical thinking and puts you in control of the workflow.

3. Markdown to rise up

You can communicate with your LLM via the command line/web app or via .md files. Prefer Markdown: it can be committed alongside your code and iterated on, whereas command-line context is ephemeral and hard to refine.

The LLM will perform far better if your .md is coherently structured (and other human beings, including future you, will be able to read it far more easily).

If you want consistent results, design your .md files the same way you’d design an API:

  • Use # and ## for structure. Label sections with intent: ## Output Format, ## Naming Rules, ## Tone.

  • Use bullet points for constraints. Claude interprets lists as rules.

  • Keep it clean, readable, and testable. You’re writing for the model and your future self.
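A minimal sketch of such a file, using the section names from the bullets above (the specific rules are illustrative, not prescriptive):

```markdown
# Project Context

## Output Format
- Return a single fenced code block, with no prose before or after it.

## Naming Rules
- Functions: snake_case. Classes: PascalCase.
- No abbreviations in public identifiers.

## Tone
- Comments explain *why*, not *what*.
```

Like an API, each section has one responsibility, and a reader (human or model) can find any rule by its heading.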

4. Avoid adjectives and ambiguity

You need to write your context in the most objective and literal sense possible. Language models don’t “get the idea”. They follow patterns — often too strictly. Adjectives like ‘simple’, ‘robust’, as well as phrases like ‘attempt to’ and ‘aim to’ leave room for hallucination.

  • If a rule matters, say it early and say it clearly.

  • Avoid vague phrasing. “Try to” and “generally” invite inconsistency.

  • Be declarative. Be specific. Use short, command-like statements.
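Here's a hypothetical before/after showing the same intent expressed both ways (the line limits are made-up values you'd pick for your own project):

```markdown
<!-- Vague: adjectives and hedges leave room for interpretation -->
- Try to keep functions fairly simple and robust.

<!-- Declarative: the same intent as enforceable rules -->
- Functions are at most 30 lines.
- Every public function validates its inputs.
- Invalid input raises an error; never return a silent default.
```

The second version can be checked against the output line by line. The first can only be argued about.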

5. Examples are everything

LLMs learn in session — they pattern-match from what you've already shown them. Don't be shy about copy-pasting code into the context.

  • One clear input/output pair teaches more than ten abstract instructions.

  • Format your examples like production code: readable, idiomatic, and correct.

  • Use real-world scenarios from your use case; avoid toy examples unless you're debugging.

  • For longer workflows, use few-shot chaining: multiple sequential examples that show process and reasoning.

  • Put your best examples early — Claude and other models prioritise what appears first in the context window.
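A sketch of what one input/output pair might look like inside the .md file (the task and the function are invented for illustration):

```markdown
## Examples

### Input
"Add a helper that parses an ISO 8601 timestamp, rejecting empty strings."

### Output
    def parse_timestamp(value: str) -> datetime:
        if not value:
            raise ValueError("empty timestamp")
        return datetime.fromisoformat(value)
```

One pair like this communicates naming style, error-handling convention, and type-hint usage in a single shot — rules that would otherwise take several bullets each.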

If context is the contract between you and the LLM, then examples are the clauses that actually get enforced.

Instructions tell; examples teach.

Don’t rely on abstract rules or vague intentions — show the model exactly what success looks like. Whether you’re guiding code generation, response formatting, or decision logic, strong examples anchor the model’s behaviour far more reliably than adjectives or wishful prompting.

Write fewer rules. Give better examples. That’s the craft.
