$ man how-to/context-engineering-vs-prompt-engineering

Comparisons · beginner

Context Engineering vs Prompt Engineering

Why the right information matters more than the right words


The Shift

Prompt engineering optimizes how you ask. Context engineering optimizes what information the model sees when you ask. That is not a subtle difference - it changes the entire approach to building with AI.

In 2023-2024, prompt engineering was the skill. Craft the perfect instruction. Add chain-of-thought reasoning. Use few-shot examples. Phrase your request precisely and the model performs better. This worked when AI interactions were single-turn question-and-answer.

In 2025-2026, the ceiling became obvious. Models got better at following instructions regardless of phrasing. A well-structured prompt and a casual request produce nearly identical output on modern models. The bottleneck moved from instruction quality to information quality. It does not matter how perfectly you phrase "refactor this module" if the model cannot see the module, its dependencies, or the coding standards it should follow.

Context engineering is the response. Instead of optimizing the prompt, you optimize the context window - the total information the model processes before generating a response.
PATTERN

Prompt Engineering: What It Actually Means

Prompt engineering is the craft of writing instructions that produce desired outputs from language models. Core techniques include:

- Role assignment: "You are a senior TypeScript developer." Gives the model a perspective to reason from.
- Chain-of-thought: "Think step by step before answering." Forces the model to show its reasoning, which improves accuracy on complex tasks.
- Few-shot examples: "Here are three examples of the format I want..." Demonstrates the expected output pattern.
- Constraints: "Respond in JSON. Use only the provided data. Do not make assumptions." Bounds the output space.

These techniques still work. They are still useful. But they operate at the instruction layer. They tell the model how to process information. They do not control what information is available to process.
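The four instruction-layer techniques above compose into a single prompt string. Here is a minimal sketch; the function name, example data, and task are illustrative, not from any real codebase:

```typescript
// Illustrative few-shot examples (hypothetical rename pairs)
const examples = [
  { input: "getUserdata", output: "getUserData" },
  { input: "fetch_items", output: "fetchItems" },
];

// Assemble all four prompt-engineering techniques into one instruction
function buildPrompt(task: string): string {
  return [
    "You are a senior TypeScript developer.",        // role assignment
    "Think step by step before answering.",          // chain-of-thought
    "Examples of the expected format:",              // few-shot examples
    ...examples.map((e) => `${e.input} -> ${e.output}`),
    "Respond in JSON. Use only the provided data.",  // constraints
    `Task: ${task}`,
  ].join("\n");
}

const prompt = buildPrompt("rename identifiers to camelCase");
```

Note that everything here shapes how the model processes information; nothing controls what information it has.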
PATTERN

Context Engineering: What It Actually Means

Context engineering is the practice of controlling what information the model sees, when it sees it, and how it is structured. Core techniques include:

- Dynamic context selection: Loading only the files, documents, and data relevant to the current task. Not everything - the right things. A refactoring task needs the target file, its imports, its tests, and the coding standards. Loading the entire repo adds noise that degrades performance.
- Memory architecture: Deciding what the model remembers across sessions. CLAUDE.md files persist project context. Handoff documents carry state between sessions. Knowledge bases provide domain reference. Each serves a different memory function.
- Context compression: Summarizing or structuring information to fit within token limits while preserving what matters. Raw logs are context-expensive. A structured error summary with file, line, and message is context-efficient.
- Tool and retrieval integration: Giving the model access to search, file reading, API calls, and databases so it can pull information on demand instead of needing everything preloaded.
- Schema and structure: Organizing context with clear headings, typed schemas, and predictable formats so the model processes it efficiently. Structured data outperforms unstructured data consistently.
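Two of the techniques above - dynamic context selection and context compression - can be sketched in a few lines. This is a hypothetical illustration, not a real library API: the types, function names, and the log format assumed by the regex are all made up for the example:

```typescript
// A candidate piece of context with a rough token cost
interface ContextItem {
  source: string;
  text: string;
  tokens: number;
}

// Dynamic context selection: keep only relevant items, stop at the budget
function selectContext(
  items: ContextItem[],
  isRelevant: (item: ContextItem) => boolean,
  budget: number,
): ContextItem[] {
  const picked: ContextItem[] = [];
  let used = 0;
  for (const item of items.filter(isRelevant)) {
    if (used + item.tokens > budget) break; // stay within the token limit
    picked.push(item);
    used += item.tokens;
  }
  return picked;
}

// Context compression: raw log line -> structured "file, line, message" summary
function summarizeError(rawLog: string): string {
  const m = rawLog.match(/(\S+\.ts):(\d+).*?error (.+)/);
  return m ? `file=${m[1]} line=${m[2]} msg=${m[3]}` : rawLog;
}
```

The point of the sketch: the model never sees `items`, only the output of `selectContext`, and never sees the raw log, only the compressed summary.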
PRO TIP

Why This Matters for Practitioners

If you are building AI features, automating workflows, or using AI coding tools daily, context engineering is the skill that moves the needle.

A practical example: I run a monorepo with three Next.js sites, 40+ automation skills, and daily cron jobs. Every Claude Code session starts by reading a CLAUDE.md file that contains project structure, coding conventions, deployment steps, and safety rules. It reads the context handoff from the previous session. It loads relevant skills based on the task. This is context engineering. The model sees the right information before it writes a single line of code.

The prompts I type are casual - "add a new how-to wiki entry about MCP servers" - because the context does the heavy lifting. The CLAUDE.md tells it the file format. The existing entries show it the pattern. The skills tell it the workflow.

Compare that to prompt engineering alone: "You are an expert TypeScript developer. Please create a new how-to wiki entry in the HowToWikiEntry format defined in how-to-wiki.ts with id, title, subtitle, category, description, keywords array, difficulty level, canonicalSite, related entries, and sections array with heading, type, and content fields..." You are spending tokens on instruction that context handles automatically.

The takeaway: prompt engineering is a technique. Context engineering is a system. Techniques hit ceilings. Systems compound.
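The fields the long prompt above spells out are exactly what a typed schema encodes once. Here is a sketch of what the HowToWikiEntry shape could look like, reconstructed from the fields the prose lists; the actual definition in how-to-wiki.ts may differ, and the sample values are illustrative:

```typescript
interface WikiSection {
  heading: string;
  type: string;    // e.g. "pattern", "pro tip"
  content: string;
}

interface HowToWikiEntry {
  id: string;
  title: string;
  subtitle: string;
  category: string;
  description: string;
  keywords: string[];
  difficulty: "beginner" | "intermediate" | "advanced"; // assumed levels
  canonicalSite: string;
  related: string[];
  sections: WikiSection[];
}

// Sample entry built from values in this very article (description is made up)
const sampleEntry: HowToWikiEntry = {
  id: "context-engineering-vs-prompt-engineering",
  title: "Context Engineering vs Prompt Engineering",
  subtitle: "Why the right information matters more than the right words",
  category: "Comparisons",
  description: "How the two disciplines differ and why context wins",
  keywords: ["context engineering", "prompt engineering"],
  difficulty: "beginner",
  canonicalSite: "ShawnOS.ai",
  related: ["Repo as Context Engine"],
  sections: [{ heading: "The Shift", type: "pattern", content: "..." }],
};
```

With the schema living in the repo, the casual prompt works: the type definition is context, so the instruction never has to restate it.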

knowledge guide
See "Agent" in Knowledge · See "Context" in Knowledge

related guides
Repo as Context Engine · Constraints and Context Engines · Parallel-Safe Session Handoffs · Rules, Skills, and Context Files