$ man context-wiki/grounding

Foundations · Advanced

Grounding

How to stop AI from guessing and start it executing


What Grounding Means

Grounding is how you stop AI from making things up. When Claude does not have enough context, it fills the gaps with plausible-sounding guesses. Sometimes those guesses are right. Often they are not. Grounding means giving the model enough real, specific information that it does not need to guess. It is the difference between "write me a cold email" (ungrounded, Claude invents your value prop) and "here is my ICP, my persona tiers, my messaging framework, and three examples of emails that got replies, now write the cold email" (grounded, Claude works from real data).
PATTERN

Three Core Practices

Context Sensitivity: Before you give Claude a task, ask yourself how hard it is and how much unique information the model needs. A simple reformatting task needs almost no context. Writing a partner onboarding workflow needs your full ICP, personas, and messaging. Match the context to the complexity.

Assumption Enumeration: Make the model list what it is assuming before it executes. Ask Claude to state its assumptions explicitly. Implicit assumptions become explicit. Explicit assumptions become addressable. You catch the wrong ones before they pollute the output.

Domain Knowledge Query: Give Claude the specific information it needs for the task. Do not make it guess your product positioning. Do not make it infer your ICP. Load the files. Point to the data. The model is only as good as what you put in the context window.
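Assumption Enumeration can be baked into the prompt itself. A minimal sketch, assuming a simple wrapper function; the exact wording and the function name are illustrative, not a fixed API:

```python
# Sketch: wrap any task in an assumption-enumeration preamble so the model
# states its guesses before executing. Wording here is illustrative.

def with_assumption_enumeration(task: str) -> str:
    """Prefix a task with an instruction to list assumptions first."""
    return (
        "Before executing the task below, list every assumption you are "
        "making about audience, voice, format, and missing data. "
        "Flag any assumption you are unsure of before proceeding.\n\n"
        f"Task: {task}"
    )

prompt = with_assumption_enumeration("Write a cold email for our new tier.")
print(prompt)
```

The point is not the wrapper but the ordering: assumptions surface before output, so wrong ones are caught before they pollute the result.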
PRO TIP

How I Ground Claude

I ground Claude by loading my whole repo as context. My CLAUDE.md sets the defaults. My skills define the workflows. My rules files enforce patterns per file type. My voice guide controls tone. My partner folders contain ICP, personas, and messaging for each client. Claude does not guess because everything it needs is already in the context window. When I run /partneronboard, Claude reads the partner's research folder, extracts ICP criteria, builds qualification prompts, and scaffolds the entire workflow. It does not invent criteria. It reads the file. That is grounding in practice.
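The repo layout described above might look something like this. The exact file and folder names are assumptions for illustration; only CLAUDE.md, skills, rules, the voice guide, and partner folders are named in the text:

```
repo/
├── CLAUDE.md          # session defaults
├── skills/            # workflow definitions (e.g. partner onboarding)
├── rules/             # per-file-type patterns
├── voice-guide.md     # tone control
└── partners/
    └── <client>/      # ICP, personas, messaging per client
```

Because every piece lives in the repo, loading the repo loads the grounding; nothing is left for the model to invent.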
ANTI-PATTERN

When Grounding Fails

Grounding fails when you assume the model knows things it does not. Common failures: asking Claude to write in your voice without loading the voice guide. Asking it to qualify leads without loading the ICP definition. Asking it to follow a workflow without loading the skill file. Each of these forces Claude to guess. And Claude is very good at guessing confidently, which makes the failures harder to catch. If the output looks plausible but feels off, the problem is almost always missing context. Not a bad prompt. Not a bad model. Missing context.
FORMULA

The Grounding Checklist

Before giving Claude any non-trivial task, check these five boxes:

1. Does Claude have the domain-specific data it needs? (ICP, personas, product docs)
2. Does Claude know what voice or tone to use? (voice guide loaded)
3. Does Claude know the output format? (examples or schema provided)
4. Has Claude stated its assumptions? (ask it before it executes)
5. Is the task complexity matched to the context depth? (simple task = light context, complex task = full context)

If you check all five, the output will be grounded. If you skip any, you are gambling on Claude's guesses.
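The five-box check is mechanical enough to script. A minimal sketch, assuming a plain dict as the context bundle; the field names are hypothetical and should be adapted to whatever your context repository actually holds:

```python
# Sketch of the five-box grounding check: report which boxes are unchecked
# before a prompt is sent. Field names are illustrative assumptions.

REQUIRED = {
    "domain_data":   "domain-specific data (ICP, personas, product docs)",
    "voice_guide":   "voice or tone guide",
    "output_format": "output format (examples or schema)",
    "assumptions":   "model's stated assumptions",
    "context_depth": "context depth matched to task complexity",
}

def grounding_gaps(context: dict) -> list[str]:
    """Return the human-readable names of any unchecked boxes."""
    return [label for key, label in REQUIRED.items() if not context.get(key)]

gaps = grounding_gaps({"domain_data": "icp.md", "voice_guide": "voice.md"})
print(gaps)  # the three boxes still unchecked
```

If the list is non-empty, you are gambling; load the missing context before running the task.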

knowledge guide
See "Context" in Knowledge

related entries
Context Engineering · Context Repository · CLAUDE.md · Skills