$ man context-wiki/context-handoffs
Modes and Workflows · intermediate
Context Handoffs
How to pass context between sessions so agents never start from zero
What Context Handoffs Are
Claude Code sessions are stateless. Close the terminal and the context is gone. Every new session starts from zero. No memory of what you just built, what broke, what decisions you made, what is half-finished. Context handoffs fix this. At the end of every session, write a structured document that captures what happened, what changed, what still needs work, and what decisions were made. At the start of the next session, read that document first. Now the new session has full context of what just happened. Sessions stop resetting and start compounding. The handoff is not a nice-to-have. It is the difference between an AI that forgets everything and an AI that builds on yesterday.
ANTI-PATTERN
The Single-File Problem
The first version of context handoffs is always a single file. Something like ~/.claude/context-handoff.md. One file. Every session reads it on start, writes it on end. This works perfectly when you run one terminal at a time. It breaks the moment you open a second one. Terminal A finishes and writes its handoff. Terminal B finishes 30 seconds later and overwrites Terminal A. Terminal A's context is gone. You never notice until the next morning, when a session picks up with half the context missing. This is a classic last-write-wins race condition. And it gets worse as you scale. I run 4 to 6 Claude Code terminals simultaneously. With a single handoff file, 3 to 5 sessions' worth of context get silently destroyed every day. The system looks like it is working because you always see a handoff file. You just do not see what is missing from it.
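The race fits in four lines of shell. This is a minimal sketch; HANDOFF_FILE stands in for a fixed path like ~/.claude/context-handoff.md, and the messages are illustrative:

```shell
# Last-write-wins in miniature: two sessions write the same path,
# and the later write silently destroys the earlier one.
HANDOFF_FILE=$(mktemp)
echo "Terminal A: refactored auth, tests still failing" > "$HANDOFF_FILE"  # session A ends
echo "Terminal B: bumped deps, lockfile regenerated" > "$HANDOFF_FILE"     # session B ends 30s later
cat "$HANDOFF_FILE"  # only Terminal B's context survives
```

Nothing errors, nothing warns. The file always exists and always looks like a valid handoff, which is exactly why the loss goes unnoticed.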
PATTERN
The Parallel-Safe Architecture
The fix is simple. Stop overwriting one file. Write timestamped files to a directory instead. Each session writes to ~/.claude/handoffs/YYYY-MM-DD_HHMMSS_slug.md. The timestamp guarantees uniqueness. No two sessions can collide. On session start, read ALL unconsumed handoffs in the directory. Merge the context from every file. After reading, rename each file from file.md to file_done.md so it does not get read again. A cleanup job deletes consumed handoffs older than 7 days. The entire architecture is 4 operations: write a uniquely-named file, read all unread files, mark files as consumed, clean up old files. No database. No lock files. No coordination between sessions. Each session operates independently and the directory handles the merge.
CODE
The Flow
Session lifecycle with parallel-safe handoffs:
Session Start:
ls ~/.claude/handoffs/*.md (skip *_done.md)
--> read each unconsumed handoff
--> rename file.md to file_done.md
--> merge all context into current session
Session End:
--> write ~/.claude/handoffs/YYYY-MM-DD_HHMMSS_slug.md
--> never overwrite another session's file
Cleanup (periodic):
find ~/.claude/handoffs -name '*_done.md' -mtime +7 -delete
Parallel safety comes from the naming convention. Two sessions ending at the same second would need the same slug to collide. In practice this never happens because the slug describes the work and the timestamp is second-precise.
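The read-and-consume step above can be sketched as a small shell helper. This is illustrative, not the canonical implementation: consume_handoffs is a hypothetical name, and the "merge" here is simply concatenating each file to stdout:

```shell
# Session-start: read every unconsumed handoff, then mark it consumed.
HANDOFF_DIR="${HANDOFF_DIR:-$HOME/.claude/handoffs}"

consume_handoffs() {
    for f in "$HANDOFF_DIR"/*.md; do
        [ -e "$f" ] || continue                   # directory may be empty
        case "$f" in *_done.md) continue ;; esac  # skip already-consumed files
        cat "$f"                                  # "merge": feed into current context
        mv "$f" "${f%.md}_done.md"                # mark consumed so it is read once
    done
}

consume_handoffs
```

The rename is the whole consumption protocol: a file either ends in _done.md or it does not, so no session ever needs to ask another session what has been read.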
FORMULA
Structuring the Handoff Document
Every handoff document needs five sections:
1. Session summary. One paragraph. What was the goal and what happened.
2. What changed. Files created, modified, deleted. Specific paths.
3. What still needs work. Unfinished tasks, known bugs, next steps.
4. Key decisions. Architectural choices, tradeoffs, things the next session should not revisit.
5. Active context. Branch name, running processes, environment state, anything the next session needs to know about the current machine state.
Keep it factual. No commentary, no opinions, no narrative. The handoff is a status report, not a blog post. A session reading it at 6 AM does not need your feelings about the refactor. It needs to know which files changed and what broke.
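A session-end hook could stamp out those five sections with a heredoc. A minimal sketch; the directory, slug, and placeholder bullets are illustrative:

```shell
# Session-end: write the five-section skeleton to a timestamped file.
HANDOFF_DIR="${HANDOFF_DIR:-$HOME/.claude/handoffs}"
mkdir -p "$HANDOFF_DIR"
handoff="$HANDOFF_DIR/$(date +%Y-%m-%d_%H%M%S)_example-slug.md"
cat > "$handoff" <<'EOF'
## Session summary
One paragraph: what the goal was and what happened.

## What changed
- path/to/file (created | modified | deleted)

## What still needs work
- Unfinished tasks, known bugs, next steps

## Key decisions
- Choices the next session should not revisit

## Active context
- Branch name, running processes, environment state
EOF
```

Filling a fixed skeleton also keeps the document factual by construction: every heading asks for a status fact, and there is no section for narrative.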