TL;DR: LLM sessions are ephemeral — when context fills up or you close the window, everything learned is gone. Session handoffs are structured documents that bridge this gap, giving your AI assistant persistent memory that survives across sessions, tools, and team changes.
Who this is for: Anyone using AI coding assistants who's frustrated by re-explaining context every session. Works with Claude, Kiro, Cursor, or whatever comes next.
Part 2 of the Designing AI Teammates series. Part 1 covered onboarding an AI assistant using durable documentation. This article covers the next step: making that knowledge persist across the inevitable session boundaries.
The Problem
You've had a productive 3-hour session with your AI assistant. You've:
- Debugged a tricky race condition
- Decided on an architectural approach after discussing tradeoffs
- Learned that a particular library has quirks the AI now understands
- Built shared context about why certain code looks the way it does
Then the session ends. Maybe you hit the context limit. Maybe you closed your laptop. Maybe it's just tomorrow.
Everything is gone.
The next session starts fresh. The AI doesn't remember the race condition, the architectural decision, the library quirks, or the context you built. You're back to explaining basics.
This is the fundamental limitation of LLM sessions: they're stateless by design.
What About Built-In Memory Features?
Modern AI tools are adding memory and context management features:
Claude Code's /compact command summarizes the conversation to free up context space. It helps you work longer within a single session.
Built-in AI Memory (in various tools) persists facts across conversations. "Remember that I prefer TypeScript" carries forward.
Cursor and similar tools maintain some conversation history and context awareness.
These help. But they share limitations:
| Feature | What it solves | What it doesn't solve |
|---|---|---|
| Compacting | Extends single session | Doesn't survive session end |
| AI Memory | Persists preferences | Opaque — you can't see/search what's stored |
| Tool history | Recent conversation access | Locked to that specific tool |
The core problem remains: Your hard-won context is locked in a black box you don't control, can't search, and can't take with you.
What happens when you switch from Claude to Kiro? When you upgrade accounts? When a teammate needs to pick up where you left off?
The institutional memory disappears.
The Solution: Session Handoffs
Session handoffs are structured documents that capture session context in a format that's:
- Human-readable — You can review what was captured
- AI-readable — The next session can load and understand it
- Searchable — Grep works. Find that decision from 3 weeks ago.
- Tool-agnostic — Works with any AI assistant, current or future
- Version-controlled — Lives in git with your code
At the end of each work session, the AI creates a handoff document. At the start of the next session, the AI reads the most recent handoff. Context is restored in seconds.
What Gets Captured
A session handoff includes:
What Was Accomplished
Completed tasks and decisions made. Not just "fixed bug" but "fixed race condition in WebSocket reconnection by adding a mutex lock — decided against debouncing because we need guaranteed delivery."
Current State
Where things stand right now. Any running processes, partial implementations, or work in progress. "Auth flow is 80% complete — login works, logout not yet implemented, token refresh untested."
Lessons Learned
What worked, what didn't, mistakes to avoid. "Tried using library X for date parsing — it doesn't handle timezones correctly. Switched to library Y." This prevents the next session from making the same mistakes.
Next Steps
What needs to happen next. Clear, actionable items. "1. Implement logout endpoint. 2. Add token refresh logic. 3. Write tests for auth flow."
Key Files Modified
Quick reference for the next session. The AI can read these first to rebuild context efficiently.
Blockers
Unresolved problems that need attention. "Waiting on API credentials from third-party vendor. Can't test payment flow until received."
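Put together, a handoff assembled from the snippets in this section might look like this (a condensed, hypothetical example; the full template appears below):

```markdown
# Session Handoff - 2025-01-12 14:30

## What We Accomplished
- Fixed race condition in WebSocket reconnection by adding a mutex lock —
  decided against debouncing because we need guaranteed delivery

## Current State
- Auth flow is 80% complete — login works, logout not yet implemented,
  token refresh untested

## Lessons Learned
- Library X doesn't handle timezones correctly; switched to library Y

## Next Steps
- [ ] Implement logout endpoint
- [ ] Add token refresh logic
- [ ] Write tests for auth flow

## Blockers / Open Questions
- Waiting on API credentials from third-party vendor — can't test payment flow yet
```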
When to Create a Handoff
Approaching context limits: Claude Code will notify you when context is filling up. This is your cue — capture context before it's compacted or lost.
Natural stopping points: Completed a feature? Reached a milestone? Good time to checkpoint.
Before switching tasks: About to pivot from backend work to frontend? Capture where you left off so you can return cleanly.
End of work session: Closing your laptop for the day? Future-you (or future-AI) will thank you.
Before major decisions: About to choose between two architectural approaches? Document the tradeoffs you've discussed so you don't re-debate them next session.
The Workflow
Ending a Session
When it's time to wrap up:
- AI creates the handoff — Based on the session's work, Claude drafts the document
- Human reviews — Quick sanity check that key points are captured
- File gets saved — To `docs/session_handoffs/YYYYMMDD-HHMM-brief-description.md`
- Committed to git — Now it's versioned and backed up
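Those last two steps are ordinary shell and git. A minimal sketch, using the example filename from later in this article:

```bash
# Version the handoff alongside the code it describes
# (the filename here is the example used later in this article).
git add docs/session_handoffs/20250112-1430-auth-flow-implementation.md
git commit -m "docs: session handoff for auth flow implementation"
```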
Starting a New Session
- AI checks for handoffs — Looks in `docs/session_handoffs/` for the most recent
- Reads and acknowledges — "I see from the last session that you completed X and next steps are Y"
- Work continues — No re-explanation needed
The transition takes seconds instead of the 10-15 minutes of re-establishing context manually.
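Because the YYYYMMDD-HHMM prefix makes filenames sort chronologically, "most recent" is a one-liner. A minimal sketch of what the AI (or you) effectively does at session start:

```bash
# The YYYYMMDD-HHMM prefix makes lexicographic order chronological,
# so a reverse sort surfaces the newest handoff. The [0-9]* glob
# skips non-handoff files such as the template README.md.
latest=$(ls docs/session_handoffs/[0-9]*.md | sort -r | head -n 1)
cat "$latest"
```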
Short-Term Benefits
Immediate context restoration. No more "let me explain the codebase structure again." The handoff contains it.
Continuity on complex tasks. Multi-day features don't lose momentum. Each session picks up exactly where the last ended.
Reduced cognitive load. You don't have to remember everything to tell the AI. The handoff remembers.
Better compacting outcomes. When Claude does compact mid-session, the handoff supplements what might be lost in summarization.
Team handoffs. Someone else needs to pick up your work? Point them to the handoff. Works for human teammates too.
Long-Term Benefits
This is where session handoffs differentiate from built-in memory features:
Searchable Project Archeology
Three weeks from now: "When did we decide to use PostgreSQL instead of MongoDB?"
grep -r "PostgreSQL" docs/session_handoffs/
Found. The handoff from January 15th documents the decision and reasoning.
Built-in AI memory can't do this. You can't grep Cursor's brain.
Decision Documentation
Handoffs capture not just what was done but why. The "Lessons Learned" section preserves reasoning that would otherwise evaporate.
Six months later, someone asks: "Why does this code handle timezones so weirdly?"
The handoff explains: "Library X doesn't handle DST transitions. This workaround was added after 3 hours of debugging. Don't 'simplify' it."
Survives Tool Changes
Using Claude today. Codex 44 releases tomorrow and you want to try it. Your session handoffs work with any tool — they're just markdown files.
Your institutional memory isn't locked in a vendor's black box.
Survives Team Changes
New developer joins the team. New AI assistant starts fresh. The handoffs provide onboarding context for both.
Pattern Recognition
Over months, handoffs reveal patterns:
- Which areas of the codebase cause the most trouble?
- What decisions keep getting revisited?
- Where do sessions tend to get stuck?
This is organizational learning that no individual session could provide.
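Because handoffs are plain text, a little scripting can surface these patterns. A rough sketch, assuming the backtick-quoted file paths used in the template's "Key Files Modified" section below:

```bash
# Count how often each backtick-quoted path appears across handoffs.
# Files that keep showing up in "Key Files Modified" are hotspots.
grep -hoE '`[^`]+`' docs/session_handoffs/[0-9]*.md \
  | sort | uniq -c | sort -rn | head -10
```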
The Template
# Session Handoff - [Date] [Time]
## What We Accomplished
- [Completed task with context on approach taken]
- [Decision made and why]
## Current State
- [Where the work stands]
- [Any running processes or partial implementations]
## Lessons Learned
- [What worked well]
- [What didn't work / mistakes to avoid]
## Next Steps
- [ ] [Clear, actionable next task]
- [ ] [Another task]
## Key Files Modified
- `path/to/file.ts` - [what changed]
- `path/to/other.ts` - [what changed]
## Blockers / Open Questions
- [Unresolved problems]
- [Questions that need answers before proceeding]
Save to: docs/session_handoffs/YYYYMMDD-HHMM-brief-description.md
Example: docs/session_handoffs/20250112-1430-auth-flow-implementation.md
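If you'd rather not type the timestamp by hand, a small helper can stamp out the file. A sketch, assuming you keep a blank copy of the template at docs/session_handoffs/README.md (as suggested in Getting Started below); the new-handoff.sh name is just an example:

```bash
#!/usr/bin/env bash
# new-handoff.sh (hypothetical helper): stamp out a handoff file from
# the template, following the naming convention above.
# Usage: ./new-handoff.sh auth-flow-implementation
set -euo pipefail

desc="${1:?usage: new-handoff.sh brief-description}"
dir="docs/session_handoffs"
file="$dir/$(date +%Y%m%d-%H%M)-$desc.md"

mkdir -p "$dir"
cp "$dir/README.md" "$file"   # README.md holds the blank template
echo "Created $file - fill it in, then commit it."
```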
Making It Automatic
The real power comes when you don't have to remember to do this.
In Part 1, we described how Mother CLAUDE contains instructions that the AI follows automatically. Session handoffs are part of those instructions:
At natural stopping points, initiate a session handoff. Don't wait for the human to ask. You're a team member, not just a tool.
When context is filling up, Claude proactively says: "We're approaching the context limit. Let me create a session handoff before we continue."
The human's job shifts from "remember to create handoffs" to "review the handoff Claude created." That's a much lighter cognitive load.
Handoffs vs. Compacting: Complementary, Not Competing
A common question: "If Claude can compact the conversation, why do I need handoffs?"
They solve different problems:
Compacting is for within-session continuity. It summarizes to free up context space so you can keep working. It's automatic and invisible — you don't control what's kept or lost.
Handoffs are for cross-session continuity. They're explicit documentation of what matters, in a format you control and can search.
Best practice: Use both.
- Let compacting extend your session when needed
- Create handoffs at meaningful stopping points
- The handoff captures what compacting might lose
Common Objections
"This is overhead I don't have time for"
Creating a handoff takes 2-3 minutes. Re-establishing context without one takes 10-15 minutes. Every session.
More importantly: with the right setup, Claude creates the handoff. You just review it.
"I'll just remember where I left off"
You won't. Not the details. Not after a weekend. Not after switching between three projects.
And even if you remember, you have to explain it to the AI. The handoff does that for you.
"My AI tool has memory now"
Great — use it too. But:
- Can you search it?
- Can you see exactly what it stored?
- Will it work if you switch tools?
- Can a teammate access it?
Session handoffs answer yes to all of these.
"This is just documentation"
Yes. That's the point.
Documentation that's structured for AI consumption, lives with your code, and survives beyond any single session or tool. That's institutional memory.
Getting Started
Create the directory:
mkdir -p docs/session_handoffs

Save the template: Copy the template above to docs/session_handoffs/README.md

Tell your AI: Add to your CLAUDE.md (or equivalent):
At session end or when approaching context limits, create a handoff
document in docs/session_handoffs/ using the template in that directory.
Start using it: At the end of your next session, ask: "Let's create a session handoff before we wrap up."
Next session: "Check docs/session_handoffs/ for the most recent handoff and pick up where we left off."
After a few sessions, it becomes automatic. The AI knows to check for handoffs at session start and offer to create them at session end.
The Core Insight
LLM sessions are ephemeral. Your institutional memory shouldn't be.
Built-in memory features help, but they're black boxes locked to specific tools. Session handoffs give you transparent, searchable, portable memory that you control.
The goal isn't to fight the stateless nature of LLM sessions. It's to build infrastructure that works with it — documentation that bridges the gaps and accumulates knowledge over time.
This is Part 2 of the Designing AI Teammates series. Part 1 covered onboarding with Mother CLAUDE. Part 3 will cover giving your AI assistant responsibility for initiating quality checkpoints.
This article was written collaboratively with Claude, using the very session handoff process it describes, which is embedded in the Mother CLAUDE ecosystem that serves as the scaffolding for all of our projects.
Resources
The session handoff template and Mother CLAUDE system are open source:
- GitHub: github.com/Kobumura/mother-claude
- Template: templates/session-handoff.md
Licensed under CC BY 4.0. Free to use and adapt with attribution to Dorothy J. Aubrey.
Top comments (5)
The timing of stumbling on this is incredible. I've been wrestling with this exact frustration of having to re-explain everything after a context limit or switching tools. I was literally sketching out a system to capture and transfer session memory between LLMs when I saw this. You've articulated the problem perfectly and built such a clean solution. Huge props for making it open source too. This is the kind of infrastructure that should be built into every AI tool.
Thank you so much! The "re-explain everything" loop was driving me crazy - I'd finally get my AI Assistant up to speed on a complex issue, we'd start ROLLING, hit the hard brick context wall, and then spend the first 20 minutes of the next session just recreating that context.
Add four cross-cutting repos in three languages that all need to play nicely together, and… yeah, you get the picture. I may have invented a few new choice words along the way 😅
I'd love to hear what you were sketching out for your system - and see if there are ideas that I should incorporate. The handoff template is still evolving and I'm always looking for ways to make the "context resurrection" more seamless.
Totally agree on it being built into AI tools! Like, how is this not done yet?!
Thanks so much for your reply and for sharing your work.
My focus has been on solving the triggering problem and finding a way to automatically capture the context at the right moment, like when a tab closes or a context limit hits, to make the process seamless. Seeing your handoff template is incredibly helpful for the structured output side of things.
I'm going to keep exploring this space, and I'll definitely be keeping an eye on how Mother CLAUDE evolves. Thanks again for making it open source.
Actually, that's partially solved on my end - and I just realized I hadn't included it in the open source repo (until just now)!
In my shared CLAUDE.md (the Mother CLAUDE, which each project references), I tell Claude when to create handoffs (the instruction quoted in the article above).
So Claude proactively suggests them - I don't have to remember. It's not true auto-detection of the context limit, but it shifts the burden of remembering boring things off me - things that definitely add value but are thoroughly unsexy. Those triggering instructions had only been in my private setup, not the open-source Mother CLAUDE template; I've just pushed that fix so others adopting the system get the proactive behavior too.
I've poked around CLAUDE's hooks (docs.claude.com/en/docs/claude-cod...) and I think there's potential there - there's a SessionEnd hook that fires when a session ends. The catch is it only triggers on clean exits (like /exit or Ctrl+D). My usual situation is Windows deciding it's update time and just... closing everything. So that hook wouldn't save me there.
For now, I can just do /resume to pick up where I left off and trigger a handoff from there. But honestly, now I'm curious what other surprises are hiding in those hooks - might be time to really dig in and see what's possible!
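For anyone else poking around: wiring up that hook looks roughly like this. This is a sketch based on the hooks docs, so verify the schema against your current Claude Code version before relying on it:

```bash
# Sketch: register a SessionEnd hook in .claude/settings.json.
# NOTE: this overwrites settings.json - merge by hand if you have one.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "SessionEnd": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "date '+%Y-%m-%d %H:%M' >> docs/session_handoffs/.last-clean-exit"
          }
        ]
      }
    ]
  }
}
EOF
```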
Your comment ALSO got me thinking that I should really go back through my personal setup and see what else hasn't made it into the open-source repo yet. There might be some more articles in there too!
P.S. My roots are in open source - everything I learned as a baby dev came from the open source community. ❤️ I started out helping a local dog rescue build a site with PostNuke (a fork of PHPNuke - yes, I'm dating myself!). I ended up volunteering on the project and writing a bunch of mods to open it up to corporate/enterprise users who wanted to integrate with Microsoft Active Directory (I was young and dumb. 😄). I owe my entire career to open source - otherwise I would have had to become an attorney! So when you said "this is the kind of infrastructure that should be built into every AI tool" - that hit home. Sharing this stuff is just paying it forward.
That's awesome to hear, and thanks for pushing that fix. The proactive approach is exactly right. Good luck digging into those hooks - sounds like there's a lot of potential there. And that's a great open-source story - it's how the best tools get built. Cheers.