Memspan.ai: Portable Memory for LLMs


Over the last couple of weekends, I ran into a problem that had been quietly bothering me for a while.
I’ve used ChatGPT heavily for years. Over time, it accumulated a lot of useful context: how I work, what I’m building, preferences, projects, and long-running threads that make future conversations faster and more relevant.
Recently, I started spending more of my day-to-day time in Claude Code. Almost immediately, I felt the reset. None of that history came with me.
Every new session meant starting over. Reintroducing context. Re-explaining projects. Rebuilding working memory that already existed somewhere else.
That was the initial motivation behind memspan.
The Problem
LLMs don’t persist memory across sessions in any meaningful, portable way.
Some platforms offer memory features, but those memories are siloed. Your ChatGPT memory stays in ChatGPT. Your Claude context stays in Claude. Each tool ends up holding a partial view of who you are and what you’re working on.
The result is fragmentation. You’re building parallel relationships with multiple assistants, none of which can see the full picture.
My initial goal was simple: move my identity and long-term context from OpenAI into Claude Code so I wouldn’t be starting from zero again. Once I started working on it, it became clear the same approach could apply more broadly. This wasn’t just a Claude problem. It was a portability problem across LLMs and agent workflows.
The Approach
memspan is a file-based, portable memory system for LLMs.
There’s no database. No service. No backend. Just structured files that you own.
It extracts identity, saved memories, and selected conversation history from tools like ChatGPT, organizes them into Markdown and JSON, and lets you load that context into Claude Code sessions when needed.
Everything is opt-in. Nothing is automatic. You decide what context gets loaded and when.
The system is built around a few simple principles:
- File-based: Memory lives in plain files you can read, edit, version, and back up.
- Portable: Not tied to a single platform or vendor.
- Selective: You load only the context that makes sense for the session you’re in.
A Simple Memory Model
To keep things usable without exploding token usage, memspan uses a three-tier memory model:
- Core identity: always-available context about who you are, how you work, and how you prefer to communicate.
- Project or domain memory: session-selectable context tied to a specific project, framework, or area of work. You load this when you want deep continuity.
- Historical archive: longer conversation history stored outside the prompt and retrieved on demand.
This structure keeps the day-to-day experience lightweight while still allowing full continuity when it matters.
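To make the model concrete, here is one way the three tiers could map onto files on disk. The directory and file names below are illustrative guesses, not memspan's actual layout:

```
memspan/
  identity/
    core.json                 # core identity: always available
  projects/
    memspan.md                # project memory: loaded per session
  archive/
    2024-11-chatgpt-export/   # historical archive: retrieved on demand
```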
How It’s Used
In practice, memspan fits into a few common workflows:
- For quick questions, you load just identity so responses are tailored but lightweight.
- For focused project work, you load identity plus project context.
- As you work, the assistant can detect notable information and offer to save it for future sessions.
- Over time, your memory archive grows organically rather than being rebuilt each conversation.
The goal is not unlimited memory. It’s continuity without losing control.
Implementation Notes
memspan is intentionally simple.
At its core, it’s a collection of scripts and conventions:
- Identity files stored as structured JSON
- Memories stored as dated Markdown entries
- Optional project files and conversation correlations
- A small wrapper script that loads selected context into Claude Code using --append-system-prompt
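As a purely hypothetical illustration of the dated Markdown convention, a saved memory entry might look something like:

```markdown
# 2025-01-12: Working preferences

- Prefers concise answers with code examples
- Current focus: making ChatGPT context portable to Claude Code
```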
It does not interfere with Claude Code’s native features and does not replace them. Context loading is explicit and session-scoped.
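As a sketch of what such a wrapper could look like: the file locations, script structure, and `build_context` helper below are my assumptions, not memspan's actual code; only the `--append-system-prompt` flag comes from the description above.

```shell
#!/usr/bin/env sh
# Sketch of a context-loading wrapper. Paths, file names, and the
# build_context helper are illustrative assumptions, not memspan's code.

MEMSPAN_DIR="${MEMSPAN_DIR:-$HOME/memspan}"

# Concatenate core identity plus an optional per-project memory file.
build_context() {
  cat "$MEMSPAN_DIR/identity/core.json"
  if [ -n "${1:-}" ] && [ -f "$MEMSPAN_DIR/projects/$1.md" ]; then
    printf '\n\n'
    cat "$MEMSPAN_DIR/projects/$1.md"
  fi
}

# Loading is explicit and session-scoped: nothing happens unless you invoke
# this, and the appended context applies only to the session started here.
if command -v claude >/dev/null 2>&1; then
  claude --append-system-prompt "$(build_context "${1:-}")"
fi
```

Run with no argument to load identity only (the lightweight quick-question mode), or pass a project name to add that project's memory for deeper continuity.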
Status and Next Steps
This is version 0.1.0.
It solves the core problem I set out to address: making identity and long-term context portable across tools. There’s plenty of room to extend it further, including things like on-demand retrieval, semantic search, or different agent integrations, but those are secondary.
For now, the focus is keeping it understandable, hackable, and useful.
Closing
As LLMs become more embedded in daily work, continuity matters. Starting from scratch every session doesn’t scale, and vendor-locked memory creates more problems than it solves.
memspan is a small attempt to address that by keeping memory local, portable, and under your control.
The project is open source and available here:
https://github.com/ericblue/memspan
https://memspan.ai