Migration Guide
Why migrate?
If you’re currently using Anthropic’s MCP Memory server, your knowledge graph lives in a single JSONL file. mcp-memory stores the same data in SQLite with proper indexing, concurrency support, and vector embeddings, while keeping the underlying data model (entities, observations, relations) identical.
The migrate tool reads your existing JSONL file and imports every entity, observation, and relation into the SQLite database. It’s idempotent, fault-tolerant, and safe to run multiple times.
Source format: JSONL
The Anthropic MCP Memory server stores data in JSONL format — one JSON object per line. There are exactly two record types:
Entity records
An entity represents a node in the knowledge graph with a name, type, and a list of observations:
```json
{"type": "entity", "name": "Session 2026-03-21", "entityType": "Session", "observations": ["Decision: build MCP Memory v2", "Uses FastMCP framework"]}
```

Required fields:
| Field | Type | Description |
|---|---|---|
| `type` | string | Must be `"entity"` |
| `name` | string | Unique identifier for the entity |
| `entityType` | string | Entity type (defaults to `"Generic"` if omitted) |
| `observations` | string[] | List of observation strings (can be empty) |
Relation records
A relation connects two entities with a typed edge:
```json
{"type": "relation", "from": "MCP Memory v2", "to": "FastMCP", "relationType": "uses"}
```

Required fields:
| Field | Type | Description |
|---|---|---|
| `type` | string | Must be `"relation"` |
| `from` | string | Name of the source entity |
| `to` | string | Name of the target entity |
| `relationType` | string | Type of the relationship |
Example JSONL file
A typical file mixes both record types. Entities should appear before the relations that reference them:

```json
{"type": "entity", "name": "Project Alpha", "entityType": "Project", "observations": ["Started in March 2026", "Uses Python 3.12"]}
{"type": "entity", "name": "SQLite", "entityType": "Technology", "observations": ["Used for persistence"]}
{"type": "entity", "name": "FastMCP", "entityType": "Framework", "observations": ["MCP server framework"]}
{"type": "relation", "from": "Project Alpha", "to": "SQLite", "relationType": "uses"}
{"type": "relation", "from": "Project Alpha", "to": "FastMCP", "relationType": "built_with"}
```

Migration process
The migration runs in four sequential phases:
Phase 1: Read and parse
The JSONL file is read line by line. Each line is parsed as a standalone JSON object:

- If the line parses successfully, the record is classified as an entity or relation by its `type` field.
- If JSON parsing fails (corrupt line, encoding issue), a warning is logged and the line is skipped. The line is counted in the `errors` total.
- Blank lines are ignored.
This phase is purely in-memory — no database writes happen yet. The parsed records are queued for the next phases.
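The read-and-classify loop can be sketched as follows, assuming nothing beyond the rules just listed (this is an illustration, not the actual `mcp_memory.migrate` code):

```python
import json

def read_jsonl(lines):
    """Classify JSONL lines into entities, relations, and an error count."""
    entities, relations, errors = [], [], 0
    for line in lines:
        line = line.strip()
        if not line:                     # blank lines are ignored
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            errors += 1                  # corrupt line: warn and skip
            continue
        # classify by the record's type field
        if record.get("type") == "entity":
            entities.append(record)
        elif record.get("type") == "relation":
            relations.append(record)
    return entities, relations, errors

sample = [
    '{"type": "entity", "name": "A"}',
    '',
    'not json',
    '{"type": "relation", "from": "A", "to": "B", "relationType": "uses"}',
]
entities, relations, errors = read_jsonl(sample)
```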
Phase 2: Import entities
Each entity record is processed through `upsert_entity`, which uses SQLite’s `ON CONFLICT(name) DO UPDATE`:

- New entity: inserted into the `entities` table.
- Existing entity: observations are merged; only observations that don’t already exist are added. The entity type is updated if it differs.
This means you can safely run migration on a file that partially overlaps with data already in the database.
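The upsert-and-merge behavior can be sketched with `sqlite3`. The two-table layout (`entities` plus an `observations` table with a uniqueness constraint) is an assumption for illustration; the real mcp-memory schema may differ:

```python
import sqlite3

# Assumed layout, not the actual mcp-memory schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entities (name TEXT PRIMARY KEY, entity_type TEXT)")
db.execute(
    "CREATE TABLE observations (entity_name TEXT, content TEXT, "
    "UNIQUE (entity_name, content))"
)

def upsert_entity(name, entity_type, observations):
    """Insert a new entity, or update an existing one and merge observations."""
    db.execute(
        """INSERT INTO entities (name, entity_type) VALUES (?, ?)
           ON CONFLICT(name) DO UPDATE SET entity_type = excluded.entity_type""",
        (name, entity_type),
    )
    for obs in observations:
        # the UNIQUE constraint silently discards duplicate observations
        db.execute(
            "INSERT OR IGNORE INTO observations (entity_name, content) VALUES (?, ?)",
            (name, obs),
        )

# Second call overlaps the first: only the one new observation is added.
upsert_entity("Project Alpha", "Project", ["Started in March 2026"])
upsert_entity("Project Alpha", "Project", ["Started in March 2026", "Uses Python 3.12"])

entity_count = db.execute("SELECT COUNT(*) FROM entities").fetchone()[0]
obs_count = db.execute("SELECT COUNT(*) FROM observations").fetchone()[0]
```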
Phase 3: Import relations
Each relation record is validated before insertion:

- Both the `from` and `to` entities must exist in the database.
- All required fields (`from`, `to`, `relationType`) must be present.
- The relation must not already exist (enforced by a `UNIQUE` constraint on `(from_entity, to_entity, relation_type)`).
If any check fails, the relation is skipped and counted in the `skipped` total. If the relation is valid, it’s inserted via `create_relation`, which catches `IntegrityError` on duplicate constraints for safety.
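The three checks can be sketched against an assumed schema (the column names `from_entity`, `to_entity`, `relation_type` come from the constraint described above; everything else here is illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entities (name TEXT PRIMARY KEY)")
db.execute(
    "CREATE TABLE relations (from_entity TEXT, to_entity TEXT, relation_type TEXT, "
    "UNIQUE (from_entity, to_entity, relation_type))"
)
db.executemany("INSERT INTO entities VALUES (?)", [("A",), ("B",)])

def import_relation(record) -> str:
    """Return 'imported' or 'skipped' for one relation record."""
    # all required fields must be present
    if not all(record.get(k) for k in ("from", "to", "relationType")):
        return "skipped"
    # both endpoints must already exist in the database
    for name in (record["from"], record["to"]):
        if db.execute("SELECT 1 FROM entities WHERE name = ?", (name,)).fetchone() is None:
            return "skipped"
    try:
        db.execute(
            "INSERT INTO relations VALUES (?, ?, ?)",
            (record["from"], record["to"], record["relationType"]),
        )
    except sqlite3.IntegrityError:
        return "skipped"   # duplicate relation
    return "imported"

results = [import_relation(r) for r in [
    {"from": "A", "to": "B", "relationType": "uses"},
    {"from": "A", "to": "B", "relationType": "uses"},   # duplicate
    {"from": "A", "to": "C", "relationType": "uses"},   # "C" does not exist
]]
```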
Phase 4: Batch embedding generation
If the embedding engine is available (model downloaded; see Getting Started), embeddings are generated for all imported entities in a single batch at the end of the migration.
This is significantly more efficient than generating embeddings one-by-one during import. The batch approach:
- Groups all entity text into a single ONNX inference pass.
- Uses `INSERT OR REPLACE` on the `rowid`; existing embeddings are overwritten with fresh vectors.
- If the embedding engine is not available, this phase is silently skipped. The migration still succeeds, and embeddings will be generated later when entities are accessed via `search_semantic`.
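The overwrite semantics can be sketched as follows. Storing vectors as packed float32 blobs keyed by an integer primary key (which aliases SQLite's `rowid`) is an assumed encoding, not necessarily what mcp-memory uses internally:

```python
import sqlite3
import struct

db = sqlite3.connect(":memory:")
# `id INTEGER PRIMARY KEY` aliases SQLite's rowid.
db.execute("CREATE TABLE embeddings (id INTEGER PRIMARY KEY, vector BLOB)")

def store_embedding(entity_rowid, vector):
    """Overwrite any existing embedding for this rowid with a fresh vector."""
    blob = struct.pack(f"{len(vector)}f", *vector)  # assumed float32 encoding
    db.execute(
        "INSERT OR REPLACE INTO embeddings (id, vector) VALUES (?, ?)",
        (entity_rowid, blob),
    )

store_embedding(1, [0.1, 0.2, 0.3])   # first migration run
store_embedding(1, [0.4, 0.5, 0.6])   # re-run: row replaced, not duplicated

count = db.execute("SELECT COUNT(*) FROM embeddings").fetchone()[0]
fresh = struct.unpack("3f", db.execute("SELECT vector FROM embeddings").fetchone()[0])
```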
How to run the migration
You have two options: the MCP tool (recommended for most users) or a Python script (for programmatic access).
Option 1: Via MCP tool
If you’re using an MCP client (OpenCode, Claude Desktop, etc.), call the `migrate` tool with the path to your JSONL file:
```json
{ "source_path": "~/.config/opencode/mcp-memory.jsonl" }
```

The tool expands `~` to your home directory automatically. The path must point to an existing, readable file.
The tool returns a JSON result (see Result format below).
Option 2: Via Python script
For scripting or CI pipelines, import the migration function directly:
```python
from mcp_memory.storage import MemoryStore
from mcp_memory.migrate import migrate_jsonl

store = MemoryStore()
store.init_db()

result = migrate_jsonl(store, "~/.config/opencode/mcp-memory.jsonl")
print(result)
```

The `migrate_jsonl` function accepts the same path with `~` expansion. It returns a dictionary with the same fields as the MCP tool response.
You can also run it as a one-liner from the repository root:
```bash
uv run python -c "from mcp_memory.storage import MemoryStore; from mcp_memory.migrate import migrate_jsonl; store = MemoryStore(); store.init_db(); result = migrate_jsonl(store, '~/.config/opencode/mcp-memory.jsonl'); print(result)"
```

Idempotency
The migration is idempotent: running it multiple times produces the same result without duplicating data. This is safe because every write operation includes a conflict resolution mechanism:
| Operation | Mechanism | Effect on repeated runs |
|---|---|---|
| Insert entity | `ON CONFLICT(name) DO UPDATE` | Existing entities are updated; new observations are merged |
| Add observation | Existence check before insert | Duplicate observations are discarded silently |
| Create relation | `IntegrityError` caught on `UNIQUE` constraint | Duplicate relations are ignored |
| Store embedding | `INSERT OR REPLACE` on `rowid` | Existing embedding vector is overwritten with a fresh one |
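You can see why repeated runs are safe with a toy version of the entity upsert from the table above; running the same pass twice leaves exactly one row:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entities (name TEXT PRIMARY KEY, entity_type TEXT)")

def run_pass():
    """One simulated entity import using the conflict rule from the table."""
    db.execute(
        """INSERT INTO entities (name, entity_type) VALUES ('Project Alpha', 'Project')
           ON CONFLICT(name) DO UPDATE SET entity_type = excluded.entity_type"""
    )

run_pass()
first = db.execute("SELECT COUNT(*) FROM entities").fetchone()[0]
run_pass()   # second run hits the conflict path; nothing is duplicated
second = db.execute("SELECT COUNT(*) FROM entities").fetchone()[0]
```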
Error handling
The migration is designed to be fault-tolerant; it processes as many records as possible rather than failing on the first error:
| Error condition | Behavior | Counted as |
|---|---|---|
| Corrupt line (invalid JSON) | Skipped with warning | `errors` |
| Entity without `name` field | Skipped | `skipped` |
| Relation with missing fields (`from`, `to`, or `relationType`) | Skipped | `skipped` |
| Relation referencing non-existent entity | Skipped | `skipped` |
| Unknown record type (not `"entity"` or `"relation"`) | Skipped | `skipped` |
| Individual embedding failure | Logged as warning, migration continues | Not counted |
The migration never raises an exception for individual record failures. All issues are captured in the result counts.
Result format
The `migrate` tool returns a JSON object with four fields:
```json
{ "entities_imported": 32, "relations_imported": 37, "errors": 0, "skipped": 2 }
```

| Field | Type | Description |
|---|---|---|
| `entities_imported` | int | Entity records successfully processed (inserted or updated) |
| `relations_imported` | int | Relations actually created in the database |
| `errors` | int | Lines that failed JSON parsing or threw unexpected exceptions |
| `skipped` | int | Records skipped due to missing fields, non-existent target entities, or unknown type |
Post-migration verification
After migration completes, verify that your data was imported correctly:
1. Verify imported entities
Use `search_nodes` to confirm that entities were imported:
```json
{ "query": "" }
```

2. Test semantic search
If you downloaded the embedding model, verify that embeddings were generated by running a semantic query:
```json
{ "query": "your search term here", "limit": 10 }
```

If results come back with `limbic_score` and `distance` fields, embeddings are working. If you get an error about the model, see Getting Started for the model download instructions.
3. Spot-check specific entities
Use `open_nodes` to verify individual entities by name:
```json
{ "names": ["Session 2026-03-21", "Project Alpha"] }
```

Confirm that observations were merged correctly and no data was lost.
Common scenarios
Migrating from a default Anthropic install
The default location of the Anthropic MCP Memory JSONL file depends on your setup:
```bash
# OpenCode
~/.config/opencode/mcp-memory.jsonl

# Claude Desktop (macOS)
~/Library/Application Support/Claude/claude_memory.jsonl

# Claude Desktop (Linux)
~/.config/Claude/claude_memory.jsonl
```

Pass the correct path to the `migrate` tool or Python function.
Migrating incrementally
If your JSONL file is still being written to (e.g., you’re running both servers in parallel during a transition period), you can run the migration periodically. Because it’s idempotent, each run imports only the entities and observations that weren’t already present.
Large files
The migration processes the file line by line; it never loads the entire file into memory. A file with thousands of entities and relations will import without issue. The batch embedding generation phase is the slowest part, but it processes embeddings in chunks rather than all at once.
- Next: Tools Reference — parameters, responses, and edge cases for all 10 tools
- Also see: Getting Started — installation, configuration, and first steps