A build system for agent memory.

Conversations are sources. Prompts are build rules. Summaries and world models are artifacts. Change your approach without starting over.

uvx synix init

The problem

Agent memory hasn't converged. Mem0, Letta, Zep, LangMem — each bakes in a different architecture because the right one depends on your domain and changes as your agent evolves.

Synix lets you declare your memory architecture in Python, build it, then change it. Only affected layers rebuild. Trace any artifact back through the dependency graph to its source conversation.

|                      | Mem0      | Letta         | Zep         | LangMem         | Synix         |
|----------------------|-----------|---------------|-------------|-----------------|---------------|
| Approach             | API-first | Agent-managed | Temporal KG | Taxonomy-driven | Build system  |
| Incremental rebuilds |           |               |             |                 | Yes           |
| Provenance           |           |               |             |                 | Full chain    |
| Architecture changes | Migration | Migration     | Migration   | Migration       | Rebuild       |
| Schema               | Fixed     | Fixed         | Fixed       | Fixed           | You define it |

Synix is not a memory store. It's the build system that produces one.

How it works

Source files go in, a pipeline of LLM transforms processes them layer by layer, and typed artifacts come out — each tracked with a content-addressed fingerprint.

Sources: Drop in ChatGPT, Claude, or text exports — auto-detected by file structure.
Transforms: Typed Python classes (Source → EpisodeSummary → MonthlyRollup → CoreSynthesis).
Artifacts: Episode summaries, monthly rollups, topical rollups, core memory.
Projections: SearchIndex (FTS5 + optional semantic) and FlatFile for agent system prompts.
Lineage: Full provenance chain tracing every artifact back to its source conversations.

Synthesis operations

Every transform in Synix is a composition of four synthesis patterns. Choose the pattern that matches your data shape, provide a prompt, and Synix handles caching, provenance, and incremental rebuilds.

| Transform       | Pattern        | Use when…                                         |
|-----------------|----------------|---------------------------------------------------|
| MapSynthesis    | 1:1            | Each input gets its own LLM call                  |
| GroupSynthesis  | N:M            | Group inputs by metadata key, one output per group |
| ReduceSynthesis | N:1            | All inputs become a single output                 |
| FoldSynthesis   | N:1 sequential | Accumulate through inputs one at a time           |
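Stripped of caching and provenance, the four shapes reduce to familiar list operations. A minimal sketch in plain Python, where `synthesize` stands in for an LLM call and all helper names are hypothetical, not the Synix API:

```python
from itertools import groupby

def synthesize(prompt: str) -> str:
    # Stub: a real transform would send this prompt to an LLM.
    return f"<synthesized from {len(prompt)} chars>"

def map_synthesis(items):             # 1:1 — one call per input
    return [synthesize(item) for item in items]

def group_synthesis(items, key):      # N:M — one call per group
    items = sorted(items, key=key)
    return {k: synthesize("\n".join(g)) for k, g in groupby(items, key=key)}

def reduce_synthesis(items):          # N:1 — one call over all inputs at once
    return synthesize("\n".join(items))

def fold_synthesis(items, acc=""):    # N:1 sequential — carry an accumulator
    for item in items:
        acc = synthesize(acc + "\n" + item)
    return acc
```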
from synix import Pipeline, Source, SearchIndex
from synix.ext import MapSynthesis, ReduceSynthesis

pipeline = Pipeline("my-pipeline")
pipeline.source_dir = "./sources"
pipeline.build_dir = "./build"
pipeline.llm_config = {
    "provider": "anthropic",
    "model": "claude-haiku-4-5-20251001",
    "temperature": 0.3,
    "max_tokens": 1024,
}

bios = Source("bios", dir="./sources/bios")

work_styles = MapSynthesis(
    "work_styles",
    depends_on=[bios],
    prompt="Infer this person's work style in 2-3 sentences:\n\n{artifact}",
    artifact_type="work_style",
)

report = ReduceSynthesis(
    "report",
    depends_on=[work_styles],
    prompt="Write a team analysis from these profiles:\n\n{artifacts}",
    label="team-report",
    artifact_type="report",
)

pipeline.add(bios, work_styles, report)
pipeline.add(SearchIndex("search", sources=[work_styles, report], search=["fulltext"]))

The built-in transforms (EpisodeSummary, MonthlyRollup, TopicalRollup, CoreSynthesis) are pre-configured compositions of these patterns — ready to use for common agent memory architectures.

Built-in transforms

Pre-configured compositions of the synthesis patterns above. Import from synix.transforms and wire them together with depends_on.

| Class          | What it does                                  |
|----------------|-----------------------------------------------|
| EpisodeSummary | 1 transcript → 1 episode summary              |
| MonthlyRollup  | Group episodes by month, synthesize each      |
| TopicalRollup  | Group episodes by user-defined topics         |
| CoreSynthesis  | All rollups → single core memory document     |
| Merge          | Group artifacts by content similarity (Jaccard) |

Case study

14 months of conversations. One build.

1,871 conversations across Claude and ChatGPT. Two export formats, one pipeline definition, four layers — from raw chat exports to structured core memory with full provenance.

Template: 01-chatbot-export-synthesis

What you get

Fingerprint-based caching

Every artifact stores a build fingerprint — inputs, prompt, model config, transform source. Change any component and only affected artifacts rebuild.
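The idea behind such a fingerprint can be sketched with a stable hash over every rebuild-relevant component. This is an illustrative sketch, not Synix's actual hashing scheme; the function and field names are hypothetical:

```python
import hashlib
import json

def fingerprint(input_hashes, prompt, model_config, transform_source):
    """Hash everything that could change an artifact's content.
    If any component changes, the fingerprint changes and the
    artifact must rebuild; otherwise the cached copy is valid."""
    payload = json.dumps({
        "inputs": sorted(input_hashes),   # order-insensitive input set
        "prompt": prompt,
        "model": model_config,
        "transform": transform_source,
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

cfg = {"provider": "anthropic", "model": "claude-haiku-4-5-20251001"}
a = fingerprint(["h1"], "Summarize:\n{artifact}", cfg, "v1")
b = fingerprint(["h1"], "Summarize:\n{artifact}", cfg | {"temperature": 0.3}, "v1")
# a != b: touching the model config alone invalidates the cache entry.
```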

Cache explainability

synix plan --explain-cache shows inline reasons for every cache hit or miss, so you can see exactly which fingerprint component caused a rebuild.

Altitude-aware search

Query episode summaries, monthly rollups, or core memory. Full-text and semantic/hybrid modes. Drill into provenance from any result.
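The full-text side of this can be approximated with SQLite directly. A minimal sketch, assuming your Python's bundled SQLite was compiled with FTS5 (standard in most CPython builds); the table layout here is illustrative, not Synix's index schema:

```python
import sqlite3

# One row per artifact, tagged with its layer ("altitude").
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE idx USING fts5(layer, content)")
con.executemany("INSERT INTO idx VALUES (?, ?)", [
    ("episode", "Discussed the return policy for damaged items."),
    ("monthly", "January themes: returns, shipping delays."),
    ("core",    "User frequently handles customer-support questions."),
])

# Full-text match; a layer filter would restrict the search altitude.
rows = con.execute(
    "SELECT layer FROM idx WHERE idx MATCH 'return OR returns'"
).fetchall()
```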

Validation and repair Experimental

Detect semantic contradictions and PII leaks across artifacts, then fix them with LLM-assisted rewrites. APIs and output formats may change.
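Pattern-based PII detection of this kind boils down to a few regexes. The patterns below are loose, hypothetical approximations for illustration, not the validator's actual rules:

```python
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(text):
    """Return the sorted kinds of PII found in an artifact's content."""
    return sorted(kind for kind, pat in PII_PATTERNS.items() if pat.search(text))
```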

Quick start

# Scaffold a working project
$ uvx synix init my-project
$ cd my-project

# Build the pipeline
$ uvx synix build

# Browse what was built
$ uvx synix list
$ uvx synix show final-report          # resolves by prefix
$ uvx synix show final-report --raw    # full JSON with artifact IDs

# See what would rebuild and why
$ uvx synix plan --explain-cache

# Search and validate
$ uvx synix search "return policy"
$ uvx synix search "warranty terms" --mode hybrid  # semantic search
$ uvx synix validate                              # experimental

Built-in components

Layers are real Python objects. Import what you need, wire them together with depends_on.

Sources

Drop files into source_dir — auto-detected by file structure.

ChatGPT (.json): conversations.json exports. Handles regeneration branches.
Claude (.json): Claude conversation exports with chat_messages arrays.
Text / Markdown (.txt, .md): YAML frontmatter support. Auto-detects conversation turns.
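Structure-based auto-detection can be sketched as a simple heuristic over an export's top-level keys. This is an illustrative guess at the approach, not Synix's actual detector; ChatGPT exports carry a `mapping` tree per conversation, and Claude exports carry `chat_messages` arrays (as noted above):

```python
import json

def detect_source_format(text: str) -> str:
    """Guess an export's origin from its JSON structure."""
    try:
        data = json.loads(text)
    except ValueError:
        return "text"  # not JSON: treat as text/markdown
    records = data if isinstance(data, list) else [data]
    first = records[0] if records else {}
    if "chat_messages" in first:   # Claude conversation export
        return "claude"
    if "mapping" in first:         # ChatGPT conversations.json export
        return "chatgpt"
    return "unknown"
```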

Built-in Transforms

Pre-configured compositions of the synthesis patterns above. Import from synix.transforms:

EpisodeSummary: 1 transcript → 1 episode summary via LLM.
MonthlyRollup: Groups episodes by calendar month, synthesizes each via LLM.
TopicalRollup: Groups episodes by user-declared topics. Requires config={"topics": [...]}.
CoreSynthesis: All rollups → single core memory document. Respects context_budget.
Merge: Groups artifacts by content similarity (Jaccard), merges above threshold.
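Jaccard similarity over token sets, as used by Merge for grouping, is straightforward to sketch. The greedy grouping below is a hypothetical illustration of thresholded merging, not Synix's actual algorithm:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def group_similar(texts, threshold=0.5):
    """Attach each text to the first group whose representative
    is at least `threshold` similar; otherwise start a new group."""
    groups = []
    for t in texts:
        for g in groups:
            if jaccard(g[0], t) >= threshold:
                g.append(t)
                break
        else:
            groups.append([t])
    return groups
```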

Projections

Import from synix:

SearchIndex: SQLite FTS5 index across selected layers. Optional embedding support for semantic/hybrid search.
FlatFile: Renders artifacts as markdown, ready for agent system prompts.
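Rendering artifacts into a single prompt-ready document amounts to simple markdown templating. A minimal sketch; the field names and output layout are hypothetical, not FlatFile's actual format:

```python
def render_flat_file(artifacts):
    """Join artifacts into one markdown document, one section per
    artifact, suitable for pasting into an agent system prompt."""
    return "\n\n".join(
        f"## {a['label']}\n\n{a['content']}" for a in artifacts
    )
```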

Validators & Fixers Experimental

Import from synix.validators and synix.fixers. APIs may change.

PII: Detects credit cards, SSNs, emails, phone numbers in content.
SemanticConflict: LLM-based detection of contradictions across synthesized artifacts.
Citation: Verifies artifacts cite their source artifacts with valid URIs.
SemanticEnrichment: Resolves conflicts by rewriting with source context. Interactive approval.
CitationEnrichment: Adds missing citation references to artifacts.

CLI reference

synix init: Scaffold a new project with sources, pipeline, and README.
synix build: Run the pipeline. Only rebuilds what changed.
synix batch-build (experimental): Submit pipeline to OpenAI Batch API at 50% cost.
synix plan: Dry-run — show what would build. --explain-cache for cache decisions.
synix list: List all artifacts, optionally filtered by layer.
synix show: Display artifact content. Resolves by label or ID prefix. --raw for JSON.
synix search: Full-text search across indexed layers. --mode hybrid for semantic.
synix lineage: Show the full provenance chain for an artifact.
synix verify: Check build integrity — hashes and provenance.
synix validate (experimental): Run declared validators against build artifacts.
synix fix (experimental): LLM-assisted repair of validation violations.
synix clean: Delete the build directory.

Start building

Declare your memory architecture. Build it. Change it.

uvx synix init

View on GitHub