Universal AI memory backend with semantic search and a knowledge graph — give any AI model persistent, structured memory that survives sessions, systems, and restarts.
| Category | Tools |
|---|---|
| Memory | store_memory, search_memory, recall_context, add_knowledge, forget |
| Stats | memory_stats, index_status, embedding_status |
| Lifecycle | decay_sweep, decay_policy, re_embed |
| Contradictions | check_contradictions, resolve_contradiction |
| Tags | list_tags, tag_memory |
| Webhooks | webhook_subscribe, webhook_list |
| Plugins | plugin_list |
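MCP servers receive tool invocations as JSON-RPC `tools/call` requests. As an illustrative sketch, a `store_memory` call could look like the following; the argument names beyond those shown elsewhere in this README are assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_memory",
    "arguments": {
      "content": "User prefers TypeScript for new services",
      "type": "semantic",
      "importance": 0.8
    }
  }
}
```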
- Session start → recall_context("current task or question")
- During session → store_memory("decision or finding", type="episodic")
- Session end → store_memory("session summary", importance=0.8)
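The workflow above maps onto three tool calls. The stub below is a hypothetical in-memory stand-in used purely to illustrate the pattern; the real server performs semantic similarity search, not the substring matching shown here:

```python
# Minimal in-memory stand-in for the session workflow above.
# "EngramStub" is hypothetical -- the real server exposes these as MCP tools.
class EngramStub:
    def __init__(self):
        self.memories = []

    def store_memory(self, content, type="episodic", importance=0.5):
        # Persist one memory with its type and importance score
        self.memories.append(
            {"content": content, "type": type, "importance": importance}
        )

    def recall_context(self, query):
        # The real recall_context ranks by semantic similarity; this stub
        # just returns stored contents that share a word with the query.
        words = query.lower().split()
        return [m["content"] for m in self.memories
                if any(w in m["content"].lower() for w in words)]

engram = EngramStub()
# During session: record a decision as an episodic memory
engram.store_memory("Chose SQLite for persistence", type="episodic")
# Session end: store a summary with high importance
engram.store_memory("Session summary: set up the memory backend", importance=0.8)
# Next session start: recall relevant context
print(engram.recall_context("persistence"))  # ['Chose SQLite for persistence']
```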
- recall_context — primary recall tool. Assembles the most relevant memories for a query and returns formatted context ready to inject into prompts. Call it at the start of every session.
- store_memory — saves episodic events, semantic facts (type="semantic", concept="..."), or procedural patterns (type="procedural").
- search_memory — semantic similarity search with an optional type filter and threshold.
- check_contradictions — detects memories that conflict with a given memory ID.
- resolve_contradiction — resolves conflicts via a strategy: keep_newest, keep_oldest, keep_important, keep_both, or manual.
- decay_sweep — runs Ebbinghaus forgetting-curve decay; archives stale memories and consolidates old episodes into facts.

| Variable | Default | Description |
|---|---|---|
| ENGRAM_DB_PATH | ~/.engram/engram.db | SQLite database path |
| ENGRAM_NAMESPACE | (global) | Isolate memories per project |
| ENGRAM_EMBEDDING_MODEL | Xenova/all-MiniLM-L6-v2 | Local ONNX embedding model |
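A per-project setup using the variables above might look like this; the paths and namespace value are illustrative, not defaults:

```shell
# Example per-project configuration (variables from the table above)
export ENGRAM_DB_PATH="$HOME/projects/myapp/.engram/engram.db"
export ENGRAM_NAMESPACE="myapp"
export ENGRAM_EMBEDDING_MODEL="Xenova/all-MiniLM-L6-v2"
```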
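decay_sweep is described as applying the Ebbinghaus forgetting curve. A minimal sketch of what such a sweep could do, assuming the textbook retention curve R = e^(−t/S); the server's actual formula, stability handling, and archive threshold are not documented here:

```python
import math

def retention(age_days, stability=7.0):
    """Ebbinghaus retention in [0, 1]; larger stability S slows forgetting."""
    return math.exp(-age_days / stability)

def decay_sweep(memories, archive_below=0.2, today=30):
    """Partition memories into kept vs. archived by decayed retention.

    Each memory is a dict with a creation "day" and an "importance" score.
    Importance boosts effective stability, so important memories persist
    longer -- a plausible but assumed design choice.
    """
    keep, archived = [], []
    for m in memories:
        r = retention(today - m["day"], stability=7.0 * (1 + m["importance"]))
        (keep if r >= archive_below else archived).append(m)
    return keep, archived

mems = [{"day": 29, "importance": 0.1},   # fresh: high retention, kept
        {"day": 0, "importance": 0.1}]    # 30 days old: decayed, archived
keep, archived = decay_sweep(mems)
```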