Memory Infrastructure for AI Agents
Store facts, decisions, and context as memories. Retrieve them later — semantically, across graph edges, not just by keyword. Works with any LLM stack.
Example memories:

- User prefers dark mode and uses VSCode
- Project uses Turborepo + pnpm workspaces
- Auth flow uses JWT with refresh tokens
REST API: Direct HTTP from any language or curl. Best for custom integrations, scripts, and backend services.
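For example, a backend service can call the store endpoint from Python with nothing but the standard library. A minimal sketch: the URL, header, and payload shape mirror the quick-start below, and the `job_id` response field comes from it; everything else (function names, error handling) is illustrative.

```python
import json
import os
import urllib.request

API_BASE = "https://api.mem-brain.io/api/v1"

def build_request(path: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Build an authenticated JSON POST request for the Membrain API."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def store_memory(content: str, tags: list[str]) -> str:
    """Store a memory; the API answers 202 Accepted with a job_id."""
    req = build_request("/memories", {"content": content, "tags": tags},
                        os.environ["MEMBRAIN_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["job_id"]

if __name__ == "__main__":
    print(store_memory("User prefers dark mode",
                       ["type.preference", "domain.ui"]))
```

Because writes are asynchronous, the returned `job_id` is what you track, not the memory itself.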
MCP server: Local server exposing memory as MCP tools. Works natively in Cursor, Claude Code, and Agno.
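Registering an MCP server in a client is typically a small JSON entry. A sketch in the `mcpServers` shape used by Cursor-style config files; the command name `membrain-mcp` and the env var wiring are assumptions here, so use whatever the actual server package documents.

```json
{
  "mcpServers": {
    "membrain": {
      "command": "membrain-mcp",
      "env": { "MEMBRAIN_API_KEY": "mb_live_xxx" }
    }
  }
}
```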
LangChain integration: Drop-in memory layer for LangChain agents. Store and retrieve context across chains and sessions.
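Under the hood, a memory tool for an agent framework reduces to one search call. A hypothetical, standard-library-only sketch of such a tool body: the payload fields come from the quick-start search example, while the function names are made up for illustration. A LangChain agent would register something like `recall` via its tool API.

```python
import json
import os
import urllib.request

SEARCH_URL = "https://api.mem-brain.io/api/v1/memories/search"

def build_search_payload(query: str, k: int = 5) -> dict:
    """Payload fields match the quick-start search example."""
    return {"query": query, "k": k, "response_format": "interpreted"}

def recall(query: str, k: int = 5) -> str:
    """Tool body: search memories, return the LLM-ready response text.
    Wrap this with your framework's tool mechanism (e.g. a LangChain tool)."""
    req = urllib.request.Request(
        SEARCH_URL,
        data=json.dumps(build_search_payload(query, k)).encode("utf-8"),
        headers={"X-API-Key": os.environ["MEMBRAIN_API_KEY"],
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Because `response_format` is `"interpreted"`, the returned string can go straight into the agent's context rather than needing post-processing.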
Get started in 3 steps
1. Export your API key (keys start with mb_live_):

```bash
export MEMBRAIN_API_KEY="mb_live_xxx"
```

2. Store a memory:

```bash
curl -X POST https://api.mem-brain.io/api/v1/memories \
  -H "X-API-Key: $MEMBRAIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "User prefers dark mode", "tags": ["type.preference", "domain.ui"]}'
```

The API responds 202 Accepted with a job_id. The playground below polls it automatically.

3. Search your memories:

```bash
curl -X POST https://api.mem-brain.io/api/v1/memories/search \
  -H "X-API-Key: $MEMBRAIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "What UI preferences does this user have?", "k": 5, "response_format": "interpreted", "scope": ["domain.ui"], "rerank": false, "full_scope": false}'
```

Set response_format: "interpreted" to get a plain-language LLM summary you can inject directly into a system prompt.
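Outside the playground, callers poll the job themselves until it settles. A sketch of that loop: the job-status route is not given in this quick-start, so the fetch step is left injectable, and the terminal `status` values shown are assumptions.

```python
import time

def poll_job(job_id: str, fetch, interval: float = 0.5, attempts: int = 60) -> dict:
    """Poll an async write job until it reaches a terminal state.

    fetch(job_id) -> dict is injected because the quick-start only shows
    that a job_id is returned, not the status route; plug in a urllib or
    requests call against whatever route the API guide documents.
    The "completed"/"failed" status values are assumptions.
    """
    for _ in range(attempts):
        job = fetch(job_id)
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still pending after {attempts} polls")
```

Keeping the transport injectable also makes the backoff logic trivial to unit-test with a fake fetch function.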
How it works

Semantic storage
Every memory is embedded and stored as a node in a knowledge graph. Related memories are linked automatically at write time.
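The write-time linking can be pictured with a toy model: embed each memory as a vector, then connect the new node to any existing node whose embedding is similar enough. This is purely illustrative; the real service's embedding model, similarity measure, and linking rule are not documented here.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class MemoryGraph:
    """Toy knowledge graph: nodes are (text, embedding) pairs, and edges
    are added automatically at write time to any existing node whose
    similarity clears a threshold (threshold value is illustrative)."""

    def __init__(self, threshold: float = 0.8):
        self.nodes = []   # list of (text, vector)
        self.edges = []   # list of (existing_id, new_id) pairs
        self.threshold = threshold

    def add(self, text: str, vector: list[float]) -> int:
        new_id = len(self.nodes)
        for i, (_, v) in enumerate(self.nodes):
            if cosine(vector, v) >= self.threshold:
                self.edges.append((i, new_id))
        self.nodes.append((text, vector))
        return new_id
```

With this model, a later graph-aware search can follow `edges` outward from the best semantic match instead of relying on keyword overlap.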
Agentic API Guide
One Markdown file: HTTP auth, async jobs, search modes (including scope, rerank, full_scope), tool wrappers, and copy-ready prompts for LangChain, Vercel AI SDK, AutoGen, or your own stack.
Ready to try it live? Open the interactive playground →