Integrate Supermemory with Mastra to give your AI agents persistent memory. Use the `withSupermemory` wrapper for zero-config setup, or processors for fine-grained control.

Wrap your agent config with `withSupermemory` to add memory capabilities:
```typescript
import { Agent } from "@mastra/core/agent"
import { withSupermemory } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

// Create agent with memory-enhanced config
const agent = new Agent(withSupermemory(
  {
    id: "my-assistant",
    name: "My Assistant",
    model: openai("gpt-4o"),
    instructions: "You are a helpful assistant.",
  },
  {
    containerTag: "user-123", // Required: scopes memories to this user
    customId: "conv-456", // Required: groups messages for contextual memory
    mode: "full",
  }
))

const response = await agent.generate("What do you know about me?")
```
Memory saving is enabled by default: conversations are automatically saved to Supermemory. To disable saving, set `addMemory: "never"` (see Saving Conversations below).
---

## Memory Modes

### Full Mode

Combines profile AND query-based search for maximum context:

```typescript
const agent = new Agent(withSupermemory(config, {
  containerTag: "user-123",
  customId: "conv-456",
  mode: "full",
}))
```

### Mode Comparison

| Mode | Description | Use Case |
|------|-------------|----------|
| `profile` | Static + dynamic user facts | General personalization |
| `query` | Semantic search on user message | Specific Q&A |
| `full` | Both profile and search | Chatbots, assistants |

---

## Saving Conversations

Conversation saving is enabled by default (`addMemory: "always"`). Messages are grouped using the required `customId`:

```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  {
    containerTag: "user-123",
    customId: "conv-456", // Required: groups messages for contextual memory
  }
))

// All messages in this conversation are saved automatically
await agent.generate("I prefer TypeScript over JavaScript")
await agent.generate("My favorite framework is Next.js")
```
To disable automatic saving:
```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  {
    containerTag: "user-123",
    customId: "conv-456",
    addMemory: "never", // Only retrieve memories, don't save
  }
))
```
---

## Custom Templates

Customize how memories are formatted and injected. The template receives `userMemories`, `generalSearchMemories`, and `searchResults` (the raw result array, useful for filtering by metadata):
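A minimal sketch of a metadata-filtered template. The option name `promptTemplate` and the result fields `metadata` and `memory` are assumptions for illustration; only the three template inputs are documented above, so check the `@supermemory/tools` reference for the exact keys in your version:

```typescript
const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  {
    containerTag: "user-123",
    customId: "conv-456",
    mode: "full",
    // NOTE: "promptTemplate" is a placeholder option name
    promptTemplate: ({ userMemories, generalSearchMemories, searchResults }) => {
      // searchResults is the raw array, so it can be filtered by metadata.
      // The `metadata.category` and `memory` fields are assumed shapes.
      const preferences = searchResults
        .filter((result) => result.metadata?.category === "preference")
        .map((result) => `- ${result.memory}`)
        .join("\n")

      return [
        `User profile:\n${userMemories}`,
        `Related context:\n${generalSearchMemories}`,
        `Stated preferences:\n${preferences}`,
      ].join("\n\n")
    },
  }
))
```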
---

## Concurrent Conversations

For server setups where one agent instance handles multiple concurrent conversations, use Mastra's `RequestContext` to provide per-request thread IDs. `RequestContext` takes precedence over the construction-time `customId`:
```typescript
import { Agent } from "@mastra/core/agent"
import { RequestContext, MASTRA_THREAD_ID_KEY } from "@mastra/core/request-context"
import { withSupermemory } from "@supermemory/tools/mastra"
import { openai } from "@ai-sdk/openai"

const agent = new Agent(withSupermemory(
  { id: "my-assistant", model: openai("gpt-4o"), instructions: "..." },
  {
    containerTag: "user-123",
    customId: "fallback-conv", // Used only when RequestContext doesn't provide a threadId
    mode: "full",
  }
))

// Per-request threadId takes precedence over customId
const ctx = new RequestContext()
ctx.set(MASTRA_THREAD_ID_KEY, "user-456-session-789")

await agent.generate("Hello!", { requestContext: ctx })
// This conversation is stored under "user-456-session-789", not "fallback-conv"
```
**Server-side usage:** Always use `RequestContext` to pass a unique conversation ID per request. Using a fixed `customId` for all requests will merge conversations from different users.
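As a minimal sketch of that pattern (building on the imports and `agent` from the snippet above; `handleChatMessage` and its `userId`/`sessionId` parameters are hypothetical stand-ins for your own request handling and auth layer):

```typescript
// Build a fresh RequestContext per incoming request so that
// concurrent conversations never share a thread.
async function handleChatMessage(userId: string, sessionId: string, message: string) {
  const ctx = new RequestContext()
  // One thread per user + session keeps conversations isolated
  ctx.set(MASTRA_THREAD_ID_KEY, `${userId}-${sessionId}`)
  return agent.generate(message, { requestContext: ctx })
}

// Two concurrent requests are stored under separate threads
await Promise.all([
  handleChatMessage("user-456", "session-789", "Hello!"),
  handleChatMessage("user-007", "session-001", "Hi!"),
])
```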