Using Vercel AI SDK? Check out the AI SDK integration for the cleanest implementation with @supermemory/tools/ai-sdk.

Memory API
Step 1. Sign up for Supermemory’s Developer Platform to get the API key. Click on API Keys -> Create API Key to generate one.
Step 2. Install the Supermemory client.
Step 3. Run this in your terminal to create an environment variable with your API key.
Step 4. Import the module in your Python file.
Step 5. Add your first memory, then run your code and check the output.
Step 6. Search for this memory and check the results.

A combined sketch of steps 2 through 6 follows below.
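Here is a minimal sketch of those steps. The package name (supermemory), client class, and method names (memories.add, search.execute) are assumptions about the SDK's surface; check the current client reference if yours differs:

```bash
# Step 2: install the Supermemory Python client (package name assumed)
pip install supermemory

# Step 3: store your API key in an environment variable
export SUPERMEMORY_API_KEY="your-api-key"
```

```python
# Steps 4-6: import the client, add a memory, then search for it.
# Class and method names (Supermemory, memories.add, search.execute) are
# assumptions; adjust to match the SDK version you installed.
import os

from supermemory import Supermemory

client = Supermemory(api_key=os.environ["SUPERMEMORY_API_KEY"])

# Step 5: add your first memory
added = client.memories.add(content="My favorite color is blue.")
print(added)  # the response typically echoes an id and a processing status

# Step 6: search for the memory you just added
results = client.search.execute(q="What is my favorite color?")
print(results)
```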
Awesome! Now that you’ve made your first request, explore all of Supermemory’s features in detail and how you can use them in your app.

Memory Router
Learn how to add the Memory Router to your existing LLM requests. The Memory Router works as a proxy on top of your LLM calls. When conversations get very long, it automatically chunks them for optimal performance, retrieves the most relevant information from the history, and balances token usage and cost. The best part is that it requires no changes to your application logic. Here’s how to get started:

Step 1. Sign up for Supermemory’s Developer Platform to get the API key. Click on API Keys -> Create API Key to generate one.
Step 2. Get your LLM provider’s API key.
Step 3. Prefix your LLM provider’s OpenAI-compatible API URL with Supermemory’s router URL.
Step 4. Install the dependencies.
Step 5. Set two environment variables in your environment: one for Supermemory, and one for your model provider.
Step 6. Send a request to the updated endpoint, as in the sketch below.
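Here is a minimal sketch of steps 3 through 6, assuming OpenAI as the provider and the official openai Python SDK. The router URL pattern, model name, and user ID are assumptions or placeholders; substitute your provider’s OpenAI-compatible URL and your own values:

```bash
# Step 4: install the provider SDK (assuming OpenAI here)
pip install openai

# Step 5: one variable for Supermemory, one for your model provider
export SUPERMEMORY_API_KEY="your-supermemory-key"
export OPENAI_API_KEY="your-provider-key"
```

```python
# Steps 3 and 6: point the client at the Supermemory router, then send a request.
import os

from openai import OpenAI

client = OpenAI(
    # apiKey: your model provider's API key
    api_key=os.environ["OPENAI_API_KEY"],
    # Supermemory's router URL prefixed to the provider's OpenAI-compatible URL
    # (assumed pattern; confirm the exact URL in Supermemory's docs)
    base_url="https://api.supermemory.ai/v3/https://api.openai.com/v1",
    default_headers={
        "x-supermemory-api-key": os.environ["SUPERMEMORY_API_KEY"],
        "x-sm-user-id": "user-123",  # placeholder user ID
    },
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model your provider offers
    messages=[{"role": "user", "content": "Hi! My name is Ada."}],
)
print(response.choices[0].message.content)
```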
The base URL changes based on the OpenAI-compatible API URL given by your model provider. Some of the key parameters to note:

- apiKey: your model provider’s API key
- x-supermemory-api-key: your Supermemory API key
- x-sm-user-id: scopes conversations by user with a user ID. This will enable cross-conversation memory, meaning users can reference other chats and draw information from them.
You can also scope memory to a single thread by passing a conversation ID in the x-sm-conversation-id header. Then you won’t have to send the entire array of messages to the LLM as conversation history; Supermemory will handle it.

If you run the code above, you’ll get a normal response from your LLM. Now modify the request to ask ‘What is my name?’ instead, as in the sketch below: the response will recall the name from the earlier request. Thus, the Memory Router is working!
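As a sketch of that follow-up request, reusing the client configured in the router example above (the conversation ID and model name are placeholders, and extra_headers is the openai SDK’s per-request header hook):

```python
# Ask the model to recall information from the earlier request.
# x-sm-conversation-id scopes memory to one thread, so only the latest
# message needs to be sent; Supermemory supplies the relevant history.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is my name?"}],
    extra_headers={"x-sm-conversation-id": "conversation-123"},
)
print(response.choices[0].message.content)
```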