Add memory capabilities to the official OpenAI SDKs using Supermemory. Two approaches are available:
- withSupermemory wrapper - automatic memory injection into system prompts (zero-config)
- Function calling tools - explicit tool calls for search and add-memory operations
New to Supermemory? Start with withSupermemory for the simplest integration. It automatically injects relevant memories into your prompts.
withSupermemory Wrapper
The simplest way to add memory to your OpenAI client. Wraps your client to automatically inject relevant memories into system prompts.
Installation
```shell
npm install @supermemory/tools openai
```
Quick Start
```typescript
import OpenAI from "openai"
import { withSupermemory } from "@supermemory/tools/openai"

const openai = new OpenAI()

// Wrap the client with memory - memories are auto-injected into system prompts
const client = withSupermemory(openai, "user-123", {
  mode: "full", // "profile" | "query" | "full"
  addMemory: "always", // "always" | "never"
})

// Use normally - memories are automatically included
const response = await client.chat.completions.create({
  model: "gpt-5",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's my favorite programming language?" },
  ],
})
```
Configuration Options
```typescript
const client = withSupermemory(openai, "user-123", {
  // Memory search mode
  mode: "full", // "profile" (user profile only), "query" (search only), "full" (both)

  // Auto-save conversations as memories
  addMemory: "always", // "always" | "never"

  // Group messages into conversations
  conversationId: "conv-456",

  // Enable debug logging
  verbose: true,

  // Custom API endpoint
  baseUrl: "https://custom.api.com",
})
```
Modes Explained
| Mode | Description | Use Case |
| --- | --- | --- |
| `profile` | Injects user profile (static + dynamic facts) | General personalization |
| `query` | Searches memories based on the user message | Question answering |
| `full` | Both profile and query-based search | Best for chatbots |
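Conceptually, each mode controls which memory context gets folded into the system prompt. A minimal sketch of that behavior (the `inject` helper is hypothetical and for illustration only, not the wrapper's actual internals):

```python
def inject(system_prompt: str, mode: str, profile: list[str], memories: list[str]) -> str:
    """Illustrative only: combine memory context into a system prompt per mode."""
    parts = [system_prompt]
    # "profile" and "full" include the stored user profile
    if mode in ("profile", "full") and profile:
        parts.append("User profile:\n" + "\n".join(profile))
    # "query" and "full" include memories retrieved for the current message
    if mode in ("query", "full") and memories:
        parts.append("Relevant memories:\n" + "\n".join(memories))
    return "\n\n".join(parts)

prompt = inject(
    "You are a helpful assistant.",
    mode="full",
    profile=["Prefers concise answers"],
    memories=["Favorite language: TypeScript"],
)
print(prompt)
```

With `mode: "full"`, both sections are appended; `"profile"` or `"query"` would keep only one of them.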
Works with Responses API Too
```typescript
const client = withSupermemory(openai, "user-123", { mode: "full" })

// Memories are injected into the instructions
const response = await client.responses.create({
  model: "gpt-5",
  instructions: "You are a helpful assistant.",
  input: "What do you know about me?",
})
```
Environment Variables
```shell
SUPERMEMORY_API_KEY=your_supermemory_key
OPENAI_API_KEY=your_openai_key
```
Function Calling Tools
For explicit control over memory operations, use function calling tools. The model decides when to search or add memories.
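The function-calling round trip can be sketched with a mock dispatcher (no network calls; the tool names and the `dispatch` helper here are illustrative, not part of either SDK): the model returns `tool_calls`, each one is executed, and its result is appended back to the conversation as a `tool` message.

```python
import json

def dispatch(tool_call: dict, handlers: dict) -> dict:
    """Execute one mock tool call and wrap the result as a 'tool' message."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    result = handlers[name](**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    }

# Hypothetical handlers standing in for the real memory operations
handlers = {
    "addMemory": lambda memory: {"status": "stored", "memory": memory},
    "searchMemories": lambda information_to_get, **kw: {"memories": []},
}

# A tool call shaped like what the model would return
call = {
    "id": "call_1",
    "function": {
        "name": "addMemory",
        "arguments": json.dumps({"memory": "Prefers tea over coffee"}),
    },
}

print(dispatch(call, handlers))
```

In the real SDKs, `execute_memory_tool_calls` (Python) and the executor from `createToolCallExecutor` (JavaScript) play the role of `dispatch`.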
Installation
Python
JavaScript/TypeScript
```shell
# Using uv (recommended)
uv add supermemory-openai-sdk

# Or with pip
pip install supermemory-openai-sdk
```
Quick Start
Python SDK
JavaScript/TypeScript SDK
```python
import asyncio

import openai
from supermemory_openai import SupermemoryTools, execute_memory_tool_calls


async def main():
    # Initialize OpenAI client
    client = openai.AsyncOpenAI(api_key="your-openai-api-key")

    # Initialize Supermemory tools
    tools = SupermemoryTools(
        api_key="your-supermemory-api-key",
        config={"project_id": "my-project"},
    )

    # Chat with memory tools
    response = await client.chat.completions.create(
        model="gpt-5",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant with access to user memories.",
            },
            {
                "role": "user",
                "content": "Remember that I prefer tea over coffee",
            },
        ],
        tools=tools.get_tool_definitions(),
    )

    # Handle tool calls if present
    if response.choices[0].message.tool_calls:
        tool_results = await execute_memory_tool_calls(
            api_key="your-supermemory-api-key",
            tool_calls=response.choices[0].message.tool_calls,
            config={"project_id": "my-project"},
        )
        print("Tool results:", tool_results)

    print(response.choices[0].message.content)


asyncio.run(main())
```
Configuration
Python Configuration
JavaScript Configuration
```python
from supermemory_openai import SupermemoryTools

tools = SupermemoryTools(
    api_key="your-supermemory-api-key",
    config={
        "project_id": "my-project",  # or use container_tags
        "base_url": "https://custom-endpoint.com",  # optional
    },
)
```
Search Memories
Search through user memories using semantic search:
```python
# Search memories
result = await tools.search_memories(
    information_to_get="user preferences",
    limit=10,
    include_full_docs=True,
)
print(f"Found {len(result.memories)} memories")
```
Add Memory
Store new information in memory:
```python
# Add memory
result = await tools.add_memory(
    memory="User prefers tea over coffee",
)
print(f"Added memory with ID: {result.memory.id}")
```
Use tools separately for more granular control:
Python Individual Tools
JavaScript Individual Tools
```python
from supermemory_openai import (
    create_search_memories_tool,
    create_add_memory_tool,
)

search_tool = create_search_memories_tool("your-api-key")
add_tool = create_add_memory_tool("your-api-key")

# Use individual tools in OpenAI function calling
tools_list = [search_tool, add_tool]
```
Complete Chat Example
Here’s a complete example showing a multi-turn conversation with memory:
Complete Python Example
Complete JavaScript Example
```python
import asyncio

import openai
from supermemory_openai import SupermemoryTools, execute_memory_tool_calls


async def chat_with_memory():
    client = openai.AsyncOpenAI()
    tools = SupermemoryTools(
        api_key="your-supermemory-api-key",
        config={"project_id": "chat-example"},
    )

    messages = [
        {
            "role": "system",
            "content": """You are a helpful assistant with memory capabilities.
When users share personal information, remember it using addMemory.
When they ask questions, search your memories to provide personalized responses.""",
        }
    ]

    while True:
        user_input = input("You: ")
        if user_input.lower() == "quit":
            break

        messages.append({"role": "user", "content": user_input})

        # Get AI response with tools
        response = await client.chat.completions.create(
            model="gpt-5",
            messages=messages,
            tools=tools.get_tool_definitions(),
        )

        # Handle tool calls
        if response.choices[0].message.tool_calls:
            messages.append(response.choices[0].message)
            tool_results = await execute_memory_tool_calls(
                api_key="your-supermemory-api-key",
                tool_calls=response.choices[0].message.tool_calls,
                config={"project_id": "chat-example"},
            )
            messages.extend(tool_results)

            # Get final response after tool execution
            final_response = await client.chat.completions.create(
                model="gpt-5",
                messages=messages,
            )
            assistant_message = final_response.choices[0].message.content
        else:
            assistant_message = response.choices[0].message.content

        messages.append({"role": "assistant", "content": assistant_message})
        print(f"Assistant: {assistant_message}")


# Run the chat
asyncio.run(chat_with_memory())
```
Error Handling
Handle errors gracefully in your applications:
Python Error Handling
JavaScript Error Handling
```python
import openai
from supermemory_openai import SupermemoryTools


async def safe_chat():
    try:
        client = openai.AsyncOpenAI()
        tools = SupermemoryTools(api_key="your-api-key")

        response = await client.chat.completions.create(
            model="gpt-5",
            messages=[{"role": "user", "content": "Hello"}],
            tools=tools.get_tool_definitions(),
        )
    except openai.APIError as e:
        print(f"OpenAI API error: {e}")
    except Exception as e:
        print(f"Unexpected error: {e}")
```
API Reference
Python SDK
Constructor
```python
SupermemoryTools(
    api_key: str,
    config: Optional[SupermemoryToolsConfig] = None,
)
```
Methods
- `get_tool_definitions()` - get OpenAI function definitions
- `search_memories(information_to_get, limit, include_full_docs)` - search user memories
- `add_memory(memory)` - add a new memory
- `execute_tool_call(tool_call)` - execute an individual tool call
```python
execute_memory_tool_calls(
    api_key: str,
    tool_calls: List[ToolCall],
    config: Optional[SupermemoryToolsConfig] = None,
) -> List[dict]
```
JavaScript SDK
```typescript
supermemoryTools(
  apiKey: string,
  config?: { projectId?: string; baseUrl?: string }
)

createToolCallExecutor(
  apiKey: string,
  config?: { projectId?: string; baseUrl?: string }
): (toolCall: OpenAI.Chat.ChatCompletionMessageToolCall) => Promise<any>
```
Environment Variables
Set these environment variables:
```shell
SUPERMEMORY_API_KEY=your_supermemory_key
OPENAI_API_KEY=your_openai_key
SUPERMEMORY_BASE_URL=https://custom-endpoint.com  # optional
```
Development
Python Setup
```shell
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Set up the project
git clone <repository-url>
cd packages/openai-sdk-python
uv sync --dev

# Run tests
uv run pytest

# Type checking
uv run mypy src/supermemory_openai

# Formatting
uv run black src/ tests/
uv run isort src/ tests/
```
JavaScript Setup
```shell
# Install dependencies
npm install

# Run tests
npm test

# Type checking
npm run type-check

# Linting
npm run lint
```
Next Steps