Build chat applications with unlimited context using supermemory’s intelligent proxy
## Getting an API Key

1. Create an account.
2. Create an API key:
   1. Navigate to **API Keys**.
   2. Click **Create API Key**.
   3. Choose a name and expiry (optional).
   4. Click **Create**.
   5. Copy the new key.
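Once you have a key, a common pattern is to keep it in an environment variable rather than hard-coding it. A minimal sketch; the variable name `SUPERMEMORY_API_KEY` is a convention chosen for this example, not a documented requirement:

```python
import os

def load_api_key() -> str:
    """Read the supermemory API key from the environment.

    SUPERMEMORY_API_KEY is an illustrative variable name, not one
    mandated by supermemory.
    """
    key = os.environ.get("SUPERMEMORY_API_KEY")
    if not key:
        raise RuntimeError(
            "SUPERMEMORY_API_KEY is not set; create a key in the "
            "console and export it before running your app."
        )
    return key
```

Failing fast on a missing key keeps misconfiguration errors close to startup instead of surfacing as authentication failures mid-request.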
## Key Features

- **Transparent Proxying**
- **Intelligent Chunking**
- **Smart Retrieval**
- **Automatic Token Management**

## Benefits

- **Reduced Token Usage**
- **Unlimited Context**
- **Improved Response Quality**
- **Zero Performance Penalty**
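Transparent proxying means you keep your existing OpenAI-compatible client and only change where requests are sent, attaching a conversation identifier so the proxy can track the thread. A sketch of assembling such a request; the proxy base URL shape, the `Bearer` authorization scheme, and the use of `x-supermemory-conversation-id` as a request header are assumptions to verify against your account's documentation:

```python
def build_proxy_request(
    proxy_base_url: str,
    provider_api_key: str,
    conversation_id: str,
    messages: list,
    model: str = "gpt-4o-mini",
) -> dict:
    """Assemble an OpenAI-style chat completion request routed
    through a supermemory-like proxy.

    The URL path and header names here are illustrative assumptions,
    not confirmed API details.
    """
    return {
        "url": f"{proxy_base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {provider_api_key}",
            # Ties this request to a conversation thread so the proxy
            # can chunk and retrieve prior context for it.
            "x-supermemory-conversation-id": conversation_id,
            "Content-Type": "application/json",
        },
        "json": {"model": model, "messages": messages},
    }
```

The returned dict maps directly onto the keyword arguments of most HTTP clients (e.g. `requests.post(req["url"], headers=req["headers"], json=req["json"])`).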
## Pricing

- **Free tier**: 100k tokens stored at no cost.
- **Fixed cost**: $20/month after exceeding the free tier.
- **Per thread**: each thread includes 20k free tokens, then $1 per million tokens thereafter.
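The per-thread pricing above reduces to simple arithmetic: the first 20k tokens in a thread are free, and tokens beyond that bill at $1 per million. A small helper to estimate that marginal cost (the $20/month fixed cost and the 100k-token free storage tier are separate and not modeled here):

```python
FREE_THREAD_TOKENS = 20_000   # free allowance per thread
PRICE_PER_MILLION = 1.0       # dollars per million tokens thereafter

def thread_token_cost(tokens_used: int) -> float:
    """Estimated per-thread token cost in dollars."""
    billable = max(0, tokens_used - FREE_THREAD_TOKENS)
    return billable * PRICE_PER_MILLION / 1_000_000
```

For example, a thread that processes 1,020,000 tokens has 1,000,000 billable tokens, costing $1.00.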
## Response Headers

| Header | Description |
|---|---|
| `x-supermemory-conversation-id` | Unique identifier for the conversation thread |
| `x-supermemory-context-modified` | Indicates whether supermemory modified the context (`"true"` or `"false"`) |
| `x-supermemory-tokens-processed` | Number of tokens processed in this request |
| `x-supermemory-chunks-created` | Number of new chunks created from this conversation |
| `x-supermemory-chunks-deleted` | Number of chunks removed (if any) |
| `x-supermemory-docs-deleted` | Number of documents removed (if any) |
If supermemory encounters an error, the response includes an `x-supermemory-error` header with details about what went wrong. Your request is still processed by the underlying LLM provider even when supermemory encounters an error.
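In client code it can be useful to surface these diagnostics after each call. A sketch that extracts the headers from the table above out of a response's header mapping, treating any missing header as absent rather than as an error:

```python
def read_supermemory_headers(headers: dict) -> dict:
    """Extract supermemory diagnostic headers from a response.

    `headers` is any mapping of header name to string value; keys are
    normalized to lowercase since HTTP header names are
    case-insensitive.
    """
    h = {k.lower(): v for k, v in headers.items()}
    return {
        "conversation_id": h.get("x-supermemory-conversation-id"),
        "context_modified": h.get("x-supermemory-context-modified") == "true",
        "tokens_processed": int(h.get("x-supermemory-tokens-processed", 0)),
        "chunks_created": int(h.get("x-supermemory-chunks-created", 0)),
        "chunks_deleted": int(h.get("x-supermemory-chunks-deleted", 0)),
        "docs_deleted": int(h.get("x-supermemory-docs-deleted", 0)),
        # Present only when something went wrong; the request still
        # reaches the underlying LLM provider in that case.
        "error": h.get("x-supermemory-error"),
    }
```

Logging the returned dict per request gives a running view of how much context the proxy is modifying and how many chunks it is creating.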