This guide is about the Vercel AI SDK (the TypeScript agent framework), not Vercel hosting. The choice of pattern depends on where your code actually runs:
- Self-hosted Node (your own VM, ECS, Fly.io, Railway, a Vercel Sandbox, etc.): you can mount SMFS as a real filesystem on the server.
- Vercel Functions / serverless / edge: there’s no long-lived process to hold a FUSE mount, so use the Bash Tool (@supermemory/bash) instead. The container becomes the filesystem; no mount needed.
How it works
Self-hosted Node (real mount)
The agent runs as a separate process with direct access to the SMFS mount. Best when you want full bash, read, and write capabilities and your server is long-lived.
Vercel Functions / serverless (Bash Tool)
The agent runs inside generateText and accesses memory through @supermemory/bash, which proxies bash commands to your Supermemory container over HTTP. No mount, no FUSE, no long-lived process required.
Prerequisites
- A Supermemory API key
- An Anthropic API key
- For Pattern A only: SMFS installed on your server (curl -fsSL https://smfs.ai/install | bash)
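Both keys are typically supplied via environment variables, and it is cheaper to fail at startup than mid-request. A minimal sketch of such a check; the variable names (SUPERMEMORY_API_KEY, ANTHROPIC_API_KEY) are assumptions, not confirmed by this guide:

```typescript
// Hedged sketch: the exact env var names are assumptions, not from this guide.
export function missingKeys(
  env: Record<string, string | undefined> = process.env
): string[] {
  return ["SUPERMEMORY_API_KEY", "ANTHROPIC_API_KEY"].filter((k) => !env[k]);
}

// At startup, surface missing configuration immediately:
const missing = missingKeys();
if (missing.length > 0) {
  console.error(`Missing required env vars: ${missing.join(", ")}`);
}
```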
Pattern A: Claude Agent SDK on self-hosted Node
Use this when the Vercel AI SDK is just the orchestrator and your real workload is a Claude agent running on a long-lived server you control. Start the mount once when your server boots, not per-request.
This won’t work on Vercel Functions or any serverless runtime: there’s no process between requests to hold the mount, and FUSE isn’t available. For those targets, jump to Pattern B.
agent.py
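As a rough TypeScript sketch of the boot-time mount step: the `smfs mount` subcommand, the /mnt/smfs mount point, and the SMFS_MOUNT_ON_BOOT guard are all assumptions for illustration (this guide only documents the installer), so check `smfs --help` on your server before relying on them.

```typescript
import { execFileSync } from "node:child_process";

// Hedged sketch: `smfs mount <dir>` and its flags are assumptions; this guide
// only shows the install command. Verify against `smfs --help` on your server.
export function smfsMountArgs(mountPoint: string, ephemeral = false): string[] {
  const args = ["mount", mountPoint];
  if (ephemeral) args.push("--ephemeral"); // skip the local cache (see Tips)
  return args;
}

// Run once at server boot, never per-request. Guarded so merely importing
// this module does not attempt a mount.
if (process.env.SMFS_MOUNT_ON_BOOT === "1") {
  execFileSync("smfs", smfsMountArgs("/mnt/smfs", true), { stdio: "inherit" });
}
```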
Pattern B: Vercel AI SDK + Bash Tool (serverless-friendly)
@supermemory/bash exposes your Supermemory container as a single agent tool
— run_bash(command) — without mounting anything. It runs anywhere TypeScript
runs, including Vercel Functions, edge runtimes, and Lambda.
api/agent.ts
- maxSteps: 10 lets the agent chain multiple bash calls per request (read profile.md, then cat a few notes, then write a summary). Bump it if your agent needs deeper chains; lower it to cap cost per request.
- toolDescription is a pre-written description of the available bash surface (semantic grep, cat, ls, redirects, etc.). Hand it straight to the model; don’t roll your own.
- No timeout/abort plumbing: bash.exec already runs against the container over HTTPS, so it returns when the command returns. No event-loop blocking and no FUSE.
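If you do want a deadline on long-running commands, one can be bolted on around any exec-style call with Promise.race. The exec signature below is an assumption, and note that the underlying container call is not actually cancelled, only abandoned:

```typescript
// Hedged sketch: wrap an exec-style call with a manual deadline, since the
// bash tool ships no timeout/abort plumbing of its own.
export async function execWithTimeout(
  exec: (command: string) => Promise<string>,
  command: string,
  ms: number
): Promise<string> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`bash command timed out after ${ms}ms`)),
      ms
    );
  });
  try {
    // Whichever settles first wins; the container keeps running the command.
    return await Promise.race([exec(command), deadline]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```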
Tips
- Pattern A: mount SMFS once when your server starts, not per-request. Use --ephemeral if you don’t need a local cache on the server.
- Pattern B: configure memory paths once at startup with configureMemoryPaths(["/notes/", "/journal.md"]) to control which files get distilled into Supermemory memories.
- Both: use smfs grep 'query' (Pattern A) or sgrep 'query' inside the bash tool (Pattern B) for semantic search across all files.