TypeScript SDK
Install
```bash
npm install @frumu/tandem-client
```

Requires Node.js 18+ (uses native fetch and ReadableStream).
For recurring jobs and scheduled automations, see Scheduling Workflows And Automations.
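Because the client relies on Node 18 features, a startup guard can fail fast on older runtimes. This is a sketch, not part of the SDK; `meetsNodeRequirement` is a hypothetical helper:

```typescript
// Hypothetical guard (not an SDK export): reject Node.js versions
// older than the 18.x line the client requires.
function meetsNodeRequirement(version: string, minMajor = 18): boolean {
  const major = Number(version.split(".")[0]);
  return Number.isInteger(major) && major >= minMajor;
}

// Check the current process before constructing the client.
if (!meetsNodeRequirement(process.versions.node)) {
  throw new Error(
    `@frumu/tandem-client needs Node.js 18+, found ${process.versions.node}`
  );
}
```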
Engine prerequisite
The SDK talks to a running tandem-engine over HTTP/SSE. Install and start the engine first:
```bash
npm install -g @frumu/tandem
tandem-engine serve --api-token "$(tandem-engine token generate)"
```

Then pass the same token into `new TandemClient({ baseUrl, token })`.
Quick start
```ts
import { TandemClient } from "@frumu/tandem-client";

const client = new TandemClient({
  baseUrl: "http://localhost:39731",
  token: "your-engine-token", // tandem-engine token generate
});

// 1. Create a session
const sessionId = await client.sessions.create({
  title: "My agent",
  directory: "/path/to/project",
});

// 2. Start an async run
const { runId } = await client.sessions.promptAsync(
  sessionId,
  "Summarize the README and list the top 3 TODOs"
);

// 3. Stream the response
for await (const event of client.stream(sessionId, runId)) {
  if (event.type === "session.response") {
    process.stdout.write(String(event.properties.delta ?? ""));
  }
  if (
    event.type === "run.complete" ||
    event.type === "run.completed" ||
    event.type === "run.failed" ||
    event.type === "session.run.finished"
  ) {
    break;
  }
}
```

TandemClient
```
new TandemClient({ baseUrl, token, timeoutMs? })
```

Top-level methods
| Method | Returns | Description |
|---|---|---|
| `health()` | `SystemHealth` | Check engine readiness |
| `stream(sessionId, runId?)` | `AsyncGenerator<EngineEvent>` | Stream events from an active run |
| `globalStream()` | `AsyncGenerator<EngineEvent>` | Stream all engine events |
| `runEvents(runId, { sinceSeq?, tail? })` | `EngineEvent[]` | Pull stored run events |
| `listToolIds()` | `string[]` | List all tool IDs |
| `listTools()` | `ToolSchema[]` | List tools with full schemas |
| `executeTool(tool, args?)` | `ToolExecuteResult` | Execute a tool directly |
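The `stream(...)` generator pairs naturally with a small collector when you just want the final reply text. This sketch assumes only the event fields the quick start uses (`type` and `properties.delta`); the real `EngineEvent` type carries more:

```typescript
// Minimal event shape for this sketch (assumption: a structural
// subset of the SDK's EngineEvent).
type StreamEvent = { type: string; properties?: { delta?: unknown } };

// Terminal event names, as listed in the quick start above.
const TERMINAL = new Set([
  "run.complete",
  "run.completed",
  "run.failed",
  "session.run.finished",
]);

// Drain an event stream into the assistant's reply text,
// stopping at the first terminal event.
async function collectReply(events: AsyncIterable<StreamEvent>): Promise<string> {
  let text = "";
  for await (const event of events) {
    if (event.type === "session.response") {
      text += String(event.properties?.delta ?? "");
    }
    if (TERMINAL.has(event.type)) break;
  }
  return text;
}
```

Usage: `const reply = await collectReply(client.stream(sessionId, runId));`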
client.sessions
| Method | Description |
|---|---|
| `create({ title?, directory?, provider?, model? })` | Create a session, returns `sessionId` |
| `list({ q?, page?, pageSize?, archived?, scope?, workspace? })` | List sessions |
| `get(sessionId)` | Get session details |
| `update(sessionId, { title?, archived? })` | Update title or archive status |
| `archive(sessionId)` | Archive a session |
| `delete(sessionId)` | Permanently delete a session |
| `messages(sessionId)` | Get full message history |
| `todos(sessionId)` | Get pending TODOs |
| `activeRun(sessionId)` | Get the currently active run |
| `promptAsync(sessionId, prompt)` | Start async run → `{ runId }` |
| `promptSync(sessionId, prompt)` | Blocking prompt → reply text |
| `abort(sessionId)` | Abort the active run |
| `cancel(sessionId)` | Cancel the active run |
| `cancelRun(sessionId, runId)` | Cancel a specific run |
| `fork(sessionId)` | Fork into a child session |
| `diff(sessionId)` | Get workspace diff from the last run |
| `revert(sessionId)` | Revert uncommitted changes |
| `unrevert(sessionId)` | Undo a revert |
| `children(sessionId)` | List forked child sessions |
| `summarize(sessionId)` | Trigger conversation summarization |
| `attach(sessionId, targetWorkspace)` | Re-attach to a different workspace |
Prompt with file parts
Use the raw engine route when you need a mixed-parts payload:
```ts
const res = await fetch(
  `/session/${encodeURIComponent(sessionId)}/prompt_async?return=run`,
  {
    method: "POST",
    headers: {
      "content-type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      parts: [
        {
          type: "file",
          mime: "text/markdown",
          filename: "audit.md",
          url: "/srv/tandem/channel_uploads/control-panel/audit.md",
        },
        { type: "text", text: "Summarize this file." },
      ],
    }),
  }
);
const run = await res.json();
```

File part shape:
- `type`: `"file"`
- `mime`: MIME type string
- `filename`: optional display filename
- `url`: HTTP URL, local path, or `file://...`
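A tiny builder can keep hand-rolled parts payloads consistent with that shape. `filePart` is a hypothetical helper, not an SDK export:

```typescript
// File part shape described above.
type FilePart = {
  type: "file";
  mime: string;
  filename?: string;
  url: string;
};

// Hypothetical helper: validates the MIME string and omits
// `filename` when it is not provided.
function filePart(url: string, mime: string, filename?: string): FilePart {
  if (!/^[\w.+-]+\/[\w.+-]+$/.test(mime)) {
    throw new Error(`invalid MIME type: ${mime}`);
  }
  const part: FilePart = { type: "file", mime, url };
  if (filename !== undefined) part.filename = filename;
  return part;
}
```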
client.permissions
```ts
const { requests } = await client.permissions.list();
for (const req of requests) {
  await client.permissions.reply(req.id, "always"); // "allow" | "always" | "deny" | "once"
}
```

client.questions
```ts
const { questions } = await client.questions.list();
for (const q of questions) {
  await client.questions.reply(q.id, "yes");
  // or: await client.questions.reject(q.id);
}
```

client.providers
```ts
const catalog = await client.providers.catalog();
await client.providers.setDefaults("openrouter", "anthropic/claude-3.7-sonnet");
await client.providers.setApiKey("openrouter", "sk-or-...");
const status = await client.providers.authStatus();
```

client.identity
```ts
const identity = await client.identity.get();

await client.identity.patch({
  identity: {
    bot: { canonical_name: "Ops Assistant" },
    personality: {
      default: {
        preset: "concise",
        custom_instructions: "Prioritize deployment safety and rollback clarity.",
      },
    },
  },
});
```

Built-in presets include: `balanced`, `concise`, `friendly`, `mentor`, `critical`.
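If preset names arrive from config or user input, a narrow type guard over the documented presets avoids patching a typo into the identity. This is a sketch; whether the server rejects unknown presets is not specified here:

```typescript
// Preset names documented above.
const IDENTITY_PRESETS = [
  "balanced",
  "concise",
  "friendly",
  "mentor",
  "critical",
] as const;
type IdentityPreset = (typeof IDENTITY_PRESETS)[number];

// Type guard: narrows an arbitrary string to a known preset.
function isIdentityPreset(value: string): value is IdentityPreset {
  return (IDENTITY_PRESETS as readonly string[]).includes(value);
}
```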
client.channels
```ts
await client.channels.put("discord", {
  bot_token: "bot:xxx",
  guild_id: "1234567890",
  security_profile: "public_demo",
});
const status = await client.channels.status();
const config = await client.channels.config();
const prefs = await client.channels.toolPreferences("discord");
await client.channels.setToolPreferences("discord", {
  disabled_tools: ["webfetch_html"],
});
const verification = await client.channels.verify("discord");

console.log(status.discord.connected);
console.log(config.discord.securityProfile);
console.log(prefs.enabled_tools);
console.log(verification.ok);
```

client.mcp
```ts
await client.mcp.add({ name: "arcade", transport: "https://mcp.arcade.ai/mcp" });
await client.mcp.connect("arcade");
const tools = await client.mcp.listTools();
const resources = await client.mcp.listResources();
await client.mcp.setEnabled("arcade", false);
```

client.memory
```ts
// Store (global record; SDK `text` maps to server `content`)
await client.memory.put({
  text: "The team uses Rust for all backend services.",
  run_id: "run-abc",
});

// Search
const { results } = await client.memory.search({
  query: "backend technology choices",
  limit: 5,
});

// List, promote, demote, delete
const { items } = await client.memory.list({ q: "architecture", userId: "user-123" });
await client.memory.promote({ id: items[0].id! });
await client.memory.demote({ id: items[0].id!, runId: "run-abc" });
await client.memory.delete(items[0].id!);

// Audit
const log = await client.memory.audit({ run_id: "run-abc" });
```

Context Memory (L0/L1/L2 layers)
```ts
// Resolve a URI to a memory node
const { node } = await client.memory.contextResolveUri("tandem://user/user123/memories");

// Get a tree of memory nodes
const { tree } = await client.memory.contextTree("tandem://resources/myproject", { maxDepth: 3 });

// Generate L0/L1 layers for a node
await client.memory.contextGenerateLayers("node-id-123");

// Distill a session conversation into memories
const result = await client.memory.contextDistill("session-abc", [
  "User: I prefer Python over Rust",
  "Assistant: Got it, I'll use Python for this task",
]);
```

Additional namespaces
The TypeScript SDK also exposes the newer engine surfaces used across the Tandem repo:
- `client.browser` for `status()`, `install()`, and `smokeTest()` host flows
- `client.workflows` for workflow registry, runs, hooks, simulation, and live events
- `client.resources` for key-value resources
- `client.skills` for validation, routing, evals, compile, and generate flows in addition to list/get/import
- `client.packs` and `client.capabilities` for pack lifecycle and capability resolution
- `client.automationsV2`, `client.bugMonitor`, `client.coder`, `client.agentTeams`, `client.missions`, and `client.optimizations` for newer orchestration APIs
```ts
const browser = await client.browser.status();
const workflows = await client.workflows.list();
const resources = await client.resources.list({ prefix: "agent-config/" });
const catalog = await client.skills.catalog();
```

For actual browser automation, use the standard engine tool execution path with tools like `browser_open`, `browser_click`, `browser_type`, `browser_extract`, and `browser_screenshot`, or run a session with those tools in the allowlist. The `client.browser` namespace is intentionally limited to diagnostics and install flows.
client.coder
The coder namespace now includes project-scoped GitHub Projects intake helpers in addition to the run APIs.
```ts
const binding = await client.coder.getProjectBinding("repo-123");

await client.coder.putProjectBinding("repo-123", {
  github_project_binding: {
    owner: "acme-inc",
    project_number: 7,
    repo_slug: "acme-inc/tandem",
  },
});

const inbox = await client.coder.getProjectGithubInbox("repo-123");

const intake = await client.coder.intakeProjectItem("repo-123", {
  project_item_id: inbox.items[0].project_item_id,
  source_client: "sdk_test",
});
```

Use this flow when you want Tandem to:
- treat GitHub Projects as intake plus visibility
- create Tandem-native coder runs from issue-backed TODO items
- keep Tandem as the execution authority after intake
- inspect schema drift through `schema_drift` / `live_schema_fingerprint`
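The intake calls above compose into a bulk sweep over the inbox. The interface below is a structural subset of `client.coder` (an assumption made for testability), and the `"bulk_intake"` source label is hypothetical:

```typescript
// Structural subset of client.coder used by this sketch.
type InboxItem = { project_item_id: string };

interface CoderIntake {
  getProjectGithubInbox(projectId: string): Promise<{ items: InboxItem[] }>;
  intakeProjectItem(
    projectId: string,
    req: { project_item_id: string; source_client: string }
  ): Promise<unknown>;
}

// Create a Tandem-native coder run for every pending inbox item,
// returning how many were taken in.
async function intakeAll(coder: CoderIntake, projectId: string): Promise<number> {
  const inbox = await coder.getProjectGithubInbox(projectId);
  for (const item of inbox.items) {
    await coder.intakeProjectItem(projectId, {
      project_item_id: item.project_item_id,
      source_client: "bulk_intake", // hypothetical label
    });
  }
  return inbox.items.length;
}
```

Usage: `await intakeAll(client.coder, "repo-123");`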
client.skills
```ts
const { skills } = await client.skills.list();
const skill = await client.skills.get("security-auditor");
const templates = await client.skills.templates();

await client.skills.import({
  location: "workspace",
  content: yamlString,
  conflict_policy: "overwrite",
});
```

client.resources
```ts
await client.resources.write({
  key: "agent-config/alert-threshold",
  value: { threshold: 0.95 },
});
const { items } = await client.resources.list({ prefix: "agent-config/" });
await client.resources.delete("agent-config/alert-threshold");
```

client.routines
```ts
await client.routines.create({
  name: "Daily digest",
  schedule: "0 8 * * *",
  entrypoint: "Summarize today's activity and write to daily-digest.md",
  requires_approval: false,
});

const runs = await client.routines.listRuns({ limit: 10 });
const runId = runs.runs[0].id as string;
await client.routines.approveRun(runId);
await client.routines.pauseRun(runId);
await client.routines.resumeRun(runId);
```

client.automationsV2
Use V2 for persistent multi-agent DAG flows with per-agent model selection.
```ts
await client.automationsV2.create({
  name: "Daily Marketing Engine",
  status: "active",
  schedule: {
    type: "interval",
    interval_seconds: 86400,
    timezone: "UTC",
    misfire_policy: "run_once",
  },
  agents: [
    {
      agent_id: "research",
      display_name: "Research",
      model_policy: {
        default_model: {
          provider_id: "openrouter",
          model_id: "openai/gpt-4o-mini",
        },
      },
      tool_policy: { allowlist: ["read", "websearch"], denylist: [] },
      mcp_policy: { allowed_servers: ["composio"] },
    },
    {
      agent_id: "writer",
      display_name: "Writer",
      model_policy: {
        default_model: {
          provider_id: "openrouter",
          model_id: "anthropic/claude-3.5-sonnet",
        },
      },
      tool_policy: { allowlist: ["read", "write", "edit"], denylist: [] },
      mcp_policy: { allowed_servers: [] },
    },
  ],
  flow: {
    nodes: [
      {
        node_id: "market-scan",
        agent_id: "research",
        objective: "Find trends and audience signals.",
      },
      {
        node_id: "draft-copy",
        agent_id: "writer",
        objective: "Draft campaign copy and CTA variants.",
        depends_on: ["market-scan"],
      },
    ],
  },
});
```
```ts
const runs = await client.automationsV2.listRuns("automation-v2-id", 20);
await client.automationsV2.pauseRun(runs.runs[0].run_id!);
await client.automationsV2.resumeRun(runs.runs[0].run_id!);
```

client.automations (Legacy Compatibility Path)
Use this for existing installs that still rely on the older mission + policy automation shape. For new automation work, prefer client.automationsV2.
```ts
await client.automations.create({
  name: "Weekly security scan",
  schedule: "0 9 * * 1",
  mission: {
    objective: "Audit the API surface for vulnerabilities",
    success_criteria: ["Report written to reports/security.md"],
  },
  policy: {
    tool: { external_integrations_allowed: false },
    approval: { requires_approval: true },
  },
});

// runId comes from an earlier run listing or creation response
const run = await client.automations.getRun(runId);
await client.automations.approveRun(runId, "LGTM");
```

client.workflowPlans
Use workflow plans when you want the engine planner to draft an automation, iterate on it in chat, then apply it.
```ts
const started = await client.workflowPlans.chatStart({
  prompt: "Create a release checklist automation",
  planSource: "chat",
});

const updated = await client.workflowPlans.chatMessage({
  planId: started.plan.plan_id!,
  message: "Add a smoke-test step before rollout.",
});

await client.workflowPlans.apply({
  planId: updated.plan.plan_id,
  creatorId: "operator-1",
});
```

client.agentTeams
```ts
const templates = await client.agentTeams.listTemplates();
const instances = await client.agentTeams.listInstances({ status: "active" });

const result = await client.agentTeams.spawn({
  missionID: "mission-123",
  role: "builder",
  justification: "Implementing feature X",
});

const { spawnApprovals } = await client.agentTeams.listApprovals();
await client.agentTeams.approveSpawn(spawnApprovals[0].approvalID!);

await client.agentTeams.createTemplate({
  template: {
    templateID: "marketing-writer",
    role: "worker",
    system_prompt: "Write concise conversion-focused copy.",
  },
});
await client.agentTeams.updateTemplate("marketing-writer", {
  system_prompt: "Write concise copy with product-proof points.",
});
await client.agentTeams.deleteTemplate("marketing-writer");
```

client.missions
```ts
const { mission } = await client.missions.create({
  title: "Q1 Security Hardening",
  goal: "Audit and fix all critical security issues",
  work_items: [{ title: "Audit auth middleware", assigned_agent: "security-auditor" }],
});

const full = await client.missions.get(mission!.id!);
await client.missions.applyEvent(mission!.id!, {
  type: "work_item.completed",
  work_item_id: "...",
});
```

client.optimizations
Use optimizations to create and manage AutoResearch workflow optimization campaigns. Campaigns generate candidate workflow prompts, evaluate them against baseline runs, and apply approved winners back to the live workflow.
```ts
// List all optimization campaigns
const { optimizations, count } = await client.optimizations.list();

// Create a new optimization campaign
const { optimization } = await client.optimizations.create({
  name: "Improve research quality",
  source_workflow_id: "workflow-abc123",
  artifacts: {
    objective_ref: "objective.yaml",
    eval_ref: "eval.yaml",
    mutation_policy_ref: "mutation_policy.yaml",
    scope_ref: "scope.yaml",
    budget_ref: "budget.yaml",
  },
});

// Get campaign details with experiment count
const details = await client.optimizations.get(optimization.optimization_id!);

// Trigger actions on a campaign (e.g., queue baseline replay, generate candidates)
await client.optimizations.action(optimization.optimization_id!, {
  action: "queue_replay",
  run_id: "run-xyz",
});

// List experiments for a campaign
const { experiments } = await client.optimizations.listExperiments(optimization.optimization_id!);

// Get a specific experiment
const experiment = await client.optimizations.getExperiment(
  optimization.optimization_id!,
  experiments[0].experiment_id!
);

// Apply an approved winner back to the live workflow
const { automation } = await client.optimizations.applyWinner(
  optimization.optimization_id!,
  experiments[0].experiment_id!
);
```

Available campaign actions via `action()`:
- `queue_replay` — Queue a baseline replay run to re-establish metrics
- `generate_candidate` — Generate the next bounded candidate for evaluation
- `approve` / `reject` — Mark an experiment as approved or rejected
- `apply` — Apply an approved winner to the live workflow
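When the action name comes from user input or a queue, a guard over the documented action names can reject unknown strings before the request is sent. This is a sketch over the list above; the engine may accept additional actions:

```typescript
// Campaign actions documented for optimizations.action().
const CAMPAIGN_ACTIONS = [
  "queue_replay",
  "generate_candidate",
  "approve",
  "reject",
  "apply",
] as const;
type CampaignAction = (typeof CAMPAIGN_ACTIONS)[number];

// Type guard: narrows an arbitrary string to a known campaign action.
function isCampaignAction(value: string): value is CampaignAction {
  return (CAMPAIGN_ACTIONS as readonly string[]).includes(value);
}
```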