Alpha — now accepting design partners

Cloud sandbox for AI agents.
Fork in 100 ms. Persistent REPL.

Firecracker microVMs with copy-on-write snapshot forking. Your agents get a fresh Linux box in 7 ms, branch it N ways for tree-search, and keep Python state across run_code calls.

from podflare import Sandbox

sbx = Sandbox(template="python-datasci")
print(sbx.run_code("print(sum(range(100)))").stdout)  # 4950

# Fork the running sandbox for agent tree-search
children = sbx.fork(n=5)          # 101 ms for 5 copies
children[0].run_code("x = 1")     # isolated per branch

pip install podflare · also available on npm as podflare and podflare-mcp for Claude Desktop / Cursor / Cline.

101 ms

fork(n)

Diff-snapshot a running sandbox into 5 copies. CoW memory plus a reflinked rootfs means branches diverge for free.
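The fork semantics can be sketched in plain Python (a simulation, not the podflare API): copy-on-write means each child starts from the parent's exact state, and mutations in one branch never leak into another.

```python
import copy

# Simulated sandbox state: what a copy-on-write fork conceptually clones.
parent_state = {"vars": {"x": 0}, "files": {"/tmp/log": "boot\n"}}

# "Fork" five children: each starts from an identical snapshot of the parent.
children = [copy.deepcopy(parent_state) for _ in range(5)]

# Branches diverge independently: mutating one child touches nothing else.
children[0]["vars"]["x"] = 1
children[1]["files"]["/tmp/log"] += "branch-1\n"
```

In the real system the copy is lazy (pages are shared until written), which is why 5 forks land in ~100 ms rather than 5x the cost of a full clone.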

0 ms reimport

Persistent REPL

Variables, imports, and open files survive across run_code calls. The python-datasci template preloads pandas, numpy, scipy, and matplotlib.
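A minimal model of what "persistent REPL" means, in plain Python (a sketch, not the podflare implementation): every run_code call executes against one shared namespace, so imports and variables defined in an earlier call are visible in later ones.

```python
# One namespace shared across calls, so state survives between snippets.
ns = {}

def run_code(src: str) -> None:
    """Toy stand-in for a persistent REPL: exec into the shared namespace."""
    exec(src, ns)

run_code("import math")                  # import once...
run_code("r = 2.0")                      # ...define a variable...
run_code("area = math.pi * r ** 2")      # ...both still exist here
```

The payoff is 0 ms reimport: heavy libraries load once, and every subsequent call reuses them.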

4 adapters

Every framework

OpenAI Agents SDK, Vercel AI SDK, Anthropic code_execution, MCP. Drop-in replacement for any tool-use pattern.
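Most of these frameworks converge on the same shape: a JSON-schema tool definition plus a dispatch function. A generic wiring sketch (the schema and dispatcher below are illustrative, not podflare's adapter code):

```python
# JSON-schema tool definition, the shape OpenAI-style tool use and MCP expect.
RUN_CODE_TOOL = {
    "name": "run_code",
    "description": "Execute Python in a persistent sandbox REPL.",
    "input_schema": {
        "type": "object",
        "properties": {"code": {"type": "string"}},
        "required": ["code"],
    },
}

def dispatch(tool_name: str, args: dict, sandbox) -> str:
    """Route a framework's tool call into a sandbox and return stdout."""
    if tool_name == "run_code":
        return sandbox.run_code(args["code"]).stdout
    raise ValueError(f"unknown tool: {tool_name}")
```

Because the sandbox only needs to expose run_code, swapping it into an existing tool-use loop is a one-function change.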

Why agents need this

Tree search without state hell

Classic sandboxes give you one shell per agent. Branching a long-running computation means re-running setup from scratch. Forking lets an agent explore N hypotheses from a shared ancestor state — each branch gets the parent's memory, files, and environment.
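The tree-search pattern this enables, sketched with dicts standing in for forked sandboxes (a simulation under that assumption, not the podflare API): branch N hypotheses from one ancestor, score each branch, keep the winner.

```python
import copy

# Ancestor state built up by expensive setup; never re-run per branch.
ancestor = {"best_guess": 0, "history": []}

def explore(state: dict, hypothesis: int) -> int:
    """Mutate one branch's private copy; score is closeness to target 7."""
    state["best_guess"] = hypothesis
    state["history"].append(hypothesis)
    return -abs(hypothesis - 7)

# Five branches from one shared ancestor — fork(n=5) in the real system.
branches = [copy.deepcopy(ancestor) for _ in range(5)]
scores = [explore(b, h) for h, b in zip([2, 5, 7, 9, 11], branches)]
winner = branches[scores.index(max(scores))]
```

Losing branches are simply discarded; the ancestor is untouched and can be forked again for the next search step.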

Secure by construction

Every sandbox is a Firecracker microVM with its own KVM boundary. No egress by default — your agents can't exfiltrate data or call arbitrary APIs without opt-in. See the egress model.
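Opting in might look like the following config sketch. The egress parameter and allowlist shape here are hypothetical, shown only to illustrate the default-deny model — see the egress model docs for the real interface.

```python
from podflare import Sandbox

# Hypothetical opt-in: `egress` below is illustrative, not a documented
# podflare argument. With it omitted, the default remains deny-all.
sbx = Sandbox(
    template="python-datasci",
    egress=["api.openai.com"],  # hypothetical per-host allowlist
)
```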

Five minutes from sign-up to first sandbox.

Mint a key, pip install podflare, and you're running.