Sandboxed Previews, Object Storage, and Queue-Driven Agents

Why low-code builders lean on ephemeral runtimes, S3-shaped assets, and worker pools instead of giant API monoliths.

  • sandbox
  • queues
  • workers
  • postgres
  • nextjs

I have worked on surfaces where users describe software in natural language and expect a preview or a deployed result. Those products all seem to converge on a few pieces: somewhere safe to run code you did not write, somewhere durable to put the artifacts, and a layer that can fan out work when one user action implies many steps.

I am grateful to teammates who pushed for sandboxed execution early. Running user-influenced bundles inside the same process as your API is not mainly a scaling problem; it is a trust problem. Disposable Linux sandboxes (from providers you choose) gave us timeouts, CPU limits, filesystem snapshots when something broke, and the ability to throw away the whole environment after a preview. The API surface became start sandbox, stream logs, tear down, rather than eval-ing untrusted code beside your session store.
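The start/stream/tear-down lifecycle can be sketched as a small interface. This is a hypothetical shape, not any provider's real SDK; `Sandbox`, `FakeSandbox`, and `runPreview` are all illustrative names, with an in-memory stand-in so the lifecycle can be exercised without a provider.

```typescript
// Hypothetical sandbox client surface; real providers differ in the details.
interface Sandbox {
  id: string;
  exec(cmd: string, opts?: { timeoutMs?: number }): Promise<{ exitCode: number; logs: string[] }>;
  teardown(): Promise<void>;
}

// In-memory stand-in so the lifecycle can be run without any provider.
class FakeSandbox implements Sandbox {
  id = "sb-" + Math.random().toString(36).slice(2, 8);
  private alive = true;
  async exec(cmd: string): Promise<{ exitCode: number; logs: string[] }> {
    if (!this.alive) throw new Error("sandbox torn down");
    return { exitCode: 0, logs: [`$ ${cmd}`, "ok"] };
  }
  async teardown(): Promise<void> {
    this.alive = false;
  }
}

// The whole preview fits in a try/finally: nothing survives the teardown.
async function runPreview(sandbox: Sandbox, bundleCmd: string): Promise<string[]> {
  try {
    const { exitCode, logs } = await sandbox.exec(bundleCmd, { timeoutMs: 30_000 });
    if (exitCode !== 0) throw new Error("preview failed");
    return logs;
  } finally {
    await sandbox.teardown(); // disposable: the environment never outlives the request
  }
}
```

The point of the try/finally is that teardown is unconditional: a crashed preview and a successful one both leave nothing behind.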

Keeping blobs out of the app server

Generated file trees belonged in S3-compatible object storage; we kept only pointers and metadata in Postgres. An ORM like Prisma made the “project → version → artifact” relationship manageable as the product iterated.

CREATE TABLE deploy_artifact (
  id            uuid PRIMARY KEY,
  project_id    uuid NOT NULL REFERENCES project(id),
  storage_key   text NOT NULL,
  runtime_hint  text,
  created_at    timestamptz NOT NULL DEFAULT now()
);

Workers fetched artifacts by storage_key; builders wrote the key after upload. We stopped streaming large tarballs through the web tier once we learned how quickly that hurt latency and memory.
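The write path is small enough to sketch. The object store and artifact table below are in-memory stand-ins (a real setup would use an S3-compatible client and the deploy_artifact table above), and the helper names are ours, not any SDK's.

```typescript
import { createHash, randomUUID } from "node:crypto";

// Stand-ins for the S3-compatible bucket and the Postgres pointer table.
const objectStore = new Map<string, Buffer>();
const deployArtifacts: { id: string; projectId: string; storageKey: string }[] = [];

async function uploadArtifact(projectId: string, tarball: Buffer): Promise<string> {
  // Key by project and content hash so re-uploads of identical trees dedupe.
  const hash = createHash("sha256").update(tarball).digest("hex").slice(0, 16);
  const storageKey = `artifacts/${projectId}/${hash}.tar.gz`;
  objectStore.set(storageKey, tarball); // blob goes to object storage...
  deployArtifacts.push({ id: randomUUID(), projectId, storageKey }); // ...pointer goes to Postgres
  return storageKey;
}

// Workers resolve the pointer back to the blob; the web tier never streams it.
function fetchArtifact(storageKey: string): Buffer {
  const blob = objectStore.get(storageKey);
  if (!blob) throw new Error(`missing artifact: ${storageKey}`);
  return blob;
}
```

The invariant worth keeping is the direction of the dependency: the row references the blob by key, never the other way around, so workers need only the key to do their job.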

Queues and retries

When a single action meant lint, test, bundle, deploy, notify, we kept the HTTP handler thin: enqueue work, return a job id, let the client poll or subscribe to status. BullMQ-style workers let us scale by adding workers instead of enlarging one giant process.
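A minimal in-memory sketch shows the enqueue/worker split; a real setup would use BullMQ backed by Redis, but the shape is the same. All names here (`enqueue`, `drain`, the `Job` type) are illustrative, not BullMQ's API.

```typescript
type JobStatus = "queued" | "done" | "failed";
type Job = { id: string; type: string; payload: unknown; status: JobStatus };

const jobs = new Map<string, Job>();
const queue: Job[] = [];

// The HTTP handler's whole responsibility: record the work, return an id to poll.
function enqueue(type: string, payload: unknown): string {
  const job: Job = { id: `job-${jobs.size + 1}`, type, payload, status: "queued" };
  jobs.set(job.id, job);
  queue.push(job);
  return job.id;
}

// A worker loop drains the queue; scaling means running more of these loops.
async function drain(handlers: Record<string, (p: unknown) => Promise<void>>): Promise<void> {
  let job: Job | undefined;
  while ((job = queue.shift())) {
    try {
      await handlers[job.type](job.payload);
      job.status = "done";
    } catch {
      job.status = "failed";
    }
  }
}
```

The client-facing contract is just the job id plus a status field, which is what makes the handler cheap: it never waits for lint, bundle, or deploy to finish.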

I learned to set retry policy per job type. A flaky sandbox API might deserve a couple of retries; a syntax error in generated code should fail fast and surface clearly to the user—not spin for an hour and burn GPU or quota.
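One way to express that distinction is a per-type attempt budget plus an error class that short-circuits retries. This is a sketch under assumed names (`FatalJobError`, `runWithRetries`, the job-type keys), not a queue library's built-in feature.

```typescript
// Deterministic failures (e.g. a syntax error in generated code) should not retry.
class FatalJobError extends Error {}

// Attempt budget per job type: transient-prone work gets a few tries.
const retryPolicy: Record<string, number> = {
  "deploy-to-sandbox": 3, // flaky provider API: worth a couple of retries
  "lint-generated-code": 1, // deterministic: retrying cannot help
};

async function runWithRetries(type: string, work: () => Promise<void>): Promise<number> {
  const attempts = retryPolicy[type] ?? 1;
  for (let attempt = 1; ; attempt++) {
    try {
      await work();
      return attempt; // how many tries it took
    } catch (err) {
      if (err instanceof FatalJobError || attempt >= attempts) throw err;
    }
  }
}
```

The useful property is that "is this retryable?" is decided by the thrower, which knows the failure mode, while "how many times?" is decided per job type, which knows the cost.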

Long-lived deploys

When the product needed something to stay up beyond a preview window, we leaned on a managed hosting path separate from the ephemeral sandbox. The details vary by provider; what stayed constant was separating “build artifact” from “runtime serving traffic,” and keeping secrets out of generated repos.
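One small, provider-agnostic illustration of "secrets out of generated repos": only public, build-time values are written into the repo, and secrets are merged in only when the runtime starts. The function names are ours; `NEXT_PUBLIC_` is Next.js's convention for values safe to expose to the client.

```typescript
// Only non-secret, build-time values are ever written into the generated repo.
function generatedEnvFile(publicVars: Record<string, string>): string {
  return Object.entries(publicVars)
    .map(([k, v]) => `NEXT_PUBLIC_${k}=${v}`)
    .join("\n");
}

// Secrets join the environment only at serve time, injected by the host.
function runtimeEnv(
  secrets: Record<string, string>,
  publicVars: Record<string, string>
): Record<string, string> {
  return { ...publicVars, ...secrets };
}
```

The build artifact is reproducible from the repo alone; the running service is the artifact plus host-injected secrets, which is exactly the separation that let the deploy path vary by provider.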

I share this structure not because every project needs every piece, but because it was the shape that kept our AI builder feeling responsive while the heavy work stayed bounded and observable. If you are designing something similar, I hope the split helps you reason about where to put each concern.