POST /run
{
  "script": "print(7 + 7)"
}
→
{
  "execution_id": "2f1f8a50-0b70-4af0-b9ba-b9d4b9ef2d56",
  "plan": "Starter",
  "used": 1,
  "quota": 3000
}
Disposable-exec gives AI products a hosted execution layer for short-lived Python jobs. It is not a general-purpose cloud compute platform; it is built for safer execution of untrusted or AI-generated code.
Run generated code in an isolated Docker sandbox instead of on your main application host or a customer's machine.
Use an API instead of building and operating your own queue, worker, auth, quota, and sandbox layer.
Designed for agents, automations, and LLM-powered tools that need fast remote execution for short-lived tasks.
The model is simple: submit code, process it through a queue, run it in a restricted worker sandbox, and return the result.
POST a Python script to /run with an API key.
The worker pulls the job from Redis and runs it inside a restricted Docker environment.
Poll /status/{execution_id} until the job completes, then fetch stdout, stderr, exit code, and duration from /result/{execution_id}.
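The worker's per-job step reduces to running the script and capturing the fields that /result reports. A simplified sketch follows, using a plain subprocess where the real service uses a restricted Docker container; the function name and timeout are illustrative assumptions.

```python
# Simplified sketch of the per-job execution step. The real worker runs the
# script inside a restricted Docker container; a plain subprocess stands in
# here, and run_script is an illustrative name, not the service's own code.
import subprocess
import sys
import time

def run_script(script: str, timeout: float = 5.0) -> dict:
    """Run a short Python script and capture the fields that
    /result/{execution_id} reports: stdout, stderr, exit code, duration."""
    start = time.monotonic()
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", script],  # -I: isolated mode
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        stdout, stderr, exit_code = proc.stdout, proc.stderr, proc.returncode
    except subprocess.TimeoutExpired:
        stdout, stderr, exit_code = "", "timed out", -1
    return {
        "stdout": stdout,
        "stderr": stderr,
        "exit_code": exit_code,
        "duration": time.monotonic() - start,
    }

print(run_script("print(7 + 7)")["stdout"].strip())  # → 14
```

Running the interpreter with `-I` drops user site-packages and environment overrides; the timeout keeps a runaway script from holding the worker.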
Teams that need a remote tool for short Python execution without exposing their main systems.
Builders who want to execute generated logic without maintaining a full sandbox stack.
Products that need a hosted runtime layer with API keys, quotas, and billing hooks already underway.
Disposable-exec is currently a launch-stage MVP with a working execution core. The current focus is security hardening, finishing billing, deployment preparation, and documentation cleanup.
The current pricing model is designed to keep higher tiers discounted without turning the product into a low-margin bulk execution channel.
See the full breakdown on the pricing page.