# How executor pools work
Every runtime node starts a `LeaseManager` process that owns a set of named pools. Each pool has a fixed capacity — the maximum number of sandbox slots it can grant simultaneously. When an executor agent is ready to run worker code, it requests a lease from the pool for a given number of slots. If capacity is available, the lease is granted immediately. If not, the request is queued in FIFO order and granted as soon as prior executions complete and slots are released.
This means:
- Agent processes are never blocked waiting for sandbox capacity — they queue internally without consuming OS threads.
- Sandbox pressure is capped at exactly the capacity you configure.
- Lease events are observable in the runtime event stream.
Pools are enforced per node, not across the cluster. To scale total concurrent execution, add more nodes rather than increasing per-node capacity beyond what your hardware can support.
## The default pool
If you do not configure named pools, MirrorNeuron creates a single pool named `default` with a capacity of 4 slots. All executor nodes that do not specify a pool draw from this default.
Set the default pool capacity with `MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY`:
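For example, to raise the default pool on a node from 4 to 8 slots (the value 8 is illustrative — size it to your hardware):

```shell
# Raise this node's default pool capacity from 4 to 8 slots
export MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY=8
```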
## Named pools
Named pools let you partition execution capacity by workload type. For example, you might want to limit GPU-intensive jobs to one concurrent sandbox while allowing many lightweight I/O jobs to run in parallel. Configure named pools with `MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES` using a comma-separated list of `name=capacity` pairs:
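A sketch of the GPU-versus-I/O partition described above (the pool names `gpu` and `io` are illustrative, not required names):

```shell
# One concurrent GPU sandbox, sixteen lightweight I/O sandboxes per node
export MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES="gpu=1,io=16"
```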
When `MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES` is set, it creates or overrides the named pools listed. The `default` pool is still present unless you explicitly set its capacity to a different value here. Any pool name you include in the variable is the name you reference in your manifest.
## Referencing a pool in a manifest
In your workflow manifest, each executor node can declare which pool to draw from and how many slots it needs per execution. Use the `pool` and `pool_slots` fields inside the node's `config` object:
- `pool` — The name of the executor pool to acquire a lease from. Defaults to `"default"` if omitted.
- `pool_slots` — The number of slots to consume from the pool for each sandbox run. Must be a positive integer and cannot exceed the pool's total capacity.
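A sketch of how these fields might look in a node entry. Only `pool` and `pool_slots` inside `config` are documented above; the surrounding keys (`nodes`, `name`) and the YAML layout are assumptions about the manifest format:

```yaml
# Hypothetical manifest fragment — only config.pool and
# config.pool_slots are documented fields.
nodes:
  - name: train_model
    config:
      pool: gpu        # lease from the "gpu" pool instead of "default"
      pool_slots: 1    # consume one slot per sandbox run
```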
## Slot accounting
The lease manager tracks `capacity`, `in_use`, `available`, and `queued` for each pool. You can inspect live pool stats through the monitor or the HTTP API.
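A plausible shape for one pool's stats — the four field names come from the text above, but the exact response format of the HTTP API is not specified here:

```json
{
  "pool": "default",
  "capacity": 4,
  "in_use": 3,
  "available": 1,
  "queued": 2
}
```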
The accounting rules are:
- A lease acquires `pool_slots` slots from the named pool.
- The slots are held until the executor releases the lease — either on completion, error, or process exit.
- If the owning executor process crashes, the lease manager detects the process exit and releases the slots automatically.
- Queued requests are served in the order they arrived.
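The accounting rules above can be sketched as a toy model — this is not the real `LeaseManager`, just a minimal Python illustration of FIFO grant-and-release semantics:

```python
from collections import deque

class PoolModel:
    """Toy model of the slot-accounting rules (not the real LeaseManager)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0
        self.queue = deque()  # FIFO queue of pending (owner, slots) requests

    @property
    def available(self):
        return self.capacity - self.in_use

    def request(self, owner, slots):
        """Grant immediately if possible, otherwise queue in arrival order."""
        if slots > self.capacity:
            raise ValueError("pool_slots cannot exceed the pool's capacity")
        if slots <= self.available and not self.queue:
            self.in_use += slots           # granted immediately
            return True
        self.queue.append((owner, slots))  # queued, FIFO
        return False

    def release(self, slots):
        """Release a lease, then serve queued requests in arrival order."""
        self.in_use -= slots
        while self.queue and self.queue[0][1] <= self.available:
            _, s = self.queue.popleft()
            self.in_use += s

pool = PoolModel(capacity=4)
pool.request("a", 3)             # granted: in_use=3, available=1
granted = pool.request("b", 2)   # queued: needs 2, only 1 available
pool.release(3)                  # "a" finishes; "b" is granted from the queue
print(pool.in_use, pool.available, len(pool.queue))  # 2 2 0
```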
## Scaling guidance
Because pools are per-node, the total cluster-wide execution capacity is the sum of every node's pool capacities (for a homogeneous cluster, the number of nodes multiplied by the per-node capacity). The right sizing depends on whether your workloads are light, heavy, or mixed.

Light workloads are short-lived shell scripts or Python functions with low memory and CPU requirements. A 4-node cluster with 8 slots per node gives you 32 concurrent sandboxes, which handles large fan-out graphs without overloading the OpenShell gateway.
## Reference sizing example
From the runtime architecture:

| Logical workers | Nodes | Slots per node | Concurrent sandboxes |
|---|---|---|---|
| 1,000 | 4 | 8 | 32 |
## Environment variable summary
| Variable | Default | Purpose |
|---|---|---|
| `MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY` | 4 | Default pool capacity per node |
| `MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES` | — | Named pool capacities, comma-separated `name=capacity` pairs |