MirrorNeuron separates lightweight BEAM agent processes from expensive OpenShell sandbox executions. You can hold thousands of logical agent workers in memory at low cost, but each sandbox run consumes real OS resources. Executor pools give you explicit control over how many sandboxes run at the same time on each node, so large fan-out workflows don’t overwhelm your OpenShell gateway or your hosts.

How executor pools work

Every runtime node starts a LeaseManager process that owns a set of named pools. Each pool has a fixed capacity — the maximum number of sandbox slots it can grant simultaneously. When an executor agent is ready to run worker code, it requests a lease from the pool for a given number of slots. If capacity is available, the lease is granted immediately. If not, the request is queued in FIFO order and granted as soon as prior executions complete and slots are released. This means:
  • Agent processes are never blocked waiting for sandbox capacity — they queue internally without consuming OS threads.
  • Sandbox pressure is capped at exactly the capacity you configure.
  • Lease events are observable in the runtime event stream.
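The grant-or-queue behavior can be sketched as a small Python model. This is a simplified illustration, not the actual LeaseManager implementation; the `LeasePool` class and its method names are invented for this example:

```python
from collections import deque

class LeasePool:
    """Simplified model of one named pool: fixed capacity, FIFO wait queue."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0
        self.queue = deque()  # pending (requester, slots) requests, FIFO

    def request(self, requester, slots=1):
        # Grant immediately if enough free slots remain, otherwise queue.
        if self.in_use + slots <= self.capacity:
            self.in_use += slots
            return "granted"
        self.queue.append((requester, slots))
        return "queued"

    def release(self, slots=1):
        # Free slots, then grant queued requests in arrival order.
        self.in_use -= slots
        granted = []
        while self.queue and self.in_use + self.queue[0][1] <= self.capacity:
            requester, n = self.queue.popleft()
            self.in_use += n
            granted.append(requester)
        return granted

pool = LeasePool(capacity=2)
print(pool.request("a"))  # granted
print(pool.request("b"))  # granted
print(pool.request("c"))  # queued (capacity exhausted)
print(pool.release())     # ['c'] (c is granted as soon as a slot frees)
```

Note that a queued requester never consumes a slot while waiting, which mirrors why thousands of agents can wait cheaply while only `capacity` sandboxes run.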
Pools are enforced per node, not across the cluster. To scale total concurrent execution, add more nodes rather than increasing per-node capacity beyond what your hardware can support.

The default pool

If you do not configure named pools, MirrorNeuron creates a single pool named default with a capacity of 4 slots. All executor nodes that do not specify a pool draw from this default. Set the default pool capacity with MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY:
export MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY="8"
This is the simplest configuration and works well for most single-node deployments.

Named pools

Named pools let you partition execution capacity by workload type. For example, you might want to limit GPU-intensive jobs to one concurrent sandbox while allowing many lightweight I/O jobs to run in parallel. Configure named pools with MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES using a comma-separated list of name=capacity pairs:
export MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY="4"
export MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES="default=4,gpu=1,io=8"
Setting MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES creates each pool listed, or overrides its capacity if the pool already exists. The default pool is always present; include default=N in the list to change its capacity. Each pool name in the variable is the name you reference in your manifests.
If you reference a pool name in a manifest that does not exist on the node, the executor will fail to acquire a lease at runtime. Make sure every pool name used in your manifests is declared in MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES on every node.
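To catch that mismatch before deploy rather than at lease time, you could validate your manifests against the variable's value. This is an illustrative helper, not part of MirrorNeuron; `parse_pool_capacities` is an invented name, and the variable format (comma-separated name=capacity pairs) is the one documented above:

```python
def parse_pool_capacities(value):
    """Parse a 'name=capacity,name=capacity' string into a dict."""
    pools = {}
    for pair in value.split(","):
        name, capacity = pair.split("=")
        pools[name.strip()] = int(capacity)
    return pools

caps = parse_pool_capacities("default=4,gpu=1,io=8")
print(caps)  # {'default': 4, 'gpu': 1, 'io': 8}

# Fail fast if any manifest references a pool the node does not declare.
manifest_pools = {"gpu", "io"}
missing = manifest_pools - caps.keys()
assert not missing, f"undeclared pools: {missing}"
```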

Referencing a pool in a manifest

In your workflow manifest, each executor node can declare which pool to draw from and how many slots it needs per execution. Use the pool and pool_slots fields inside the node’s config object:
{
  "agent_type": "executor",
  "config": {
    "pool": "default",
    "pool_slots": 1
  }
}
  • pool — The name of the executor pool to acquire a lease from. Defaults to "default" if omitted.
  • pool_slots — The number of slots to consume from the pool for each sandbox run. Must be a positive integer and cannot exceed the pool’s total capacity.
An executor that needs exclusive access to a single-slot pool — for example, a node that runs a GPU workload — would look like this:
{
  "agent_type": "executor",
  "config": {
    "pool": "gpu",
    "pool_slots": 1
  }
}
Keep pool_slots at 1 for most executor nodes. Use a higher value only when a single job genuinely requires multiple concurrent sandbox slots, for example a batch executor that fans out within its own execution.

Slot accounting

The lease manager tracks capacity, in_use, available, and queued for each pool. You can inspect live pool stats through the monitor or the HTTP API. The accounting rules are:
  • A lease acquires pool_slots slots from the named pool.
  • The slots are held until the executor releases the lease — either on completion, error, or process exit.
  • If the owning executor process crashes, the lease manager detects the process exit and releases the slots automatically.
  • Queued requests are served in the order they arrived.
This means you never leak slots due to agent crashes or unclean shutdowns.
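The relationship between the four reported fields can be stated directly: available is always capacity minus in_use, and queued counts waiting requests without consuming slots. A toy check, using an invented `pool_stats` helper rather than the real monitor API:

```python
def pool_stats(capacity, leases, queued_requests):
    """Derive the stats reported for one pool.

    leases: slot counts of currently held leases
    queued_requests: slot counts of requests waiting in FIFO order
    """
    in_use = sum(leases)
    return {
        "capacity": capacity,
        "in_use": in_use,
        "available": capacity - in_use,
        "queued": len(queued_requests),
    }

# A gpu pool of capacity 1: one lease held, two executors waiting.
print(pool_stats(1, leases=[1], queued_requests=[1, 1]))
# {'capacity': 1, 'in_use': 1, 'available': 0, 'queued': 2}
```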

Scaling guidance

Because pools are per-node, the total cluster-wide execution capacity is:
total concurrent sandboxes = pool_capacity × number_of_nodes
Use the following as a starting point for sizing decisions:
Light workloads are short-lived shell scripts or Python functions with low memory and CPU requirements.
# Per node
export MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY="8"
A 4-node cluster gives you 32 concurrent sandboxes, which handles large fan-out graphs without overloading the OpenShell gateway.
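As a quick check of the formula against the light-workload numbers above:

```python
pool_capacity = 8  # MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY per node
nodes = 4
print(pool_capacity * nodes)  # 32 concurrent sandboxes cluster-wide
```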

Reference sizing example

From the runtime architecture:
Logical workers    Nodes    Slots per node    Concurrent sandboxes
1,000              4        8                 32
1,000 agents can be in-flight across the cluster while only 32 sandboxes run at any point. The remaining agents queue cheaply inside the BEAM runtime until capacity opens up.

Environment variable summary

Variable                                   Default    Purpose
MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY     4          Default pool capacity per node
MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES     (unset)    Named pool capacities, comma-separated name=capacity pairs
See the full environment variable reference for all runtime configuration options.