MirrorNeuron reads its runtime configuration entirely from environment variables — there are no config files to manage. Set the variables below before starting the runtime, or pass them into your shell session. You only need the variables that apply to your deployment; most have sensible defaults for local use.

Required for all deployments

MIRROR_NEURON_REDIS_URL
string
default:"redis://127.0.0.1:6379/0"
The connection URL for your Redis instance. MirrorNeuron uses Redis for durable job state, snapshots, event history, and cluster leader election. All nodes in a cluster must point to the same Redis instance.
export MIRROR_NEURON_REDIS_URL="redis://192.168.4.29:6379/0"
MIRROR_NEURON_COOKIE
string
The Erlang distribution cookie used to authenticate BEAM nodes with each other. Every node in your cluster must use the exact same value. If this value differs between nodes, the cluster will reject connections with an “invalid challenge reply” error.
export MIRROR_NEURON_COOKIE="mirrorneuron"
Keep this value secret. Any node that shares your cookie can join your cluster and observe or control running jobs.
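Because the cookie doubles as a cluster credential, a random value is safer than a memorable word. One way to generate one (assuming openssl is available; any sufficiently random string works):

```shell
# Generate a 64-character hex cookie; every node must use this same value.
export MIRROR_NEURON_COOKIE="$(openssl rand -hex 32)"
echo "${#MIRROR_NEURON_COOKIE}"   # 64
```

Generate the value once and distribute it to all nodes, rather than generating a fresh value per node.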

Executor concurrency

MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY
integer
default:"4"
Sets the capacity of the default executor pool — the maximum number of OpenShell sandboxes that can run simultaneously on a single node. Executor agents that request a lease when the pool is full are queued and released in order as capacity becomes available.
export MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY="8"
Start with a value between 4 and 8 per node. OpenShell sandbox startup is expensive compared to BEAM process scheduling. See Executor pools for sizing guidance.
MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES
string
Defines named executor pools and their individual capacities. Accepts a comma-separated list of name=capacity pairs. When this variable is set, it extends (and can override) the default pool set by MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY. Named pools let different workflow nodes compete for separate capacity budgets.
export MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES="default=4,gpu=1,io=8"
Pool names are arbitrary strings. Reference them from executor node config using the pool field. See Executor pools for details.
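The name=capacity format splits on commas, then on `=`. A small shell sketch (not MirrorNeuron code) showing how such a value decomposes:

```shell
MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES="default=4,gpu=1,io=8"

# Split the comma-separated list, then take name and capacity from each pair.
IFS=','
for pair in $MIRROR_NEURON_EXECUTOR_POOL_CAPACITIES; do
  echo "pool=${pair%%=*} capacity=${pair##*=}"
done
unset IFS
# Prints:
#   pool=default capacity=4
#   pool=gpu capacity=1
#   pool=io capacity=8
```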

API

MIRROR_NEURON_API_PORT
integer
default:"4000"
The port on which the HTTP API server listens. Change this when port 4000 is already in use, or when running multiple nodes on the same machine.
export MIRROR_NEURON_API_PORT="4001"
If port 4000 is taken on startup, MirrorNeuron skips binding the API rather than crashing. Set an explicit port if you need the API to be reliably reachable.
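If you need an explicit port that is reliably free, you can probe before exporting. A bash-specific sketch (it relies on bash's /dev/tcp redirection, and `first_free_port` is a hypothetical helper, not part of MirrorNeuron):

```shell
# Return the first port at or above $1 that nothing is listening on locally.
first_free_port() {
  port=$1
  while (echo > "/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    port=$((port + 1))
  done
  echo "$port"
}

export MIRROR_NEURON_API_PORT="$(first_free_port 4000)"
```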

OpenShell

MIRROR_NEURON_OPENSHELL_BIN
string
The absolute path to the openshell binary. If unset, MirrorNeuron looks for openshell on your PATH; set this when the binary lives somewhere else.
export MIRROR_NEURON_OPENSHELL_BIN="$HOME/.local/bin/openshell"
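The documented lookup order can be mirrored in a wrapper script: prefer the explicit override, otherwise fall back to a PATH search. A sketch (`resolve_bin` is a hypothetical helper, shown with `sh` as a stand-in binary):

```shell
# Prefer MIRROR_NEURON_OPENSHELL_BIN when set; otherwise search PATH.
resolve_bin() {
  if [ -n "${MIRROR_NEURON_OPENSHELL_BIN:-}" ]; then
    echo "$MIRROR_NEURON_OPENSHELL_BIN"
  else
    command -v "$1"
  fi
}

unset MIRROR_NEURON_OPENSHELL_BIN
resolve_bin sh                                          # PATH lookup
export MIRROR_NEURON_OPENSHELL_BIN="$HOME/.local/bin/openshell"
resolve_bin openshell                                   # explicit override wins
```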

Clustering

MIRROR_NEURON_CLUSTER_NODES
string
A comma-separated list of Erlang node addresses that this node should connect to on startup. MirrorNeuron uses libcluster with the EPMD strategy to discover and join peers. Leave this unset for single-node operation.
export MIRROR_NEURON_CLUSTER_NODES="mn1@192.168.4.29,mn2@192.168.4.35"
Each address must use the name@host format where name is the Erlang node name and host is a resolvable hostname or IP address. All nodes in the cluster must list each other.
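A quick preflight check (illustrative, not part of MirrorNeuron) that every entry matches the name@host format before the node starts:

```shell
MIRROR_NEURON_CLUSTER_NODES="mn1@192.168.4.29,mn2@192.168.4.35"

# Fail fast if any entry is missing the '@' separator.
IFS=','
for node in $MIRROR_NEURON_CLUSTER_NODES; do
  case $node in
    *@*) echo "ok: $node" ;;
    *)   echo "bad entry (expected name@host): $node" >&2; exit 1 ;;
  esac
done
unset IFS
```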
ERL_AFLAGS
string
Erlang VM flags passed at startup. Use this to pin the BEAM distribution port to a fixed range, which makes firewall rules and cluster failure analysis easier in development and production.
export ERL_AFLAGS="-kernel inet_dist_listen_min 4370 inet_dist_listen_max 4370"
MIRROR_NEURON_DIST_PORT
integer
Companion to ERL_AFLAGS. Set this to the fixed distribution port so that cluster scripts and tooling know which port to target.
export MIRROR_NEURON_DIST_PORT="4370"
Using a fixed distribution port (for example, 4370) is strongly recommended for any multi-node deployment. Random dynamic ports make it hard to configure firewalls and diagnose split-brain failures.
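Since the two settings must agree, a small sanity check (illustrative) can catch drift between ERL_AFLAGS and MIRROR_NEURON_DIST_PORT:

```shell
ERL_AFLAGS="-kernel inet_dist_listen_min 4370 inet_dist_listen_max 4370"
MIRROR_NEURON_DIST_PORT="4370"

# Extract the pinned min/max ports from the VM flags.
min=$(echo "$ERL_AFLAGS" | sed -n 's/.*inet_dist_listen_min \([0-9]*\).*/\1/p')
max=$(echo "$ERL_AFLAGS" | sed -n 's/.*inet_dist_listen_max \([0-9]*\).*/\1/p')

if [ "$min" = "$max" ] && [ "$min" = "$MIRROR_NEURON_DIST_PORT" ]; then
  echo "distribution port pinned to $min"
else
  echo "warning: ERL_AFLAGS and MIRROR_NEURON_DIST_PORT disagree" >&2
fi
```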

LLM examples

GEMINI_API_KEY
string
Your Google Gemini API key. Required only when running LLM-based example workflows. MirrorNeuron passes this into sandbox environments so worker code can call Gemini APIs.
export GEMINI_API_KEY="your-api-key-here"

Minimal local setup

The following exports are enough to run MirrorNeuron locally against a Redis container:
export MIRROR_NEURON_REDIS_URL="redis://127.0.0.1:6379/0"
export MIRROR_NEURON_EXECUTOR_MAX_CONCURRENCY="4"
export MIRROR_NEURON_COOKIE="mirrorneuron"
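Assuming Docker, one way to start the Redis container these settings point at (the image tag is an example):

```shell
docker run -d --name mirrorneuron-redis -p 6379:6379 redis:7
```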

Cluster setup example

For a two-node cluster, set these variables identically on each box and update the IP addresses:
export MIRROR_NEURON_COOKIE="mirrorneuron"
export MIRROR_NEURON_CLUSTER_NODES="mn1@192.168.4.29,mn2@192.168.4.35"
export MIRROR_NEURON_REDIS_URL="redis://192.168.4.29:6379/0"
export ERL_AFLAGS="-kernel inet_dist_listen_min 4370 inet_dist_listen_max 4370"
export MIRROR_NEURON_DIST_PORT="4370"
See the Clustering guide for the full two-node setup walkthrough.