The mirror_neuron binary is the primary interface for interacting with MirrorNeuron — submitting jobs, inspecting runtime state, managing cluster membership, and monitoring live activity. You build it once from the Elixir project, then use it directly from the shell on any node in your deployment.

Build the CLI binary

Build the binary with mix escript.build. This compiles the project and writes a self-contained mirror_neuron executable to the project root.
1. Install dependencies

Fetch all Elixir dependencies before building.
mix deps.get
2. Build the escript binary

Compile and package the CLI into a single executable.
mix escript.build
On success you will see a mirror_neuron file in your project root.
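mix escript.build reads the :escript configuration from mix.exs, which must name the module whose main/1 function serves as the entry point. A minimal sketch of that configuration — the module name MirrorNeuron.CLI is an assumption; check the project's mix.exs for the actual value:

```elixir
# mix.exs — escript packaging configuration (module name is illustrative).
def project do
  [
    app: :mirror_neuron,
    version: "0.1.0",
    elixir: "~> 1.15",
    deps: deps(),
    # Tells `mix escript.build` which module's main/1 to invoke.
    escript: [main_module: MirrorNeuron.CLI]
  ]
end
```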
3. Verify the binary

Run any command to confirm the binary works.
./mirror_neuron standalone-start
The binary embeds the Elixir application. You do not need to run mix on the target machine — only Erlang/OTP must be installed there.

Command categories

mirror_neuron commands fall into five categories:
| Category | Commands | Purpose |
| --- | --- | --- |
| Server | standalone-start | Start a standalone runtime instance |
| Cluster | cluster start/join/discover/status/nodes/leave/rebalance/elect-leader/health/reload | Manage distributed cluster membership |
| Job lifecycle | validate, run, pause, resume, cancel | Submit and control workflows |
| Inspection | job list/inspect, agent list, node list, events | Read runtime state |
| Operations | monitor, bundle reload/check, node add/remove, send | Observe and intervene at runtime |

Top-level commands

Start an isolated, standalone runtime server instance. Use this for single-node deployments and local development.
./mirror_neuron standalone-start
Start, join, and manage the peer-to-peer cluster. Covers the full membership lifecycle — from bootstrapping a new cluster to gracefully leaving or rebalancing nodes.
./mirror_neuron cluster start --node-id my-node --bind 127.0.0.1:4000
./mirror_neuron cluster status
./mirror_neuron cluster nodes
See the commands reference for all cluster subcommands.
Check a job bundle before running it. Verifies bundle structure, manifest syntax, and node and edge relationships.
./mirror_neuron validate examples/research_flow
Submit a job bundle for execution. Supports interactive, JSON-output, and detached modes, as well as a configurable timeout.
./mirror_neuron run examples/research_flow
./mirror_neuron run examples/research_flow --json
./mirror_neuron run examples/research_flow --no-await
Open the terminal monitor — a live ops view of your cluster nodes, running jobs, sandbox counts, and recent events. Supports JSON output and multi-node cluster connections.
./mirror_neuron monitor
./mirror_neuron monitor --json
List all jobs or inspect a specific job.
./mirror_neuron job list
./mirror_neuron job inspect <job_id>
List agents for a given job.
./mirror_neuron agent list <job_id>
List cluster nodes or add and remove nodes from the runtime.
./mirror_neuron node list
./mirror_neuron node add <node_name>
./mirror_neuron node remove <node_name>
Stream the event log for a job. Useful for debugging message flow, lease events, and sandbox completion or failure.
./mirror_neuron events <job_id>
Reload or check a registered bundle by its ID.
./mirror_neuron bundle reload <bundle_id>
./mirror_neuron bundle check <bundle_id>
Control a running job’s lifecycle from the CLI.
./mirror_neuron pause <job_id>
./mirror_neuron resume <job_id>
./mirror_neuron cancel <job_id>
Inject a message directly into a specific agent in a running job. Useful for manual testing, sensor-style workflows, and operator intervention.
./mirror_neuron send <job_id> <agent_id> '{"type":"manual_result","payload":{"ok":true}}'
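The message argument must be valid JSON. When composing payloads by hand, it can help to pre-check them before injecting; a quick sketch using Python's standard-library json.tool (any JSON validator, such as jq, works equally well):

```shell
# Validate the payload locally before passing it to `send`.
payload='{"type":"manual_result","payload":{"ok":true}}'
echo "$payload" | python3 -m json.tool   # pretty-prints valid JSON, exits non-zero on malformed input
```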

Verbose output

Add -v or --verbose to any command to lower the minimum log level from error to warning, surfacing additional runtime activity in the terminal.
./mirror_neuron -v run examples/research_flow

Next steps

CLI command reference

Full syntax and examples for every mirror_neuron command.

Running workflows

End-to-end guide to submitting and monitoring a job.

Clustering

Set up a multi-node cluster and manage membership.

Monitoring

Use the terminal monitor for live operational visibility.