Once MirrorNeuron is installed and your environment variables are set, you can have a workflow running in under five minutes. This guide uses the research_flow example bundle that ships with MirrorNeuron — it is a simple multi-step research workflow that exercises the core runtime without requiring any external services beyond Redis.
Complete the installation guide before following these steps. You need the mirror_neuron binary built and Redis running.
1. Validate the workflow

Before running a workflow, validate its manifest to catch any structural errors:
./mirror_neuron validate examples/research_flow
The validate command:
  • loads the job bundle folder
  • validates manifest.json
  • checks node, edge, and entrypoint structure
A clean result means the manifest is well-formed and the runtime can proceed to execute it.
2. Run the workflow

Start the workflow:
./mirror_neuron run examples/research_flow
You will see:
  • a CLI banner identifying the job
  • a live progress view as agents move through the graph
  • a final run summary when the workflow completes
If you need machine-readable output — for scripts, CI pipelines, or external integrations — add the --json flag:
./mirror_neuron run examples/research_flow --json
JSON mode emits structured output to stdout instead of the interactive progress view.
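If you want to post-process that output in a script, a JSON-aware language is easier than shell text munging. The sketch below is illustrative only: the field names (`job_id`, `status`) and the sample record are assumptions, not the documented output schema — inspect your runtime's actual `--json` output and adjust before relying on it.

```python
import json

def summarize_run(raw: str) -> str:
    """Pull a one-line summary out of a JSON run record.

    NOTE: the field names below are hypothetical; check the real
    `mirror_neuron run --json` output and adapt them.
    """
    record = json.loads(raw)
    return f"{record['job_id']}: {record['status']}"

# Illustrative input only -- not real mirror_neuron output.
sample = '{"job_id": "research_flow-01", "status": "completed"}'
print(summarize_run(sample))
```

The same pattern works for CI gates: parse the record, then fail the pipeline when the status field is not the value you expect.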
3. Inspect the runtime

List the nodes visible to this MirrorNeuron process:
./mirror_neuron node list
On a single machine this usually shows one node. In a cluster, all connected nodes appear here.

To inspect the jobs the runtime knows about:
./mirror_neuron job list
Add --live to filter for currently running jobs:
./mirror_neuron job list --live
To inspect a specific job by its ID:
./mirror_neuron job inspect <job_id>
To view the event history for a job:
./mirror_neuron events <job_id>
4. Open the terminal monitor

The terminal monitor gives you a live dashboard of the entire runtime:
./mirror_neuron monitor
From inside the monitor you can:
  • see all running and recent jobs
  • see cluster nodes and their status
  • open a specific job
  • inspect individual agents, sandboxes, and recent events in real time
Run ./mirror_neuron monitor --json to get a machine-readable snapshot of the monitor state — useful for operational scripts or building external dashboards.

What just happened

When you ran research_flow, MirrorNeuron:
  1. Loaded the job bundle and validated the manifest graph
  2. Started long-lived BEAM processes (logical workers) for each agent node
  3. Routed messages between agents according to the manifest edges
  4. Persisted job state and events to Redis throughout execution
  5. Printed a summary when all agents reached terminal states
The executor nodes in the graph acquired leases on OpenShell sandboxes for any isolated execution steps, then released those leases when done — keeping execution capacity bounded.
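The lease pattern above can be modeled with a counting semaphore: an executor must hold a lease before touching a sandbox and returns it when done, so concurrent sandbox use never exceeds the pool size. This is a minimal sketch of the concept, not MirrorNeuron's actual implementation — the class and its names are invented for illustration.

```python
import threading
from contextlib import contextmanager

class SandboxLeasePool:
    """Toy model of bounded sandbox leases (not the real runtime)."""

    def __init__(self, capacity: int):
        self._slots = threading.Semaphore(capacity)  # bounds concurrency
        self._lock = threading.Lock()
        self.in_use = 0

    @contextmanager
    def lease(self):
        self._slots.acquire()            # block until a sandbox is free
        with self._lock:
            self.in_use += 1
        try:
            yield                        # caller runs its isolated step here
        finally:
            with self._lock:
                self.in_use -= 1
            self._slots.release()        # hand the slot back to the pool

pool = SandboxLeasePool(capacity=2)
with pool.lease():
    print("executing inside a leased sandbox; in use:", pool.in_use)
```

Because the lease is released in a `finally` block, a crashing step still returns its slot, which is what keeps execution capacity bounded over time.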

Explore more examples

MirrorNeuron ships with additional example bundles that demonstrate more runtime capabilities:
./mirror_neuron validate examples/openshell_worker_demo
./mirror_neuron run examples/openshell_worker_demo --json
The OpenShell demo uses shell and Python execution with a bundle-scoped policy file and an aggregator sink. The LLM example runs three rounds of code generation, review, and regeneration followed by a validator — it requires a GEMINI_API_KEY in your environment.
The LLM example calls Gemini 2.5 Flash Lite and will consume API quota. Make sure GEMINI_API_KEY is set before running it.
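A preflight check avoids starting a quota-consuming run with a missing key. Here is a minimal sketch of such a check; the helper name is invented, and the placeholder value exists only so the demo line runs — in practice you would export your real key instead.

```python
import os
import sys

def require_env(name: str) -> str:
    """Exit with a clear message if a required variable is unset or empty."""
    value = os.environ.get(name, "").strip()
    if not value:
        sys.exit(f"{name} is not set; export it before running the LLM example")
    return value

# Demo only: seed a placeholder so this snippet runs anywhere.
# Do not hard-code real keys in scripts.
os.environ.setdefault("GEMINI_API_KEY", "placeholder-for-demo")
print("key present:", bool(require_env("GEMINI_API_KEY")))
```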

Next steps

CLI guide

Full reference for every mirror_neuron command, flag, and output format.

Examples guide

Walk through each bundled example and learn what runtime features it demonstrates.

Monitor guide

Learn how to navigate the terminal monitor and use it for operational visibility.

Execution model

Understand the two-layer model, message routing, and how execution leases work.