MirrorNeuron gives you a production-ready runtime for coordinating many agents at once, without sacrificing isolation or predictability. It runs on the BEAM, which means you get fault-tolerant, message-driven orchestration out of the box, while heavy execution stays contained in sandboxed processes outside the runtime kernel.

What MirrorNeuron does

MirrorNeuron is built for event-driven, message-oriented workflows where logical agents collaborate and only the heavy execution path leaves the BEAM. It is not a general-purpose batch scheduler. Instead, it gives you a small, composable set of primitives and templates that you wire together through manifest-driven graph bundles. Key capabilities:
  • Orchestrate multi-agent workflows using a minimal set of built-in primitives
  • Scale execution capacity through executor leases and pools, not by spawning one sandbox per worker
  • Persist job state, agent snapshots, and event history in Redis
  • Run workflows on a single machine or across a BEAM cluster with libcluster and Horde
  • Monitor and control running jobs from the terminal with mirror_neuron monitor

Two-layer architecture

MirrorNeuron keeps a strict boundary between two concerns:
  • BEAM layer — handles orchestration, supervision, message routing, clustering, and persistence. Logical workers are cheap BEAM processes that hold workflow state; they live inside the runtime and are supervised by OTP.
  • OpenShell layer — handles isolated execution for executor nodes. When a workflow step needs to run untrusted code or a shell command, the executor acquires a lease on an OpenShell sandbox. Sandboxes are reused per job per runtime node, which keeps the cost of execution bounded.
This split is the reason MirrorNeuron scales better than runtimes that immediately launch one sandbox for every worker.
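The executor's interaction with the sandbox pool can be sketched in pseudocode. All names here, such as lease_pool and acquire, are hypothetical; they illustrate the lease lifecycle described above, not the runtime's actual API:

```
# A workflow step that needs isolated execution:
lease = lease_pool.acquire(job_id, runtime_node)  # reuses this job's sandbox on this runtime node if one exists
result = lease.sandbox.run(step_payload)          # heavy work runs outside the BEAM kernel
lease.release()                                   # sandbox stays warm for later steps of the same job
```

Because the lease is scoped per job per runtime node, a workflow with many logical workers still touches only a bounded number of sandboxes.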

Runtime primitives

The built-in primitive set is intentionally small:
  • router — Directs messages between agents according to manifest-defined edges
  • executor — Acquires an execution lease and runs payloads inside an OpenShell sandbox
  • aggregator — Collects and merges results from multiple upstream agents
  • sensor — Listens for external events and injects them into the workflow
Domain-specific agent logic belongs in job bundles or user extensions, not in the runtime kernel.

Agent templates

Each node in a workflow manifest selects a behavioral template through the type field. The available templates are:
  • generic — default, general-purpose agent behavior
  • stream — processes a continuous stream of messages
  • map — applies a transformation to each input message independently
  • reduce — accumulates messages and emits a single output
  • batch — collects messages into batches before processing
  • accumulator — builds up state across messages over time
Templates give you reusable workflow behaviors without putting business logic into the runtime.
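For example, a node that fans in results from several upstream agents might pair the aggregator primitive with the reduce template. The node shape below, including the id field, is an assumption for illustration; only the field names agent_type and type, and the primitive and template names, come from this page:

```json
{
  "id": "merge_results",
  "agent_type": "aggregator",
  "type": "reduce"
}
```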

Workflow bundles

Workflows are defined as graph bundles on disk:
job-folder/
  manifest.json
  payloads/
manifest.json defines nodes, edges, entrypoints, and policies. The agent_type field selects the runtime primitive; the type field selects the behavioral template. payloads/ contains the code and files that executor nodes need at runtime.
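As a sketch of how these pieces fit together, a minimal manifest might look like the following. The exact JSON shape is an assumption; the top-level concepts (nodes, edges, entrypoints, policies) and the agent_type and type fields come from this page:

```json
{
  "entrypoints": ["ingest"],
  "nodes": [
    { "id": "ingest",  "agent_type": "sensor",     "type": "stream" },
    { "id": "work",    "agent_type": "executor",   "type": "map" },
    { "id": "collect", "agent_type": "aggregator", "type": "reduce" }
  ],
  "edges": [
    { "from": "ingest", "to": "work" },
    { "from": "work",   "to": "collect" }
  ],
  "policies": {}
}
```

Here a sensor injects external events, an executor transforms each message in a sandbox, and an aggregator merges the results.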
MirrorNeuron validates your manifest before running it. Use ./mirror_neuron validate <job-folder> to catch structural errors before you commit to a full run.

What’s included

MirrorNeuron ships with runnable example bundles to help you explore the runtime:
  • research_flow — a simple multi-step research workflow, ideal for getting started
  • openshell_worker_demo — shell and Python execution with a bundle-scoped policy file
  • prime_sweep_scale — large fan-out scale testing
  • streaming_peak_demo — streaming telemetry and anomaly detection
  • llm_codegen_review — LLM code generation and review loops
  • mpe_simple_push_visualization — shared PettingZoo MPE crowd visualization
  • ecosystem_simulation — large-scale ecosystem simulation

Where to go next

  • Install MirrorNeuron — set up Elixir, Redis, OpenShell, and build the CLI binary on your machine.
  • Quickstart — validate and run your first workflow in a few commands.
  • CLI reference — full reference for every mirror_neuron command and flag.
  • API reference — public inspection and control APIs for monitoring and external integrations.