## What MirrorNeuron does
MirrorNeuron is built for event-driven, message-oriented workflows where logical agents collaborate and only the heavy execution path leaves BEAM. It is not a general-purpose batch scheduler. Instead, it gives you a small, composable set of primitives and templates that you wire together through manifest-driven graph bundles. Key capabilities:

- Orchestrate multi-agent workflows using a minimal set of built-in primitives
- Scale execution capacity through executor leases and pools, not by spawning one sandbox per worker
- Persist job state, agent snapshots, and event history in Redis
- Run workflows on a single machine or across a BEAM cluster with `libcluster` and `Horde`
- Monitor and control running jobs from the terminal with `mirror_neuron monitor`
## Two-layer architecture
MirrorNeuron keeps a strict boundary between two concerns:

- **BEAM layer** — handles orchestration, supervision, message routing, clustering, and persistence. Logical workers are cheap BEAM processes that hold workflow state. They live inside the runtime and are supervised by OTP.
- **OpenShell layer** — handles isolated execution for executor nodes. When a workflow step needs to run untrusted code or a shell command, the executor acquires a lease on an OpenShell sandbox. Sandboxes are reused per job per runtime node, which keeps the cost of execution bounded.
This split is the reason MirrorNeuron scales better than runtimes that launch one sandbox for every worker immediately.
## Runtime primitives
The built-in primitive set is intentionally small:

| Primitive | Role |
|---|---|
| `router` | Directs messages between agents according to manifest-defined edges |
| `executor` | Acquires an execution lease and runs payloads inside an OpenShell sandbox |
| `aggregator` | Collects and merges results from multiple upstream agents |
| `sensor` | Listens for external events and injects them into the workflow |
## Agent templates
Each node in a workflow manifest selects a behavioral template through the `type` field. The available templates are:

- `generic` — default, general-purpose agent behavior
- `stream` — processes a continuous stream of messages
- `map` — applies a transformation to each input message independently
- `reduce` — accumulates messages and emits a single output
- `batch` — collects messages into batches before processing
- `accumulator` — builds up state across messages over time
## Workflow bundles
Workflows are defined as graph bundles on disk:

- `manifest.json` defines nodes, edges, entrypoints, and policies. The `agent_type` field selects the runtime primitive; the `type` field selects the behavioral template.
- `payloads/` contains the code and files that executor nodes need at runtime.
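As a rough sketch, a small `manifest.json` might combine the two fields like this. Only `manifest.json`, `agent_type`, `type`, and the primitive/template names are taken from the description above; the node ids, the edge object shape, and the `name` and `entrypoint` keys are illustrative assumptions, not the confirmed schema:

```json
{
  "name": "example_flow",
  "entrypoint": "collect",
  "nodes": [
    { "id": "collect",   "agent_type": "sensor",     "type": "stream" },
    { "id": "transform", "agent_type": "executor",   "type": "map" },
    { "id": "merge",     "agent_type": "aggregator", "type": "reduce" }
  ],
  "edges": [
    { "from": "collect",   "to": "transform" },
    { "from": "transform", "to": "merge" }
  ]
}
```

The point to notice is the split between the two fields on each node: `agent_type` picks which runtime primitive hosts the node, while `type` picks how that primitive treats its message stream.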
MirrorNeuron validates your manifest before running it. Use `./mirror_neuron validate <job-folder>` to catch structural errors before you commit to a full run.

## What's included
MirrorNeuron ships with runnable example bundles to help you explore the runtime:

- `research_flow` — a simple multi-step research workflow, ideal for getting started
- `openshell_worker_demo` — shell and Python execution with a bundle-scoped policy file
- `prime_sweep_scale` — large fan-out scale testing
- `streaming_peak_demo` — streaming telemetry and anomaly detection
- `llm_codegen_review` — LLM code generation and review loops
- `mpe_simple_push_visualization` — shared PettingZoo MPE crowd visualization
- `ecosystem_simulation` — large-scale ecosystem simulation
## Where to go next

- **Install MirrorNeuron** — Set up Elixir, Redis, OpenShell, and build the CLI binary on your machine.
- **Quickstart** — Validate and run your first workflow in a few commands.
- **CLI reference** — Full reference for every `mirror_neuron` command and flag.
- **API reference** — Public inspection and control APIs for monitoring and external integrations.