MirrorNeuron gives you a fast, fault-tolerant runtime for defining and running multi-agent workflows. You describe your workflow as a graph of agents in a manifest.json file, then run it with a single CLI command. The BEAM runtime handles orchestration, message routing, clustering, and persistence — while worker code executes in isolated OpenShell sandboxes.

Quick Start

Install MirrorNeuron and run your first workflow in minutes.

Job Bundles

Learn how to define workflows using manifest.json and payloads.

CLI Reference

Explore all CLI commands for running, monitoring, and managing jobs.

HTTP API

Integrate MirrorNeuron into your tooling with the REST API.

How it works

MirrorNeuron is built around a two-layer model:
  • Orchestration layer — The BEAM runtime supervises long-lived agent processes, routes messages, handles retries, and persists job state to Redis.
  • Execution layer — Worker code (shell scripts, Python, etc.) runs in isolated OpenShell sandboxes. Execution capacity is managed through bounded lease pools, so large fan-out workflows don’t overload your infrastructure.
1. Install MirrorNeuron

Install Elixir, Redis, and OpenShell, then build the CLI binary.
mix deps.get && mix escript.build
2. Define a workflow

Create a job bundle folder with a manifest.json defining your agent graph and a payloads/ directory for worker code.
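As a rough sketch of what a bundle might contain (the field names below are assumptions for illustration, not the documented schema — see the Job Bundles guide for the real format), a manifest wiring a router to an executor and an aggregator could look like:

```json
{
  "name": "research_flow",
  "agents": [
    { "id": "dispatch", "type": "router" },
    { "id": "fetch",    "type": "executor", "payload": "payloads/fetch.sh" },
    { "id": "collect",  "type": "aggregator" }
  ],
  "edges": [
    ["dispatch", "fetch"],
    ["fetch", "collect"]
  ]
}
```

The agent types shown (router, executor, aggregator) are the built-in runtime primitives; worker code referenced by `payload` lives in the bundle's payloads/ directory.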
3. Validate and run

Use the CLI to validate your bundle and submit it for execution.
./mirror_neuron validate examples/research_flow
./mirror_neuron run examples/research_flow
4. Monitor in real time

Open the terminal monitor to watch jobs, agents, and sandbox activity live.
./mirror_neuron monitor

Key features

Runtime Primitives

Four built-in agent types — router, executor, aggregator, sensor — cover most workflow patterns.

Cluster Support

Run distributed workflows across multiple nodes with automatic leader election and job failover.

Executor Pools

Control sandbox concurrency with named executor pools and slot accounting.
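Purely as an illustration of the idea (the `pools` and `slots` keys here are hypothetical placeholders, not the documented configuration), a named pool capping concurrent sandboxes might be declared like:

```json
{
  "pools": {
    "default": { "slots": 8 },
    "heavy_jobs": { "slots": 2 }
  }
}
```

The actual key names and where such configuration lives are covered in the CLI Reference and Job Bundles docs; the point is that each pool bounds how many sandbox leases can be held at once.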

Example Workflows

Ready-to-run examples from simple routing to LLM codegen loops and scale benchmarks.