When you want to run a workflow in MirrorNeuron, you package it as a job bundle — a directory that contains a declarative graph definition and any code or data files that executor nodes need to run. The bundle is the complete, self-contained description of your workflow. You can validate it locally, version it alongside your application code, and hand it to the runtime through the CLI or the API without any additional setup.

Bundle structure

A valid job bundle is a directory with two top-level entries:
my_job_bundle/
├── manifest.json
└── payloads/
    ├── worker_script.py
    └── data.json
  • manifest.json — the declarative workflow definition. It specifies the agent nodes, the message-routing edges between them, the entrypoints, initial inputs, and job-level policies.
  • payloads/ — a directory containing all scripts, data files, and static assets that executor nodes reference. The runtime resolves source paths in config.uploads relative to this directory.
The payloads/ directory must exist even if your workflow has no executor nodes. You can leave it empty with a .gitkeep file.
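The layout above can be scaffolded in a few lines of Python. This is only a convenience sketch, not part of the MirrorNeuron tooling; the bundle name my_job_bundle is illustrative, and the placeholder manifest still needs real nodes, edges, and entrypoints before it will validate:

```python
from pathlib import Path
import json

def scaffold_bundle(root: str) -> None:
    """Create the minimal directory layout for a job bundle."""
    bundle = Path(root)
    payloads = bundle / "payloads"
    payloads.mkdir(parents=True, exist_ok=True)
    # payloads/ must exist even when empty; .gitkeep lets git track it
    (payloads / ".gitkeep").touch()
    # Placeholder manifest with the required top-level fields;
    # fill in nodes, edges, and entrypoints before validating
    manifest = {
        "manifest_version": "1.0",
        "graph_id": "my_graph",
        "entrypoints": [],
        "nodes": [],
        "edges": [],
    }
    (bundle / "manifest.json").write_text(json.dumps(manifest, indent=2))

scaffold_bundle("my_job_bundle")
```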

The manifest.json schema

The manifest.json file is a JSON document that describes the full execution graph. The runtime parses, normalizes, and validates it before starting any agents.

Top-level fields

  • manifest_version (String, required) — The manifest format version. Use "1.0".
  • graph_id (String, required) — A unique identifier for the agent graph.
  • job_name (String, optional) — A human-readable name. Defaults to graph_id if omitted.
  • long_lived (Boolean, optional) — Set to true for workflows that run until manually stopped. Defaults to false.
  • metadata (Object, optional) — Arbitrary key-value tags for the job.
  • entrypoints (Array, required*) — A list of node_id strings that receive initial input to start the graph.
  • initial_inputs (Object, optional) — A map of node_id → array of message payloads to seed the job.
  • nodes (Array, required) — The list of agent nodes that make up the workflow.
  • edges (Array, required) — The list of message-routing edges between nodes.
  • policies (Object, optional) — Job-level policies such as recovery_mode.
*If you omit entrypoints, the runtime infers them from any node with "role": "root" or "role": "root_coordinator". At least one entrypoint or one root-role node is required.
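Putting the required top-level fields together, a minimal manifest skeleton might look like the following (the identifiers are placeholders; node and edge fields are described in the next sections):

```json
{
  "manifest_version": "1.0",
  "graph_id": "example_graph",
  "entrypoints": ["start"],
  "nodes": [
    { "node_id": "start", "agent_type": "router" },
    { "node_id": "sink", "agent_type": "aggregator" }
  ],
  "edges": [
    { "from_node": "start", "to_node": "sink", "message_type": "task" }
  ]
}
```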

Node fields

Each object in the nodes array defines a supervised agent. The runtime starts one BEAM process per node.
  • node_id (String, required) — A unique identifier for this node within the graph.
  • agent_type (String, required) — The runtime primitive: "router", "executor", "aggregator", or "sensor".
  • type (String, optional) — The behavioral template: "generic", "map", "reduce", "stream", "batch", or "accumulator". Defaults to "generic".
  • role (String, optional) — A human-readable tag for the agent’s domain role, such as "researcher" or "root_coordinator".
  • config (Object, optional) — Configuration passed to the agent at startup. Interpreted by the agent_type and type.
  • tool_bindings (Array, optional) — Tool binding declarations for the node.
  • retry_policy (Object, optional) — Per-node retry configuration.
  • checkpoint_policy (Object, optional) — Per-node checkpoint configuration.
  • spawn_policy (Object, optional) — Per-node spawn configuration.
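For instance, a router node that applies the "map" template could be declared as follows. Only fields from the table above are used; the node_id and role values are illustrative:

```json
{
  "node_id": "fanout",
  "agent_type": "router",
  "type": "map",
  "role": "dispatcher"
}
```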

Edge fields

Each object in the edges array defines how messages flow from one agent to another.
  • from_node (String, required) — The node_id of the sending agent.
  • to_node (String, required) — The node_id of the receiving agent.
  • message_type (String, required) — The message type that triggers this edge.
  • edge_id (String, optional) — An optional identifier for the edge. Used in error messages.
  • routing_mode (String, optional) — How the message is delivered. Defaults to "broadcast".
  • conditions (Object, optional) — Optional routing conditions.
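An edge that names every optional field explicitly might look like this. The routing_mode shown is "broadcast", which is also the default; the node and edge identifiers are illustrative:

```json
{
  "edge_id": "fanout_to_worker",
  "from_node": "fanout",
  "to_node": "worker",
  "message_type": "task",
  "routing_mode": "broadcast"
}
```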

Policies

The top-level policies object supports the following key:
  • recovery_mode (String) — How the runtime recovers failed agents. Use "local_restart" for local supervised restarts.

The payloads/ directory

The payloads/ directory is where you store any code or data that executor nodes need to run inside the OpenShell sandbox. When the runtime starts an executor node, it stages the files declared in config.uploads by reading them from payloads/ and uploading them into the sandbox at the paths you specify. Declare uploads on an executor node using config.uploads, an array of objects with source and target keys:
  • source — the path relative to payloads/ on the host
  • target — the absolute path inside the sandbox where the file will be placed
{
  "node_id": "python_worker",
  "agent_type": "executor",
  "config": {
    "uploads": [
      {
        "source": "process_data.py",
        "target": "/sandbox/process_data.py"
      }
    ],
    "command": ["python3", "/sandbox/process_data.py"]
  }
}
In this example, the runtime reads my_job_bundle/payloads/process_data.py from disk and uploads it into the sandbox at /sandbox/process_data.py before running the command.
If a source file listed in config.uploads does not exist in the payloads/ directory, the job will fail at startup. Run ./mirror_neuron validate before deploying to catch missing files early.
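You can also catch missing upload sources in your own tooling before submitting the bundle. The following is a sketch under the assumptions above (manifest.json at the bundle root, uploads declared under each node's config.uploads), not a replacement for the validator:

```python
import json
from pathlib import Path

def missing_upload_sources(bundle_dir: str) -> list[str]:
    """Return config.uploads sources that are absent from payloads/."""
    bundle = Path(bundle_dir)
    manifest = json.loads((bundle / "manifest.json").read_text())
    missing = []
    for node in manifest.get("nodes", []):
        for upload in node.get("config", {}).get("uploads", []):
            # Source paths are resolved relative to the payloads/ directory
            if not (bundle / "payloads" / upload["source"]).is_file():
                missing.append(f'{node["node_id"]}: {upload["source"]}')
    return missing
```

An empty result means every declared source file is present on disk.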

Complete manifest example

The following manifest is the full working example from examples/research_flow. It defines a three-node graph: an ingress router fans the input out, a retriever router passes it along, and a reviewer aggregator collects the result and completes the job.
{
  "manifest_version": "1.0",
  "graph_id": "research_flow_v1",
  "job_name": "market-analysis",
  "entrypoints": ["ingress"],
  "initial_inputs": {
    "ingress": [
      {
        "topic": "electric vehicle charging adoption in New England",
        "text": "Collect a short research summary."
      }
    ]
  },
  "nodes": [
    {
      "node_id": "ingress",
      "agent_type": "router",
      "type": "map",
      "role": "root_coordinator",
      "config": {
        "emit_type": "research_request"
      }
    },
    {
      "node_id": "retriever",
      "agent_type": "router",
      "type": "map",
      "role": "researcher"
    },
    {
      "node_id": "reviewer",
      "agent_type": "aggregator",
      "type": "reduce",
      "role": "result_sink",
      "config": {
        "complete_on_message": true
      }
    }
  ],
  "edges": [
    {
      "edge_id": "ingress_to_retriever",
      "from_node": "ingress",
      "to_node": "retriever",
      "message_type": "research_request"
    },
    {
      "edge_id": "retriever_to_reviewer",
      "from_node": "retriever",
      "to_node": "reviewer",
      "message_type": "research_request"
    }
  ],
  "policies": {
    "recovery_mode": "local_restart"
  }
}
This manifest traces a message from ingress → retriever → reviewer. The reviewer aggregator completes the job as soon as it receives the first message because complete_on_message is true.

Validating and running a bundle

1. Validate the bundle

Run the validator before submitting the bundle. It checks the manifest schema, verifies that all node IDs in edges and entrypoints match declared nodes, and confirms that every agent_type and type combination is supported.
./mirror_neuron validate examples/research_flow
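The node-ID cross-check described above can be approximated in plain Python, which is handy inside a test suite. This is a sketch of the idea, not the validator's actual implementation, and it covers only the edge and entrypoint references:

```python
import json
from pathlib import Path

def check_node_references(manifest_path: str) -> list[str]:
    """Report edge endpoints and entrypoints that name undeclared nodes."""
    manifest = json.loads(Path(manifest_path).read_text())
    declared = {n["node_id"] for n in manifest.get("nodes", [])}
    errors = []
    for edge in manifest.get("edges", []):
        # Both endpoints of every edge must be declared in nodes
        for key in ("from_node", "to_node"):
            if edge[key] not in declared:
                errors.append(f"edge references unknown node {edge[key]!r}")
    # Every entrypoint must also name a declared node
    for entry in manifest.get("entrypoints", []):
        if entry not in declared:
            errors.append(f"entrypoint references unknown node {entry!r}")
    return errors
```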
2. Run the bundle

Submit the bundle to the runtime. The runtime loads the manifest, starts a supervised process for each node, and injects the initial_inputs into the entrypoint nodes.
./mirror_neuron run examples/research_flow
3. Inspect the running job

Use job inspect to check the current state of the job and its agents.
./mirror_neuron job inspect <job_id>
For pure-router workflows that do not run external code, you can omit config.uploads entirely and leave the payloads/ directory empty. The runtime does not upload anything if no executor nodes are present.