Workflow editor

The workflow editor uses a graph-first design to visualize and manage DAGs (directed acyclic graphs).

The editor consists of the following parts:

  • Graph view - Displays step nodes and their dependencies
  • Step list - Lists all steps with their basic information
  • Code editor - Edits the script code of the selected step
  • Configuration panel - Sets the step key, name, timeout, and other options

Use the Add step button to add a step. Clicking an added step node enters step editing mode.

The configurable items are:

  • Step key (stepKey) - The stable, unique identifier of the step, used for dependency references and for reading upstream output.
  • Name - The human-readable name shown in the UI.
  • Timeout - The maximum execution time of the step; if exceeded, the step is marked as failed and terminated.
  • Script (ESM) - The code that defines the step's execution logic.

To delete a step:

  • Right-click the step and choose delete
  • Select the step and press the Backspace key
  • In selection mode, select one or more nodes and delete them from the editor's context menu

To disconnect a dependency (edge):

  • Right-click the edge and choose disconnect
  • Select the edge and press the Backspace key
  • Right-click the step and choose the dependency to disconnect

Each step uses an ESM JavaScript module to define its execution logic. The engine creates an independent attempt for each execution and provides a unified main(env, ctx) entry point along with context capabilities.

  • The module must export main(env, ctx) (a default-exported object is recommended).
  • main() must return a JSON-serializable object.
  • The return value must contain an outputs field, and outputs must be an object (an empty object is allowed).

A minimal, correct structure:

export default {
  async main(env, ctx) {
    return { outputs: {} }
  },
}
  • Input - The engine writes input.json to the current attempt and provides the same data view in ctx.input.
  • Upstream - Before the step starts, the engine reads the latest successful output of each dependency and injects it into ctx.upstream.
  • Output - The return value of main() is written to output.json. The outputs field serves as the base data source for downstream steps to read and for aggregating the workflow output.
  • ctx.params - The initial input of this run, from Run Inputs.
  • ctx.upstream - The mapping of upstream step outputs, structured as { ok, timestamp, data }.
  • ctx.artifacts - The registered artifacts; see below.
  • ctx.files - The run directory and IO paths (containing attemptDir, inputPath, outputPath).
  • ctx.run - The run metadata { runId, stepKey, attemptNo }.
  • ctx.log / ctx.warn / ctx.error - Write to the step log.

For example, a step that validates its run inputs:
export default {
  async main(env, ctx) {
    // Read input parameters
    const keyword = String(ctx.params.keyword ?? "").trim()
    const category = String(ctx.params.category ?? "").trim()
    const language = String(ctx.params.language ?? "").trim() || "zh-CN"

    // Validate required parameters
    if (!keyword) throw new Error("Missing param: keyword")
    if (!category) throw new Error("Missing param: category")

    // Read input files.
    // The system automatically merges urlFiles / uploadFiles into ctx.params.files
    const files = Array.isArray(ctx.params.files) ? ctx.params.files : []
    const urlFiles = files.filter((f) => f && typeof f === "object" && f.source === "url")
    const uploadFiles = files.filter((f) => f && typeof f === "object" && f.source === "upload")

    // Validate the number of file inputs
    if (urlFiles.length > 10) throw new Error("Too many urlFiles (max 10)")
    if (uploadFiles.length > 6) throw new Error("Too many uploadFiles (max 6)")

    return {
      outputs: {
        keyword,
        category,
        language,
        urlFilesCount: urlFiles.length,
        uploadFilesCount: uploadFiles.length,
      },
    }
  },
}

2. Read upstream step parameters and pass all parameters to downstream steps


There is a workflow with only two steps:

  • Step 1 (step_1)
  • Step 2 (step_2)

Step 1 is the upstream; Step 2 is the downstream of Step 1.

export default {
  async main(env, ctx) {
    // Read input parameters
    const keyword = String(ctx.params.keyword ?? "").trim()
    const category = String(ctx.params.category ?? "").trim()
    const language = String(ctx.params.language ?? "").trim() || "zh-CN"
    if (!keyword) throw new Error("Missing param: keyword")
    if (!category) throw new Error("Missing param: category")

    // Output: pass all input parameters of this step (including the files field
    // merged by the system) so downstream steps or the output specification
    // can reference them by field
    return { outputs: { ...ctx.params, keyword, category, language } }
  },
}

Read the upstream step output via ctx.upstream; both dot notation (ctx.upstream?.step_1) and bracket notation (ctx.upstream?.["step_1"]) work.
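As a sketch, the script for Step 2 might read Step 1's output like this (assuming step_1 returned the keyword / category / language fields shown above):

```javascript
// Downstream step (step_2): read step_1's latest successful output
const step = {
  async main(env, ctx) {
    // Dot and bracket notation are equivalent here
    const up = ctx.upstream?.step_1 // same as ctx.upstream?.["step_1"]
    if (!up?.ok) throw new Error("Upstream step_1 has no successful output")
    // Fields written by step_1 live under data.outputs
    const { keyword, category, language } = up.data.outputs
    return { outputs: { keyword, category, language } }
  },
}
export default step
```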

3. Read upstream step parameters and pass some parameters to downstream steps


There is a workflow with the following structure:

  • Step 1 (step_1)
  • Step 2 (step_2)
  • Step 3 (step_3)
  • Step 4 (step_4)
  • Step 5 (step_5)
  • Step 6 (step_6)

This example demonstrates that the upstream writes only the necessary fields to outputs, and the downstream reads and forwards them field by field via ctx.upstream.<stepKey>.data.outputs.<field>.

Input entry: read ctx.params and output keyword / limit for downstream steps to reference selectively.

export default {
  async main(env, ctx) {
    const keyword = String(ctx.params.keyword ?? "").trim()
    const limit = Number(ctx.params.limit ?? 10)
    if (!keyword) throw new Error("Missing param: keyword")
    if (!Number.isFinite(limit) || limit <= 0) throw new Error("Invalid param: limit")
    return { outputs: { keyword, limit } }
  },
}

In this example, downstream steps do not rely on passing the whole parameter package through; instead they combine and forward individual fields, which keeps connections stable and makes it easier for the output specification to reference fields.
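A hypothetical downstream step (say, one depending on step_1) that picks only the fields it needs could look like this sketch:

```javascript
// Hypothetical downstream step that depends on step_1
const step = {
  async main(env, ctx) {
    const up = ctx.upstream?.step_1
    if (!up?.ok) throw new Error("Upstream step_1 did not succeed")
    // Forward only the fields this step needs, not the whole package
    const keyword = String(up.data.outputs.keyword ?? "").trim()
    const limit = Number(up.data.outputs.limit ?? 0)
    if (!keyword) throw new Error("Upstream output missing field: keyword")
    if (!Number.isFinite(limit) || limit <= 0) throw new Error("Invalid upstream field: limit")
    return { outputs: { keyword, limit } }
  },
}
export default step
```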

Artifacts will be displayed and downloaded in the Artifacts panel of the run details.

Use ctx.artifacts.writeText(name, text, { kind?, summary? }) to write a UTF-8 text file to the current attempt's artifacts/ directory and register it as an artifact.

export default {
  async main(env, ctx) {
    const report = `# Report\nrunId=${ctx.run.runId}\nstep=${ctx.run.stepKey}\n`
    await ctx.artifacts.writeText("report.md", report, { kind: "file", summary: "Markdown report" })
    return { outputs: { ok: true } }
  },
}

Use ctx.artifacts.writeBytes(name, bytes, { kind?, summary?, encoding? }) to write a binary file to the current attempt's artifacts/ directory and register it as an artifact.

bytes supports Buffer | Uint8Array | ArrayBuffer | string; when bytes is a string, encoding defaults to "base64" (or can be set to "utf8").

export default {
  async main(env, ctx) {
    const pngBase64 = String(ctx.params.pngBase64 ?? "")
    await ctx.artifacts.writeBytes("image.png", pngBase64, { encoding: "base64", summary: "PNG" })
    return { outputs: { hasImage: true } }
  },
}
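Raw byte inputs are also accepted. A sketch with a Uint8Array (any of the supported byte types would work the same way; the file name and payload here are made up):

```javascript
const step = {
  async main(env, ctx) {
    // A tiny binary payload; Buffer | Uint8Array | ArrayBuffer are accepted directly
    const bytes = new Uint8Array([0x47, 0x49, 0x46]) // "GIF" magic bytes as sample data
    await ctx.artifacts.writeBytes("blob.bin", bytes, { kind: "file", summary: "Raw bytes" })
    return { outputs: { size: bytes.length } }
  },
}
export default step
```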

Use ctx.artifacts.registerFile(absPath, { kind?, name?, summary? }) to register an existing file as an artifact.

Due to security restrictions, only files in the current attempt directory are accepted.

import path from "node:path"
import fs from "node:fs/promises"

export default {
  async main(env, ctx) {
    const p = path.join(ctx.files.dirs.attemptDir, "artifacts", "raw.json")
    await fs.mkdir(path.dirname(p), { recursive: true })
    await fs.writeFile(p, JSON.stringify({ ts: Date.now() }, null, 2), "utf8")
    await ctx.artifacts.registerFile(p, { summary: "Raw payload" })
    return { outputs: { rawPath: p } }
  },
}

Input and output are the core concepts of a workflow. The input defines which parameters the workflow requires; the output defines which results the workflow produces.

The editor supports the following shortcuts:

  • Q - Switch to horizontal layout
  • W - Switch to vertical layout
  • E - Switch to custom layout
  • V - Pan mode
  • S - Selection mode
  • ⌘ + A / Ctrl + A - Select all steps