API Workflow Automation: Testing Multi-Step Business Logic
Real API behavior is not a single request and response. It is a workflow: authenticate, create a resource, verify it exists, update it, delete it, confirm it is gone. Each step depends on the last. A Flow is the structure that makes these multi-step workflows testable, repeatable, and automatable.
Single-endpoint API tests verify that one request returns the right response. That catches syntax errors and basic contract violations. It does not catch the bugs that matter most: broken authentication chains, create-then-read inconsistencies, failed cascading deletes, or race conditions between dependent operations.
Workflow testing is the practice of testing sequences of API calls where each step produces data that the next step consumes. The test validates the entire business path, not just individual endpoints. In DevTools, this is called a Flow: a directed graph of requests, validations, conditions, and loops that execute in dependency order.
This guide covers what workflow testing is, the common patterns that cover most real-world APIs, how data passes between steps, how to handle branching and errors, and how to build workflows both visually and as YAML.
How to Build an End-to-End API Test: Login, Create, Verify, Delete
Step-by-step tutorial building a 6-step flow with auth, CRUD, validation, and cleanup. Full YAML and visual examples.
HAR to YAML API Test Flow: Auto-Extract Variables + Chain Requests
Turn real browser traffic into a deterministic YAML flow with auto-mapped tokens, IDs, and explicit request chaining.
What is workflow testing?
Workflow testing validates a complete business operation across multiple API calls. The key distinction from single-endpoint testing: data flows between steps. The response from step 1 feeds the request in step 2. If any link in the chain breaks, the workflow fails.
Single-endpoint vs workflow testing
| Dimension | Single-endpoint test | Workflow test (Flow) |
|---|---|---|
| Scope | One request, one response | Multiple requests, chained data |
| Auth | Hardcoded token or pre-configured | Login step produces token, downstream steps use it |
| Data | Static fixture | Generated IDs, tokens, cursors flow between steps |
| Cleanup | Manual or external | Delete step at the end of the flow |
| Bugs caught | Contract violations, status codes | Chain breaks, auth failures, consistency bugs, cascade errors |
What a Flow looks like
A Flow is a directed graph where each node is either a request, a JavaScript validation, a condition, or a loop. Edges between nodes represent data dependencies: "this step needs the output of that step." In YAML, this is expressed with depends_on and variable references like {{Auth.token}} or {{CreateOrder.response.body.id}}.
# A minimal workflow: authenticate → create → verify → cleanup
Login → Auth (extract token)
↓
CreateOrder (uses Auth.token)
→ ValidateCreate (extract orderId)
↓
GetOrder (uses Auth.token, orderId)
→ VerifyRead (validate response)
↓
DeleteOrder (uses Auth.token, orderId)
    → ConfirmGone (verify 404)
Real-world workflow patterns
Most API workflows fall into a handful of patterns. Each pattern has a consistent structure: authenticate, act, validate, and clean up. The differences are in what happens between auth and cleanup.
Pattern 1: CRUD lifecycle (the foundation)
The most common workflow: create a resource, read it back, update it, delete it, confirm it is gone. This single pattern covers the bulk of everyday API testing needs.
workspace_name: Order Lifecycle
env:
BASE_URL: '{{BASE_URL}}'
flows:
- name: OrderCRUD
steps:
- request:
name: Login
method: POST
url: '{{BASE_URL}}/auth/login'
headers:
Content-Type: application/json
body:
email: '{{#env:TEST_EMAIL}}'
password: '{{#env:TEST_PASSWORD}}'
- js:
name: Auth
code: |
export default function(ctx) {
if (ctx.Login?.response?.status !== 200) throw new Error("Login failed");
return { token: ctx.Login.response.body.access_token };
}
depends_on: Login
- request:
name: CreateOrder
method: POST
url: '{{BASE_URL}}/api/orders'
headers:
Authorization: 'Bearer {{Auth.token}}'
Content-Type: application/json
body:
items:
- sku: 'WIDGET-001'
qty: 2
depends_on: Auth
- js:
name: Order
code: |
export default function(ctx) {
const resp = ctx.CreateOrder?.response;
if (resp?.status !== 201) throw new Error("Expected 201");
if (!resp?.body?.id) throw new Error("Missing order ID");
return { id: resp.body.id };
}
depends_on: CreateOrder
- request:
name: GetOrder
method: GET
url: '{{BASE_URL}}/api/orders/{{Order.id}}'
headers:
Authorization: 'Bearer {{Auth.token}}'
depends_on: Order
- js:
name: VerifyRead
code: |
export default function(ctx) {
if (ctx.GetOrder?.response?.status !== 200) throw new Error("Read failed");
if (ctx.GetOrder?.response?.body?.items?.length !== 1) throw new Error("Items mismatch");
return { verified: true };
}
depends_on: GetOrder
- request:
name: DeleteOrder
method: DELETE
url: '{{BASE_URL}}/api/orders/{{Order.id}}'
headers:
Authorization: 'Bearer {{Auth.token}}'
depends_on: VerifyRead
- request:
name: VerifyGone
method: GET
url: '{{BASE_URL}}/api/orders/{{Order.id}}'
headers:
Authorization: 'Bearer {{Auth.token}}'
depends_on: DeleteOrder
- js:
name: ConfirmDeleted
code: |
export default function(ctx) {
if (ctx.VerifyGone?.response?.status !== 404) throw new Error("Still exists");
return { deleted: true };
}
depends_on: VerifyGone
Two variables flow through the entire chain: {{Auth.token}} for authorization and {{Order.id}} for the resource. Every step validates its response before the next step runs. The final delete and verify-gone steps ensure cleanup.
For a detailed walkthrough of each step, see: How to Build an End-to-End API Test: Login, Create, Verify, Delete.
Pattern 2: Payment flow (multi-resource orchestration)
Payment workflows touch multiple resources: create a cart, add items, apply a discount, submit payment, verify the order. Each step produces IDs or tokens consumed by later steps.
# Payment flow structure (simplified)
steps:
- request: { name: Login, ... }
- js: { name: Auth, ..., depends_on: Login }
- request: { name: CreateCart, ..., depends_on: Auth }
- js: { name: Cart, ..., depends_on: CreateCart } # extract cartId
- request: { name: AddItem, ..., depends_on: Cart } # uses cartId
- request: { name: ApplyDiscount, ..., depends_on: AddItem }
- request: { name: SubmitPayment, ..., depends_on: ApplyDiscount }
- js: { name: Payment, ..., depends_on: SubmitPayment } # extract paymentId
- request: { name: GetReceipt, ..., depends_on: Payment } # uses paymentId
- js: { name: VerifyReceipt, ..., depends_on: GetReceipt }
The chain is longer but the principle is identical: each step references outputs from previous steps. The key variables are Auth.token, Cart.cartId, and Payment.paymentId.
Pattern 3: User onboarding (multi-API orchestration)
Onboarding workflows often span multiple APIs: create user in auth service, provision resources in a second service, send a welcome email via a third. The test validates that all three services coordinate correctly.
# Multi-service onboarding
steps:
- request: { name: AdminLogin, ... }
- js: { name: AdminAuth, ..., depends_on: AdminLogin }
- request: { name: CreateUser, ..., depends_on: AdminAuth }
- js: { name: NewUser, ..., depends_on: CreateUser } # extract userId
- request: { name: ProvisionWorkspace, ..., depends_on: NewUser } # uses userId
- js: { name: Workspace, ..., depends_on: ProvisionWorkspace } # extract workspaceId
- request: { name: CheckEmail, ..., depends_on: Workspace } # verify welcome email sent
- request: { name: UserLogin, ..., depends_on: Workspace } # new user can log in
- js: { name: VerifyAccess, ..., depends_on: UserLogin } # verify workspace access
Pattern 4: Webhook chain (async verification)
For async operations (webhook delivery, job processing, event propagation), the workflow triggers an action and then polls until the expected outcome appears.
# Webhook delivery verification
steps:
- request: { name: TriggerEvent, ..., depends_on: Auth }
- for:
name: PollDelivery
iter_count: 10
loop: CheckDeliveryStatus
depends_on: TriggerEvent
- js:
name: VerifyDelivered
code: |
export default function(ctx) {
const status = ctx.CheckDeliveryStatus?.response?.body?.status;
if (status !== 'delivered') throw new Error("Not delivered: " + status);
return { delivered: true };
}
depends_on: PollDelivery
The for loop polls up to 10 times. When the delivery status changes to "delivered," the validation passes. If it never appears, the loop exhausts its attempts and the verification fails.
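The polling semantics can be sketched in plain JavaScript. This is an illustrative model of the pattern, not the DevTools runtime: checkStatus stands in for the CheckDeliveryStatus request, and the option names are invented for the example.

```javascript
// Sketch of bounded polling: call a status check up to maxAttempts times,
// stopping early on success and failing loudly when attempts run out.
async function pollUntilDelivered(checkStatus, { maxAttempts = 10, delayMs = 1000 } = {}) {
  let last = { status: "unknown" };
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    last = await checkStatus();
    if (last.status === "delivered") {
      return { delivered: true, attempts: attempt };
    }
    // Wait before the next poll; real flows often add backoff here.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Not delivered after " + maxAttempts + " attempts: " + last.status);
}
```

The important property is the bound: a flaky async operation gets a fair window to complete, but a broken one fails with a clear message instead of hanging the flow.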
Data passing between steps
Data passing is what makes a collection of requests into a workflow. Without it, each request is independent. With it, the output of one step feeds the input of the next.
Three ways to pass data
| Method | Syntax | Best for |
|---|---|---|
| Direct response reference | {{Login.response.body.token}} | Simple values from a previous response |
| JS node extraction | {{Auth.token}} | Validated/transformed values, cleaner downstream references |
| Environment variable | {{#env:VAR_NAME}} | Secrets, base URLs, CI-injected configuration |
Direct reference vs JS extraction
Direct references ({{Login.response.body.access_token}}) require no extra node but are verbose. JS extraction nodes ({{Auth.token}}) are better when you need to:
- Validate before passing: check that the value exists and is the right type before downstream steps use it
- Transform the value: extract a nested field, combine values, or compute a derived value
- Simplify downstream references: {{Auth.token}} is easier to read than {{Login.response.body.access_token}} when used five times
# JS extraction node: validates and simplifies the reference
- js:
name: Auth
code: |
export default function(ctx) {
const token = ctx.Login?.response?.body?.access_token;
if (!token) throw new Error("No token in login response");
return { token };
}
depends_on: Login
# Downstream steps use the clean reference
- request:
name: CreateOrder
headers:
Authorization: 'Bearer {{Auth.token}}'
depends_on: Auth
- request:
name: GetProfile
headers:
Authorization: 'Bearer {{Auth.token}}'
depends_on: Auth
The dependency graph
Every depends_on declaration and variable reference creates an edge in the dependency graph. The runtime resolves this graph and executes steps in topological order: a step runs only when all its dependencies have completed.
This means independent steps can run in parallel. If step C depends on A but not B, and step D depends on B but not A, then C and D can execute simultaneously. The graph determines the execution order, not the position in the YAML file.
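The resolution rule can be made concrete with a small JavaScript sketch. The graph shape and step names here are invented for illustration and the function is not part of any DevTools API; it just shows how depends_on edges determine order:

```javascript
// Depth-first topological sort over a depends_on graph.
// steps maps each step name to the list of names it depends on.
function executionOrder(steps) {
  const order = [];
  const visited = new Set();
  const visiting = new Set(); // tracks the current path to detect cycles

  function visit(name) {
    if (visited.has(name)) return;
    if (visiting.has(name)) throw new Error("Dependency cycle at " + name);
    visiting.add(name);
    for (const dep of steps[name] || []) visit(dep); // dependencies first
    visiting.delete(name);
    visited.add(name);
    order.push(name);
  }

  Object.keys(steps).forEach(visit);
  return order;
}

// C depends on A, D depends on B. A/B and C/D have no ordering between
// them, so a runtime is free to execute those pairs in parallel.
const order = executionOrder({ A: [], B: [], C: ["A"], D: ["B"] });
```

Any ordering where A precedes C and B precedes D is valid, which is exactly why the position of a step in the YAML file does not determine when it runs.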
For the complete variable syntax reference including auto-mapping from HAR imports, see: HAR to YAML: Auto-Extract Variables + Chain Requests.
Conditional branching and error handling
Not every workflow is a straight line. Real APIs return different statuses for the same operation (201 vs 409 for idempotent creates), require different paths based on response data, and need graceful handling when steps fail.
Conditional branching with if nodes
The if: step type routes execution based on a condition:
# Handle idempotent create: 201 (new) vs 409 (already exists)
- request:
name: CreateItem
method: POST
url: '{{BASE_URL}}/api/items'
headers:
Authorization: 'Bearer {{Auth.token}}'
Content-Type: application/json
body:
name: 'test-item-{{run_id}}'
depends_on: Auth
- if:
name: CheckCreate
condition: CreateItem.response.status == 201
then: UseNewItem
else: FetchExisting
depends_on: CreateItem
# Path A: item was created
- js:
name: UseNewItem
code: |
export default function(ctx) {
return { itemId: ctx.CreateItem.response.body.id };
}
# Path B: item already exists, fetch it
- request:
name: FetchExisting
method: GET
url: '{{BASE_URL}}/api/items?name=test-item-{{run_id}}'
headers:
Authorization: 'Bearer {{Auth.token}}'
- js:
name: UseExisting
code: |
export default function(ctx) {
return { itemId: ctx.FetchExisting.response.body.items[0].id };
}
depends_on: FetchExisting
Loops for polling and iteration
Two loop types handle repetitive operations:
- for: — repeat a step N times. Use it for polling a status endpoint or retrying an operation with backoff.
- for_each: — iterate over a list of items. Use it for processing batch results, creating multiple resources, or validating each item in a collection.
# Poll until a job completes (max 10 attempts)
- for:
name: WaitForJob
iter_count: 10
loop: CheckJobStatus
depends_on: StartJob
# Process each item in a list response
- for_each:
name: ValidateItems
items: ListItems.response.body.items
loop: ValidateSingleItem
depends_on: ListItems
Error handling: fail fast, fail clear
The best error handling in workflow tests is not try/catch. It is fail-fast validation at every step:
- Validate after every important request: a JS node that checks status and required fields. If it throws, the flow stops and the error message tells you exactly which step failed and why.
- Use meaningful error messages: throw new Error("Expected 201, got " + resp?.status) is actionable; throw new Error("fail") is not.
- Always clean up: if your flow creates test data, add a delete step at the end. If the flow fails mid-way, run a separate cleanup flow or use unique resource names that can be garbage-collected.
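The per-step checks tend to repeat, so many teams factor them into a tiny helper in the style of the flow's JS validation nodes. This is a hedged sketch: expect and its option names are invented here, not a built-in DevTools function.

```javascript
// Fail-fast response check: verify status and required body fields,
// throwing an actionable error that names exactly what was wrong.
function expect(resp, { status, requiredFields = [] }) {
  if (!resp) throw new Error("No response recorded for this step");
  if (resp.status !== status) {
    throw new Error("Expected " + status + ", got " + resp.status);
  }
  for (const field of requiredFields) {
    if (resp.body?.[field] === undefined) {
      throw new Error("Missing required field: " + field);
    }
  }
  return resp.body;
}

// Inside a validation node this might look like:
//   const body = expect(ctx.CreateOrder?.response, { status: 201, requiredFields: ["id"] });
//   return { id: body.id };
```

Because the helper throws, a failing step stops the flow immediately with a message tied to that step, which is the fail-fast behavior described above.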
Building workflows visually
Writing YAML by hand works for simple workflows. For flows with 10+ steps, conditional branches, and loops, a visual editor is faster. DevTools Flows provides a canvas where you drag request nodes, draw dependency edges, and configure each step. The visual graph and the YAML are two representations of the same flow. Edit either one.
The visual workflow
1. Create a Flow: open DevTools Studio, click New Flow. You start with a canvas and a Start node.
2. Add request nodes: drag HTTP Request nodes onto the canvas for each API call in your workflow.
3. Connect them: draw edges from each node to the next. This sets execution order via depends_on.
4. Configure requests: click each node, fill in method, URL, headers, body. Variable references like {{Login.response.body.access_token}} create data dependencies automatically.
5. Add validation: insert JS nodes between steps to validate responses and extract values, or use if nodes for status-based routing.
6. Run: click Run Flow. Each node shows green (pass) or red (fail). Click any node to inspect its request, response, and validation output.
7. Export to YAML: right-click → Export to YAML. The result is commit-ready for Git and CI.
Starting from real traffic
Instead of building from scratch, import a HAR file from a real browser session. DevTools converts the HTTP traffic into a Flow with auto-mapped variables. You review the generated graph, delete noise (analytics, third-party requests), and refine the variable references.
This is the fastest path from "I have a working workflow in the browser" to "I have an automated test in Git." For the full pipeline, see: HAR to YAML: Auto-Extract Variables + Chain Requests.
Running in CI
Once the YAML is committed, run it in CI like any other test:
# Run locally
devtools flow run tests/workflows/order-lifecycle.yaml
# Run in GitHub Actions with JUnit reporting
devtools flow run tests/workflows/*.yaml --report junit
For the full CI setup with parallel runs, caching, and artifact uploads, see: API Testing in CI/CD: From Manual Clicks to Automated Pipelines.
Common pitfalls
Missing extraction nodes
If a request produces data you need downstream (a token, an ID), you must add a JS node to extract it. Subsequent steps reference the JS node's output, not the raw response. Missing this step means the variable is undefined and the flow fails with a cryptic error several steps later.
Validating only at the end
If you only validate at the last step, a failure in step 2 produces a confusing error in step 6. Add a validation node after each important request. Failures should be immediately attributable to the step that broke.
Hardcoded IDs and tokens
Replacing {{Order.id}} with a hardcoded UUID makes the test pass once and break everywhere else. Always extract IDs from the response that created them. Always obtain tokens from the login step.
Skipping cleanup
If your flow creates data and does not clean up, repeated runs accumulate garbage. List endpoints slow down, unique constraints fail, and tests break for environmental reasons. Always delete what you created.
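One way to make leftover data recoverable is a naming convention that encodes creation time, so a scheduled job can sweep anything older than a cutoff. The prefix format below is an assumption for illustration, not a DevTools requirement:

```javascript
// Generate unique test-resource names of the form <prefix>-<timestamp36>-<random>,
// so failed runs leave behind data that a cleanup sweep can identify and delete.
function testResourceName(prefix = "flowtest") {
  const stamp = Date.now().toString(36);            // creation time, base-36
  const rand = Math.random().toString(36).slice(2, 8); // collision avoidance
  return `${prefix}-${stamp}-${rand}`;
}

// Decide whether a name matches the convention and was created before cutoffMs.
function isStaleTestResource(name, cutoffMs, prefix = "flowtest") {
  const parts = name.split("-");
  if (parts[0] !== prefix || parts.length !== 3) return false; // not test data
  const createdAt = parseInt(parts[1], 36);
  return Number.isFinite(createdAt) && createdAt < cutoffMs;
}
```

A nightly cleanup flow can then list resources, filter with the stale check, and delete the matches, so even runs that crash mid-way do not accumulate garbage forever.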
FAQ
What is the difference between a workflow test and an end-to-end test?
They overlap significantly. An end-to-end API test validates a complete user journey (login, action, verify). A workflow test is the same concept but emphasizes the multi-step structure: data passing, branching, error handling, and cleanup as first-class concerns. In DevTools, both are built as Flows.
How many steps should a workflow have?
Most useful workflows have 4-12 steps. Fewer than 4 means you are probably testing a single endpoint, not a workflow. More than 15-20 becomes hard to debug when something fails. Split large workflows into focused flows that test one business path each.
Can workflows handle async operations like webhooks?
Yes, using polling patterns. After triggering an async operation, add a for loop that polls a status endpoint until the expected state appears or a timeout is reached. For webhook testing, poll the webhook delivery log or use a test endpoint that records deliveries.
Should I build workflows visually or write YAML by hand?
Both work. The visual editor is faster for initial flow creation and for complex branching. YAML editing is faster for small tweaks and refactoring. The visual graph and YAML are two views of the same flow — edit either one. Many teams build visually and refine in YAML.
How do I test the same workflow against different environments?
Parameterize the base URL and credentials using environment variables. In YAML, use {{#env:VAR_NAME}} to read from the OS environment. In CI, inject different values per environment from your platform's secret store. The workflow definition stays the same; only the environment changes.
Start with one workflow
You do not need a comprehensive workflow suite on day one. Pick the single most important business path in your API: likely auth + CRUD for your core resource. Build it as a Flow, run it locally, export to YAML, and add it to CI. That single workflow, running on every PR, catches more bugs than a hundred isolated endpoint tests that nobody remembers to run.
DevTools is built for this: visual Flow builder, HAR import, YAML export, CI runner. Try it at dev.tools.