YAML-Native API Testing: Define, Version, and Run Tests as Code
YAML-native API testing means your test definitions are declarative YAML files, not scripts hidden in a GUI or code buried in a test framework. Define requests, chain variables, assert contracts, and run everything in CI, all from files you can review in a pull request.
Most API testing tools store test definitions in a proprietary format: Postman uses collection JSON with embedded JavaScript, Bruno uses a custom .bru DSL, and coded frameworks like RestAssured or SuperTest couple tests to a specific programming language. All of these work. None of them are natively reviewable in a pull request, diffable in Git, or runnable without their specific runtime.
YAML-native API testing is a different approach: your tests are plain YAML files. Requests, variables, assertions, and execution order are all declared in a format that every engineer already knows how to read, every editor already highlights, and every CI system already handles. The test file is the test, no compilation or export step required.
This guide covers the full approach: what YAML-native testing is, why it matters, the syntax patterns, variable passing, assertions, environment management, and how YAML tests integrate with Git and CI workflows.
What is YAML-native API testing?
YAML-native API testing means your API test definitions are stored, versioned, reviewed, and executed as plain YAML files. The YAML file is the single source of truth for what the test does. There is no separate UI state, no exported collection, and no compiled artifact.
A YAML test file defines:
- Requests with method, URL, headers, and body
- Variables that flow between steps via explicit references
- Assertions via JavaScript nodes that validate contracts
- Execution order via depends_on declarations
- Control flow via conditions, loops, and for-each iteration
The result is a test that reads like a specification. A reviewer can open the YAML file and understand the entire workflow without running anything or opening a GUI.
How it compares to other approaches
| Approach | Format | PR reviewable | CI-native | Editor support |
|---|---|---|---|---|
| Postman + Newman | Collection JSON + JS scripts | Noisy diffs | Via Newman CLI | Postman UI |
| Bruno | Custom .bru DSL | Readable, but custom syntax | Via Bruno CLI | Bruno app |
| Coded (RestAssured, SuperTest) | Java / JS / Python | Code review, but verbose | Native | Any IDE |
| YAML-native (DevTools) | Plain YAML | Clean, line-oriented diffs | CLI + any CI | Any editor + DevTools Studio |
Why YAML over code or GUI
The argument for YAML is not that it is better at expressing logic than Python or JavaScript. It is that API tests are mostly declarations, not logic. A test says: send this request with these headers, expect this status, check these fields, then use this value in the next request. That is a data structure, not an algorithm.
Declarations are easier to review
A YAML file that defines a request with headers, body, and expected status is immediately readable. A Java method that builds an HTTP client, sets headers, sends a request, extracts a response, and asserts on fields buries the same information in boilerplate. In a pull request, the YAML diff shows exactly what changed. The code diff requires context.
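For contrast, the declarative version of such a check fits in a few lines. This sketch follows the syntax used throughout this guide; the endpoint and field names are placeholders:
# The whole step: request plus contract check, no HTTP client boilerplate
- request:
    name: GetUser
    method: GET
    url: '{{BASE_URL}}/api/users/42'
- js:
    name: CheckUser
    code: |
      export default function(ctx) {
        if (ctx.GetUser?.response?.status !== 200) throw new Error("Expected 200");
        return { ok: true };
      }
    depends_on: GetUser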
One format for human and machine
YAML files can be generated from browser traffic (HAR imports), edited by hand, modified by a visual editor, linted by CI, and executed by a CLI runner. The same file serves all these roles because YAML is a data format, not a program. You do not need a build step, a compilation target, or a UI session to understand what the test does.
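As one example of generic tooling, a CI lint step needs nothing more than a stock YAML linter. A minimal sketch, assuming yamllint is installed on the runner:
# GitHub Actions — any off-the-shelf YAML linter can check flow files
- name: Lint YAML flows
  run: yamllint tests/api/*.yaml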
Logic lives in JS nodes, not everywhere
When you do need logic (conditional assertions, value transformations, complex validation), it goes in explicit js: nodes. This keeps the declarative parts clean and the imperative parts contained. A reviewer knows that the request definition is data and the JS node is logic. In coded frameworks, both are interleaved.
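Here is a sketch of what a contained transformation node looks like, using the js: syntax shown throughout this guide (the field being transformed is illustrative):
# Imperative logic stays inside one labeled node
- js:
    name: NormalizeSlug
    code: |
      export default function(ctx) {
        const raw = ctx.CreateItem.response.body.name;
        return { slug: raw.toLowerCase().replace(/\s+/g, '-') };
      }
    depends_on: CreateItem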
Anatomy of a YAML test file
A YAML test file in DevTools has a consistent structure. Here is a minimal example that authenticates, creates a resource, and verifies it:
workspace_name: Bookmarks API
env:
BASE_URL: '{{BASE_URL}}'
run:
- flow: BookmarkTest
flows:
- name: BookmarkTest
variables:
- name: run_id
value: 'ci-001'
steps:
- request:
name: Login
method: POST
url: '{{BASE_URL}}/auth/login'
headers:
Content-Type: application/json
body:
email: 'test@example.com'
password: 'password123'
- js:
name: Auth
code: |
export default function(ctx) {
if (ctx.Login?.response?.status !== 200) throw new Error("Login failed");
return { token: ctx.Login.response.body.access_token };
}
depends_on: Login
- request:
name: CreateBookmark
method: POST
url: '{{BASE_URL}}/api/bookmarks'
headers:
Authorization: 'Bearer {{Auth.token}}'
Content-Type: application/json
body:
url: 'https://example.com/test-{{run_id}}'
title: 'Test Bookmark'
depends_on: Auth
- js:
name: ValidateCreate
code: |
export default function(ctx) {
const resp = ctx.CreateBookmark?.response;
if (resp?.status !== 201) throw new Error("Expected 201");
if (!resp?.body?.id) throw new Error("Missing ID");
return { id: resp.body.id };
}
depends_on: CreateBookmark
- request:
name: GetBookmark
method: GET
url: '{{BASE_URL}}/api/bookmarks/{{ValidateCreate.id}}'
headers:
Authorization: 'Bearer {{Auth.token}}'
depends_on: ValidateCreate
- js:
name: VerifyRead
code: |
export default function(ctx) {
if (ctx.GetBookmark?.response?.status !== 200) throw new Error("Expected 200");
if (ctx.GetBookmark?.response?.body?.title !== "Test Bookmark") throw new Error("Mismatch");
return { verified: true };
}
depends_on: GetBookmark
Key structural elements
| Element | Purpose | Example |
|---|---|---|
| workspace_name | Labels the file for reporting | Bookmarks API |
| env | Environment variable declarations | BASE_URL: '{{BASE_URL}}' |
| run | Which flows to execute and in what order | - flow: BookmarkTest |
| flows[].steps | The sequence of requests, JS nodes, conditions, loops | - request:, - js:, - if: |
| depends_on | Explicit execution order between steps | depends_on: Login |
| variables | Flow-scoped variables for test data | - name: run_id\n  value: 'ci-001' |
Step types
YAML flows support five step types, each a first-class node:
- request: — HTTP request with method, URL, headers, body
- js: — JavaScript node for validation, extraction, or transformation
- if: — Conditional branching based on a previous step's output
- for: — Loop a step N times (polling, retries)
- for_each: — Iterate over a list of items
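The if: and for: types appear in the request chaining patterns later in this guide. for_each is not shown elsewhere here, so the sketch below is only a guess at its shape: the items and loop keys are assumptions to verify against your runner's schema.
# Hypothetical for_each shape — the items and loop keys are assumptions, not confirmed syntax
- for_each:
    name: CheckEachProject
    items: '{{ListProjects.response.body.projects}}'
    loop: GetProjectDetail
    depends_on: ListProjects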
Variable passing and references
Variable passing is what turns a list of independent requests into a connected test flow. In YAML-native testing, variables flow between steps through explicit references that are visible in the file.
Reference syntax
There are three types of references in YAML flows:
- Request output: {{NodeName.response.body.field}} — references a field from a previous request's response
- JS node output: {{JsNodeName.field}} — references a value returned by a JS node
- Environment variable: {{#env:VAR_NAME}} — reads from OS environment at runtime
# Direct request output reference
- request:
name: GetProfile
url: '{{BASE_URL}}/api/me'
headers:
Authorization: 'Bearer {{Login.response.body.access_token}}'
depends_on: Login
# JS node extraction + downstream reference
- js:
name: Auth
code: |
export default function(ctx) {
return { token: ctx.Login.response.body.access_token };
}
depends_on: Login
- request:
name: CreateItem
url: '{{BASE_URL}}/api/items'
headers:
Authorization: 'Bearer {{Auth.token}}'
depends_on: Auth
Both patterns work. Direct references ({{Login.response.body.access_token}}) are simpler. JS extraction nodes ({{Auth.token}}) are better when you need to validate the value before passing it downstream or when the reference path is deeply nested.
Auto-mapping from HAR imports
When you import a HAR file into DevTools, the tool auto-detects values that appear in one response and reappear in a subsequent request (tokens, IDs, CSRF values). It converts these into explicit {{NodeName.response.body.field}} references. You review and adjust the mapping rather than wiring everything by hand.
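For example, a captured request carrying a literal token is rewritten during import into a reference to the step that produced it. The request names here are illustrative:
# Before import, the header held a literal captured value:
#   Authorization: 'Bearer eyJhbGci...'
# After auto-mapping, it points at the producing step:
- request:
    name: GetOrders
    method: GET
    url: '{{BASE_URL}}/api/orders'
    headers:
      Authorization: 'Bearer {{Login.response.body.access_token}}'
    depends_on: Login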
For the full HAR-to-YAML pipeline, see: HAR to YAML API Test Flow: Auto-Extract Variables + Chain Requests.
Assertions in YAML
Assertions in YAML flows are written as js: nodes that throw on failure. This keeps assertions explicit, reviewable, and flexible: you can check anything JavaScript can express.
Four assertion categories
Most API assertions fall into four categories, each with different determinism trade-offs:
| Category | What to check | Determinism |
|---|---|---|
| Status code | Exact code or set of allowed codes | High |
| JSON body | Field presence, types, stable values | High if you avoid timestamps and UUIDs |
| Headers | Content-Type, Cache-Control, Location | High |
| Timing | Response time budget (ceiling, not benchmark) | Medium, needs headroom for CI variance |
Assertion examples
# Status + body contract
- js:
name: ValidateWidget
code: |
export default function(ctx) {
const resp = ctx.GetWidget?.response;
if (resp?.status !== 200) throw new Error("Expected 200, got " + resp?.status);
if (!resp?.body?.id) throw new Error("Missing id");
if (typeof resp?.body?.name !== 'string') throw new Error("name not a string");
if (!resp?.body?.tags?.includes("public")) throw new Error("Missing 'public' tag");
return { validated: true };
}
depends_on: GetWidget
# Header assertion
- js:
name: ValidateHeaders
code: |
export default function(ctx) {
const ct = ctx.DownloadPdf?.response?.headers?.["content-type"];
if (!/^application\/pdf/.test(ct)) throw new Error("Bad content-type: " + ct);
return { validated: true };
}
depends_on: DownloadPdf
# Timing budget
- js:
name: CheckTiming
code: |
export default function(ctx) {
const ms = ctx.Search?.response?.duration;
if (ms > 500) throw new Error("Too slow: " + ms + "ms, budget 500ms");
return { duration: ms };
}
depends_on: Search
For a comprehensive treatment of assertion patterns, see: API Assertions in YAML: Status, JSON Paths, Headers, Timing Thresholds.
Environments and secrets
YAML test files need to run against different environments (local, staging, production) without hardcoding URLs or credentials. The pattern is environment references that resolve at runtime.
Environment variables in YAML
# Top-level env block maps flow variables to OS environment
env:
BASE_URL: '{{BASE_URL}}'
# Flow-level variables for test data
flows:
- name: MyFlow
variables:
- name: api_key
value: '{{#env:SECRET_API_KEY}}'
- name: run_id
value: 'ci-20260214'
- name: test_email
value: 'test@example.com'
The {{#env:VAR_NAME}} syntax reads from OS environment variables at runtime. In CI, inject secrets via your platform's secret store:
# GitHub Actions — inject secrets as env vars
- name: Run API tests
run: devtools flow run tests/*.yaml --report junit
env:
BASE_URL: https://staging.example.com
SECRET_API_KEY: ${{ secrets.API_KEY }}
What to never commit
- API keys, tokens, passwords (use #env: references)
- Raw HAR files (they contain session cookies and tokens)
- Environment files with real credentials (.env should be gitignored)
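A minimal .gitignore sketch covering the last two items (patterns illustrative, adjust to your layout):
# Keep raw captures and real credentials out of Git
*.har
.env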
What to commit: the YAML flow files, environment templates with placeholder values, and CI workflow definitions.
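For illustration, the committed staging.env.template shown in the repository layout later in this guide might hold nothing but placeholders:
# staging.env.template — placeholder values only; copy to .env and fill in locally
BASE_URL=https://staging.example.com
SECRET_API_KEY=replace-me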
From browser traffic to YAML test
One of the strongest advantages of YAML-native testing is the HAR-to-YAML pipeline: capture a real user workflow in the browser, convert it to a YAML test, then refine it into a deterministic, reviewable flow.
The pipeline
- Record a focused workflow in Chrome DevTools Network panel (login, create, verify)
- Export the capture as a HAR file ("Save all as HAR with content")
- Import the HAR into DevTools Studio, which generates a YAML flow
- Refine: delete noise (preflights, analytics), replace secrets with env refs, review auto-mapped variables
- Commit the YAML file to Git for PR review and CI execution
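Step 5 feeds the same CLI that CI uses, so you can verify the refined flow locally first. This sketch assumes the runner accepts a single file the same way it accepts the glob shown in the CI example later:
# Run the refined flow locally before committing (same CLI as CI)
devtools flow run tests/api/auth-flow.yaml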
What gets auto-mapped
When DevTools imports a HAR, it detects values that appear in one response and reappear in a subsequent request. These become variable references:
- Auth tokens from login responses → Authorization headers
- Resource IDs from create responses → URL path parameters
- CSRF tokens from bootstrap responses → mutation headers
- Pagination cursors from list responses → next-page query params
For the full browser-to-test workflow, see: Chrome Web Developer Tools: Record Requests for Replayable Tests.
Request chaining patterns
Request chaining is what makes YAML tests useful for real workflows. Every multi-step test relies on data flowing between requests. Here are the patterns that scale.
Pattern 1: Login → use token
- request:
name: Login
method: POST
url: '{{BASE_URL}}/auth/login'
headers:
Content-Type: application/json
body:
email: '{{USERNAME}}'
password: '{{PASSWORD}}'
- request:
name: ListProjects
method: GET
url: '{{BASE_URL}}/api/projects'
headers:
Authorization: 'Bearer {{Login.response.body.access_token}}'
depends_on: Login
Pattern 2: Create → capture ID → fetch
- request:
name: CreateWidget
method: POST
url: '{{BASE_URL}}/api/widgets'
headers:
Authorization: 'Bearer {{Auth.token}}'
Content-Type: application/json
body:
name: 'ci-{{run_id}}'
depends_on: Auth
- request:
name: GetWidget
method: GET
url: '{{BASE_URL}}/api/widgets/{{CreateWidget.response.body.id}}'
headers:
Authorization: 'Bearer {{Auth.token}}'
depends_on: CreateWidget
Pattern 3: Conditional handling
# Handle idempotent create (201 vs 409)
- if:
name: CheckExists
condition: CreateItem.response.status == 409
then: GetExisting
else: UseCreated
depends_on: CreateItem
Pattern 4: Polling with a loop
# Poll a status endpoint until a job completes
- for:
name: PollStatus
iter_count: 10
loop: CheckJobStatus
depends_on: StartJob
The key principle: every dependency is visible in the YAML via depends_on and variable references. A reviewer can trace the data flow by reading the file.
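In Pattern 4, the loop target CheckJobStatus is defined like any other step. A sketch, where the job endpoint shape and the job_id field are assumptions:
# The polled step Pattern 4 targets — endpoint and job_id field are assumptions
- request:
    name: CheckJobStatus
    method: GET
    url: '{{BASE_URL}}/api/jobs/{{StartJob.response.body.job_id}}'
    headers:
      Authorization: 'Bearer {{Auth.token}}'
    depends_on: StartJob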
Git workflows for YAML tests
YAML tests are code. They belong in Git, get reviewed in PRs, and run in CI. Here is how to structure the workflow.
Repository layout
tests/
api/
auth-flow.yaml
crud-lifecycle.yaml
search-pagination.yaml
env/
staging.env.template # BASE_URL, placeholder credentials
production.env.template
.github/
workflows/
api-tests.yaml # CI workflow
PR workflow
- Branch: create or modify a YAML test file
- Review: the diff shows exactly what changed — new request, updated assertion, modified variable reference
- CI check: the test runs automatically on the PR and reports pass/fail
- Merge: the test becomes part of the main suite
Keeping diffs clean
YAML tests are only reviewable if the diffs are meaningful. Conventions that help:
- Stable step names: treat name: values as a public API — do not rename without reason
- Consistent key ordering: always name, method, url, headers, body, depends_on
- Sorted headers: alphabetical header keys prevent reorder-only diffs
- Quote ambiguous values: 'on', 'yes', '3.0' to prevent YAML type coercion surprises
- Block scalars for large JSON bodies so diffs are line-oriented
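Applied together, the conventions look like this in a single step (the endpoint is illustrative):
# Stable name, canonical key order, sorted headers, quoted ambiguous values,
# and a block scalar for the JSON body
- request:
    name: CreateReport
    method: POST
    url: '{{BASE_URL}}/api/reports'
    headers:
      Accept: application/json
      Content-Type: application/json
    body: |
      {
        "enabled": "on",
        "version": "3.0"
      }
    depends_on: Auth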
CI integration
name: API Tests
on:
pull_request:
branches: [main]
jobs:
api-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install DevTools CLI
run: curl -fsSL https://dev.tools/install.sh | sh
- name: Run YAML flows
run: devtools flow run tests/api/*.yaml --report junit
env:
BASE_URL: ${{ vars.STAGING_URL }}
SECRET_API_KEY: ${{ secrets.API_KEY }}
- name: Upload results
if: always()
uses: actions/upload-artifact@v4
with:
name: api-test-results
path: reports/
FAQ
How is YAML-native testing different from Postman or Bruno?
Postman stores tests in collection JSON with embedded JavaScript. Bruno uses a custom .bru format. YAML-native tests are plain YAML files that any editor, linter, or CI system already understands. The key difference is reviewability: YAML diffs are clean and meaningful in pull requests.
Do I need to write YAML by hand?
No. You can build flows visually in DevTools Studio, import a HAR file from browser traffic, or write YAML directly. The visual editor and YAML are two views of the same test. Edit either one.
Can YAML tests handle complex logic like retries and conditions?
Yes. YAML flows support if conditions, for loops, and for_each iteration as first-class step types. JS nodes handle any logic that requires computation. The declarative structure keeps the flow readable while JS nodes handle the edge cases.
How do I manage secrets in YAML test files?
Never commit secrets. Use environment references ({{#env:VAR_NAME}}) in your YAML to read from OS environment variables at runtime. In CI, inject secrets via your platform secret store (GitHub Actions secrets, GitLab CI variables, etc.).
What if my team already uses Postman collections?
You can migrate incrementally. Start by building new tests as YAML flows and running them alongside your existing Postman suite. DevTools provides a migration guide for converting collections to YAML flows.
Start defining API tests as YAML
YAML-native API testing is not a framework to learn. It is a format you already know applied to a problem you already have. Your tests become reviewable files in Git, not opaque exports from a GUI or boilerplate-heavy code in a test framework.
Start with one workflow: record it in the browser, convert to YAML, add assertions, and run it in CI. The test file is the test. Review it like code, version it like code, run it like code.
DevTools is built for this workflow: visual editor, HAR import, YAML export, CLI runner. Try it at dev.tools.