Overview
Post-run Checks let you add workflow-level verification that runs after the main automation steps finish. They are useful when “the workflow clicked through the UI” is not enough and you also want Cyberdesk to confirm that the run actually produced:
- the file you expected
- the screenshot you expected
- the structured output you expected
- the semantic result you expected
Think of Post-run Checks as a confidence-boosting double-check, not as a replacement for good workflow instructions. The workflow still does the work. Post-run Checks verify the outcome.
Why They Exist
In many workflows, completion is not the same thing as correctness. Examples:
- A report download workflow may finish, but the PDF might never have been exported.
- A form-submission workflow may end on a confirmation page, but you may want a saved screenshot as proof.
- A data-extraction workflow may produce output_data, but you may still want to verify that the contents are complete and sensible.
Where They Live
Post-run Checks are defined on the workflow. That means:
- you configure them once on the workflow
- every new run of that workflow can inherit them
- the run stores a snapshot of the effective checks it started with
Changing a workflow later does not rewrite historical run results. Existing runs keep the post-run check snapshot they started with.
When They Run
The lifecycle is:
- The main workflow execution runs first.
- If the run has eligible Post-run Checks, the run enters running_checks, which appears in the UI as Running Checks.
- Cyberdesk executes the checks after the main run finishes.
- The final terminal status is decided only after those checks complete.
run_complete and other terminal-completion semantics now mean: the main execution finished, and any Post-run Checks finished too.
Standalone Runs vs Sessions and Chains
Cyberdesk handles machine/session ownership differently depending on the run shape:
- Standalone runs usually release the machine before Post-run Checks begin.
- Session and chain runs can keep the machine/session claimed until Post-run Checks finish.
What Post-run Checks Are Not
Post-run Checks are not a complete definition of workflow success. If you need a requirement where an agent should actively try to fulfill a condition when it is not met, define the successful action in your prompt and use a focused action to verify it, possibly setting a runtime boolean value (and output schema) to indicate success. Learn more at focused_action and Generating Output Data.
Our team is working on a complete form of success criteria, which will include during and post-run checks.
The Four Initial Check Types
Cyberdesk currently supports four initial Post-run Check types.
1. Attachment Exists
This check verifies that one or more expected run attachments exist. Use it for:
- exported PDFs
- CSVs
- generated files
- saved screenshots
- one-file-per-item loop workflows
2. Image Check
This check asks an AI model to inspect one or more image attachments against a natural-language rule. Use it for:
- confirmation screenshots
- chart screenshots
- receipts
- dashboards
- visual QA
Example prompts:
- “Verify the screenshot shows a successful payment confirmation and a visible confirmation number.”
- “Verify the saved chart shows the last 30 days and includes all four vital signs.”
3. Output Data Passes Schema Validation
This is the auto-managed Post-run Check. If your workflow has an output schema, Cyberdesk automatically keeps this check in sync; you do not author it manually as a separate custom check. It verifies that the final structured output_data conforms to the workflow’s output schema.
This is the best way to ensure:
- required fields exist
- data shape is valid
- types line up with your schema
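Conceptually, this check amounts to validating output_data against a JSON Schema. As an illustration only, here is a minimal hand-rolled sketch in Python; the field names, schema shape, and helper function are hypothetical and are not Cyberdesk's actual validator:

```python
# Illustrative only: Cyberdesk runs schema validation for you when the
# workflow has an output schema. The fields below are made-up examples.
def validate_required_types(data: dict, required: dict) -> list[str]:
    """Return a list of problems: missing fields or wrong types."""
    problems = []
    for field, expected_type in required.items():
        if field not in data:
            problems.append(f"missing required field: {field}")
        elif not isinstance(data[field], expected_type):
            problems.append(f"wrong type for {field}: {type(data[field]).__name__}")
    return problems

output_data = {"invoice_number": "INV-1042", "total": 129.5}
required = {"invoice_number": str, "invoice_date": str, "total": float}

print(validate_required_types(output_data, required))
# one problem: invoice_date is missing
```

A real output schema can express much more (nested objects, enums, formats), but the pass/fail idea is the same: the structured output either conforms or the check fails.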
4. Output Data Check
This check asks an AI model to evaluate the final structured output_data against a natural-language rule.
Use it when schema validation alone is not enough.
Examples:
- “Verify that every extracted invoice line item has a positive amount.”
- “Verify the structured output clearly identifies a patient MRN and appointment date.”
- “Verify the extracted order looks complete and not partially parsed.”
Output-data agentic checks are most useful when you care about semantic correctness, not just shape correctness.
Attachment Targeting Modes
The two attachment-based checks, Attachment Exists and Image Check, support three targeting modes.
Exact Filenames
Use this when you know the exact attachment names ahead of time. Examples:
- invoice.pdf
- confirmation.png
- daily_report.csv
Good for:
- stable filenames
- deterministic exports
- named screenshots
Exact Mode Behavior
- You provide one or more filenames.
- Filenames should include file extensions.
- At runtime, Cyberdesk looks for those exact attachment filenames.
- If a required attachment is missing, the check fails.
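The exact mode reduces to a set-membership test over the run's attachment names. A minimal sketch, with hypothetical filenames (not Cyberdesk internals):

```python
# Illustrative sketch of exact-filename matching.
def missing_attachments(required: list[str], actual: list[str]) -> list[str]:
    """Return required filenames absent from the run's attachments."""
    actual_set = set(actual)
    return [name for name in required if name not in actual_set]

run_attachments = ["invoice.pdf", "confirmation.png"]
missing = missing_attachments(["invoice.pdf", "daily_report.csv"], run_attachments)
print(missing)  # the check fails because daily_report.csv was never produced
```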
In the workflow editor, Cyberdesk may suggest exact filenames by scanning your prompt for known attachment-producing patterns such as save_screenshot_as_run_attachment and mark_file_for_export. These are suggestions only. You can always add or edit filenames manually.
Regex
Use this when the filename changes between runs, but follows a consistent pattern. Examples:
- ^invoice_.*\.pdf$
- ^receipt_[0-9]{8}\.png$
- ^export_.*\.csv$
Good for:
- timestamps
- generated IDs
- date-based filenames
- dynamic naming conventions
Regex Mode Behavior
- Matching is done against the attachment filename.
- Matching is full-string and case-sensitive.
- Zero matches fail the check.
- You can optionally specify an expected match count, which may be:
  - a literal integer
  - a variable/reference that resolves to an integer
  - a variable/reference that resolves to an array, in which case Cyberdesk uses the array length
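The behavior above can be sketched with Python's re.fullmatch, which gives the full-string, case-sensitive semantics described; the helper, filenames, and expected count are hypothetical:

```python
import re

# Illustrative sketch of regex targeting (not Cyberdesk internals).
def count_regex_matches(pattern: str, filenames: list[str]) -> int:
    """Count attachments whose full filename matches (case-sensitive)."""
    compiled = re.compile(pattern)
    return sum(1 for name in filenames if compiled.fullmatch(name))

attachments = ["receipt_20240101.png", "receipt_20240102.png", "notes.txt"]
matches = count_regex_matches(r"^receipt_[0-9]{8}\.png$", attachments)

expected_count = 2  # could also come from a variable, or an array's length
print(matches == expected_count and matches > 0)  # zero matches always fails
```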
Loop Items
Use this when the workflow should produce one attachment per loop item. Examples:
- receipt_{{loop_item.name}}.png
- claim_{{loop_item.claim_id}}.pdf
- summary_{{loop_item}}.csv
Good for:
- one-file-per-item workflows
- iterating over arrays
- exporting a file for each selected record
Loop Items Mode Behavior
You provide:
- loop_input
- loop_item_filename_template
loop_input semantics match the looping system:
- a JSON array means “iterate over this array”
- an integer n means “iterate over items 0..n-1”
See {{loop_item}} in Looping Tools for details on the template syntax.
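This mode effectively expands the filename template once per loop item and then requires each resulting attachment to exist. A rough sketch of that expansion, assuming simple string substitution for {{loop_item}} placeholders (not Cyberdesk's actual templating engine):

```python
# Illustrative expansion of a loop_item filename template.
def expand_loop_filenames(loop_input, template: str) -> list[str]:
    """Produce one expected filename per loop item."""
    # An integer n means "iterate over items 0..n-1", matching the looping system.
    items = range(loop_input) if isinstance(loop_input, int) else loop_input
    expected = []
    for item in items:
        name = template
        if isinstance(item, dict):
            for key, value in item.items():
                name = name.replace("{{loop_item." + key + "}}", str(value))
        else:
            name = name.replace("{{loop_item}}", str(item))
        expected.append(name)
    return expected

print(expand_loop_filenames([{"claim_id": "A17"}, {"claim_id": "B03"}],
                            "claim_{{loop_item.claim_id}}.pdf"))
# ['claim_A17.pdf', 'claim_B03.pdf']
```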
Producing Attachments That Checks Can Validate
Post-run Checks work on run attachments, not arbitrary machine state. That means your workflow should intentionally produce the attachments you want to verify.
For screenshots
Use save_screenshot_as_run_attachment.
Good for:
- confirmation pages
- receipts
- dashboards
- visual evidence
For files created or downloaded on the machine
Use mark_file_for_export.
Good for:
- PDFs
- CSVs
- generated documents
- downloaded exports
Writing Good Check Prompts
The AI-based checks are:
- Image Check
- Output Data Check
Good prompt characteristics
- State the pass condition clearly.
- Mention the evidence that matters.
- Say what should count as failure.
- Avoid asking the model to judge unrelated parts of the workflow.
Better Image Check Prompt
“Verify the screenshot shows a successful payment confirmation and a visible confirmation number. Fail if any error banner is visible.”
Better Output Data Check Prompt
“Verify that every extracted invoice line item has a positive amount and that no required field is empty.”
Weaker Prompts to Avoid
- “Check that everything worked.”
- “Make sure the run is good.”
How Statuses Work
When Post-run Checks exist, the run may enter Running Checks after the main workflow steps are done. That means:
- the automation steps may be finished
- the run is not terminal yet
- Cyberdesk is still verifying the outcome
Final status behavior
If the main execution succeeded:
- all checks pass → final run status is success
- one or more checks fail → final run ends on a failure path
- a check hits an infrastructure-level problem → final run ends on error
If the main execution already ended on error or task_failed:
- Cyberdesk can still record Post-run Check results for observability
- the final run status stays on that failure path
If the run is cancelled:
- cancelled before checks start → checks are skipped
- cancelled during Running Checks → unfinished checks are cancelled
Accessing Results in Code
Post-run Check results live on run.post_run_checks.
You can access them from:
- get_run
- run_complete webhook payloads
- list_runs when you request fields=post_run_checks
Keep in mind:
- run.post_run_checks is an array, not an object keyed by check name
- each item includes name, status, error_message, messages, and matched_filenames
- if you want to look checks up by name, keep those names stable and unique within the workflow
- the best place to react automatically is the run_complete webhook, because it fires only after Post-run Checks finish
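As a sketch, a run_complete webhook handler might filter run.post_run_checks like this in Python. The array and its item fields follow the docs above, but the status value "passed" and the surrounding payload shape are assumptions, not a documented contract:

```python
# Hedged sketch: inspect Post-run Check results from a run payload.
# The "passed" status string and the example payload are assumptions.
def failed_checks(run: dict) -> list[str]:
    """Return the names of Post-run Checks that did not pass."""
    checks = run.get("post_run_checks") or []  # an array, not a keyed object
    return [c["name"] for c in checks if c.get("status") != "passed"]

run = {
    "post_run_checks": [
        {"name": "invoice_pdf_exists", "status": "passed",
         "error_message": None, "messages": [], "matched_filenames": ["invoice.pdf"]},
        {"name": "confirmation_screenshot", "status": "failed",
         "error_message": "attachment not found", "messages": [], "matched_filenames": []},
    ]
}
print(failed_checks(run))  # ['confirmation_screenshot']
```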
If you are polling runs via list_runs, remember to request post_run_checks explicitly in the fields list. get_run and run_complete already include it.
Example End-to-End Pattern
Imagine a workflow that:
- downloads an invoice PDF
- saves a confirmation screenshot
- extracts structured invoice data
You might configure these Post-run Checks:
- Attachment Exists
  - exact filename: invoice.pdf
- Image Check
  - exact filename: invoice_confirmation.png
  - prompt: “Verify the screenshot shows a successful invoice export with no visible errors.”
- Output Data Passes Schema Validation
  - auto-managed because the workflow has an output schema
- Output Data Check
  - prompt: “Verify the structured output contains invoice number, invoice date, vendor name, and a positive total.”
This gives you:
- deterministic checks on concrete artifacts
- semantic checks on the final output
Best Practices
- Prefer exact filenames when names are stable.
- Use regex only when the variability is real and predictable.
- Use loop_items when the workflow intentionally produces one attachment per item.
- Pair schema validation with output data checks when you care about both shape and meaning.
- Keep AI check prompts narrow and specific.
- Treat Post-run Checks as verification, not as a substitute for good workflow instructions.
- Review check results in run details when tuning a workflow.
Related Docs
- Save Screenshot as Run Attachment: Learn how to create screenshot attachments that image checks can evaluate.
- Mark File for Export: Learn how to export files from the machine as run attachments.
- Looping Tools: Learn how start_loop, {{loop_item}}, and loop inputs work.
- Generating Output Data: Learn how output schemas and output_data are produced and validated.
- Declare Task Succeeded: Learn how early success detection during main execution differs from post-run verification.