Reserve machines for multi-step workflows with exclusive access
At its core, a session is a reservation of a single machine. While a session is active, that machine is dedicated to your session only — no unrelated runs will be scheduled onto it. This guarantees your multi-step automations run back-to-back on the same desktop without interference.
Sessions are essential when your automation requires multiple steps that must happen on the same machine without interruption:
EHR workflows: Log into Epic, navigate to a specific patient, extract their data, then upload documents to their chart — all with no interruptions from other runs
Financial reporting: Export monthly reports from your ERP system, transform the data in Excel, then re-import the processed results
Document processing: Download files from a web portal, process them with a local application, then upload the results back
Any multi-step workflow: Where state on the desktop (open applications, logged-in sessions, temporary files) must persist between steps
Chains are a convenient way to create multiple runs that execute back-to-back in the same session. Instead of manually creating individual runs and managing their sequencing, you can define all your workflow steps upfront and let Cyberdesk handle the session management and execution order.
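Conceptually, a chain is "run these steps in order on one reserved machine, collecting each step's outputs under its alias." A minimal in-process sketch of that semantics (pure simulation; the actual chain-creation API call is covered in the API reference, and the step shape here is made up):

```python
# Toy sketch of chain semantics: steps execute back-to-back in declaration
# order, and each step's outputs are stored under its alias for later use.
# (Simulation only; not the Cyberdesk SDK.)

def run_chain(steps: list[dict]) -> dict[str, dict]:
    outputs_by_alias: dict[str, dict] = {}
    for step in steps:  # strict declaration order, no interleaving
        result = step["fn"](step.get("input_values", {}))
        outputs_by_alias[step["alias"]] = result
    return outputs_by_alias

outputs = run_chain([
    {"alias": "step1",
     "fn": lambda inp: {"result": f"hello {inp['name']}"},
     "input_values": {"name": "alice"}},
    {"alias": "step2",
     "fn": lambda inp: {"length": len(inp["text"])},
     "input_values": {"text": "recent orders"}},
])
assert outputs["step1"]["result"] == "hello alice"
```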
Once you have multiple workflows running in the same session, you’ll often want to pass outputs from earlier steps as inputs to later ones. Refs make this seamless — simply reference a previous step’s output:
```json
{ "$ref": "step1.outputs.result" }
```
The path on the right points to a prior step’s output field. Refs are resolved server-side within a session, so you don’t need to manually poll and extract values.
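Resolution amounts to walking the `alias.outputs.field` path against the producing step's stored outputs. A sketch of that lookup (illustrative only; this is not the server implementation):

```python
# Resolve {"$ref": "step1.outputs.result"} against stored step outputs.
# (Illustrative sketch of the lookup, not the server-side resolver.)

def resolve_ref(value, outputs_by_alias: dict[str, dict]):
    if not (isinstance(value, dict) and "$ref" in value):
        return value  # plain literal, pass through unchanged
    alias, section, field = value["$ref"].split(".", 2)
    if section != "outputs":
        raise ValueError(f"unsupported ref section: {section}")
    return outputs_by_alias[alias][field]

outputs = {"step1": {"result": "order-123"}}
assert resolve_ref({"$ref": "step1.outputs.result"}, outputs) == "order-123"
assert resolve_ref("literal", outputs) == "literal"
```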
If downstream workflows define input_schema, Cyberdesk validates ref usage before execution:
For refs that target earlier steps in the same chain request, Cyberdesk checks compatibility against the producing step’s output_schema.
For refs that target runs that already exist in the session, Cyberdesk validates alias/path/type compatibility up front using available source metadata.
If referenced runs are queued/scheduling and don’t have output yet, creation is allowed as long as compatibility checks pass.
At execution time, ref resolution is strict: if an upstream optional field is missing from output_data, any downstream required input that references it fails when the run executes. For full details on validation timing, error shapes, and $-prefixed sensitive keys, see Input Validation.
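The two phases above can be sketched as a creation-time compatibility check against the producing step's output_schema, plus a strict lookup at execution time that fails when the field is absent. This is a simplified model; real schemas are richer than a single type string.

```python
# Creation time: verify the ref'd field exists in the producer's output_schema
# and its declared type matches what the consumer expects.
# Execution time: strict lookup; an optional upstream field that was never
# produced fails the downstream run. (Simplified sketch, not the validator.)

def check_ref_compatibility(field: str, output_schema: dict,
                            expected_type: str) -> None:
    declared = output_schema.get("properties", {}).get(field)
    if declared is None:
        raise ValueError(f"ref targets unknown output field: {field}")
    if declared.get("type") != expected_type:
        raise ValueError(f"type mismatch for {field}")

def resolve_strict(field: str, output_data: dict):
    if field not in output_data:  # optional field was never produced
        raise KeyError(f"required input missing upstream: {field}")
    return output_data[field]

schema = {"properties": {"result": {"type": "string"}}}
check_ref_compatibility("result", schema, "string")  # passes at creation
assert resolve_strict("result", {"result": "ok"}) == "ok"
try:
    resolve_strict("result", {})  # upstream produced nothing: strict failure
    raise AssertionError("expected KeyError")
except KeyError:
    pass
```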
Reusing an existing session (by passing its session_id) keeps the same reserved machine and any state or files already present on it. For chain creation, provide either session_id or machine_id/pool_ids, not both.
You don’t have to use a chain to benefit from sessions. You can start a session with a single run and then submit additional runs that reference the same session_id. This is ideal when downstream steps depend on external conditions or when you want to decide at runtime which workflow to run next.
TypeScript

```typescript
// 1) Start a brand new session using a normal run
const { data: warmup } = await client.runs.create({
  workflow_id: 'login-workflow-id',
  pool_ids: ['customer-a'],
  start_session: true,    // Reserve a machine and begin a session
  session_alias: 'step1', // Required if later runs will use $ref to this run's outputs
  input_values: { username: 'alice' }
})

// Get the session to reuse on the reserved machine
const sessionId = warmup.session_id!

// 2) Run the next workflow in the same session (no other runs will interleave)
const { data: step2 } = await client.runs.create({
  workflow_id: 'search-workflow-id',
  session_id: sessionId, // Guarantees same machine & back-to-back scheduling
  input_values: { query: { $ref: 'step1.outputs.result' } }
})

// 3) Final run that releases the session when complete
const { data: final } = await client.runs.create({
  workflow_id: 'cleanup-workflow-id',
  session_id: sessionId,
  release_session_after: true, // Release the session after this run completes
  input_values: { cleanup: 'true' }
})
```

Python

```python
# 1) Start a session and warm up the desktop
warmup = client.runs.create_sync(RunCreate(
    workflow_id='login-workflow-id',
    pool_ids=['customer-a'],
    start_session=True,
    session_alias='step1',  # Required if later runs will use $ref to this run's outputs
    input_values={'username': 'alice'}
)).data
session_id = warmup.session_id

# 2) Add another run in the same session; scheduling remains exclusive
client.runs.create_sync(RunCreate(
    workflow_id='search-workflow-id',
    session_id=session_id,
    input_values={'query': {'$ref': 'step1.outputs.result'}}
))

# 3) Final run that releases the session when complete
client.runs.create_sync(RunCreate(
    workflow_id='cleanup-workflow-id',
    session_id=session_id,
    release_session_after=True,  # Release the session after this run completes
    input_values={'cleanup': 'true'}
))
```
When creating individual runs in a session (not using chains), you can use release_session_after: true to automatically release the session when that run completes (regardless of success or failure):
TypeScript

```typescript
// This run will release the session after it completes
const { data: finalRun } = await client.runs.create({
  workflow_id: 'final-workflow-id',
  session_id: existingSessionId,
  release_session_after: true,
  input_values: { finalize: 'true' }
})
```

Python

```python
# This run will release the session after it completes
final_run = client.runs.create_sync(RunCreate(
    workflow_id='final-workflow-id',
    session_id=existing_session_id,
    release_session_after=True,
    input_values={'finalize': 'true'}
))
```
This is a convenience: you don't have to separately create the final run and then end the session yourself.
The session is released when the run completes, whether it succeeds, fails, or is cancelled. This ensures the session doesn’t remain locked if something goes wrong.
The release_session_after field on a run indicates whether this run released the session. This is useful for webhook consumers who need to know when all runs in a session are complete.

How it works:
When you explicitly set release_session_after: true on a run, that field is stored
When using chains with keep_session_after_completion: false (the default), the last run automatically gets release_session_after: true
If a run errors or is cancelled and causes the session to be released, release_session_after is set to true on that run
For webhook endpoint setup, signature verification, retries, and local testing, see Webhooks Quickstart. The examples below assume you have already verified a run_complete payload and want to treat a successful releasing run as the “session is done” signal.
If you want to wait for a session to finish successfully and then gather every run’s output_data, use this pattern:
run_complete is the wait signal
continue only when run.status === "success"
continue only when run.release_session_after === true
then list all runs in run.session_id and aggregate their outputs
```python
import os

from cyberdesk import CyberdeskClient, RunField, RunListSortMode
from openapi_client.cyberdesk_cloud_client.models.run_response import RunResponse
from openapi_client.cyberdesk_cloud_client.types import Unset

client = CyberdeskClient(os.environ["CYBERDESK_API_KEY"])


def serialize_output_data(output_data: object):
    if isinstance(output_data, Unset):
        return None
    to_dict = getattr(output_data, "to_dict", None)
    if callable(to_dict):
        return to_dict()
    return output_data


async def list_all_session_runs(session_id: str) -> list[RunResponse]:
    all_runs: list[RunResponse] = []
    skip = 0
    limit = 100
    while True:
        response = await client.runs.list(
            session_id=session_id,
            skip=skip,
            limit=limit,
            sort_mode=RunListSortMode.CREATED_AT_DESC,
            fields=[RunField.OUTPUT_DATA],
        )
        if response.error:
            raise RuntimeError(f"Failed to list session runs: {response.error}")
        page = response.data.items if response.data else []
        all_runs.extend(page)
        if len(page) < limit:
            return all_runs
        skip += limit


async def on_run_complete(run: RunResponse):
    if run.status != "success":
        return
    if run.release_session_after is not True:
        return
    if not run.session_id:
        return
    session_runs = await list_all_session_runs(str(run.session_id))
    all_run_output_data = [
        {
            "run_id": str(session_run.id),
            "output_data": serialize_output_data(session_run.output_data),
        }
        for session_run in session_runs
    ]
    print({
        "session_id": str(run.session_id),
        "all_run_output_data": all_run_output_data,
    })


# After webhook verification:
# run: RunResponse = cast(RunResponse, evt.run)
# await on_run_complete(run)
```
Use release_session_after to trigger downstream processing only after all runs in a session are complete — for example, aggregating results, sending notifications, or kicking off the next stage of your pipeline. If you also want to handle failed or cancelled sessions, remove the run.status == "success" guard and branch on the final status instead.
If you provide a machine_id when creating a chain or run, pool_ids are ignored. Cyberdesk will only attempt the specified machine; if it’s busy or unavailable, the run will wait until that machine is free (no fallback to other machines or pools).
Best practice: Use pool_ids for flexibility — Cyberdesk will pick any available machine that matches all specified pools. Use machine_id only when you specifically need a particular machine (e.g., it has unique software or state).