At its core, a session is a reservation of a single machine. While a session is active, that machine is dedicated to your session only — no unrelated runs will be scheduled onto it. This guarantees your multi-step automations run back-to-back on the same desktop without interference.
What you get from a session
- Exclusive access to one machine for the session’s duration (strong scheduling guarantee)
- Deterministic sequencing: “step 1 → step 2 → …” behavior with no opportunistic interleaving
- Shared state: Files and desktop state persist across runs in the same session
When to use sessions
Sessions are essential when your automation requires multiple steps that must happen on the same machine without interruption:
- EHR workflows: Log into Epic, navigate to a specific patient, extract their data, then upload documents to their chart — all with no interruptions from other runs
- Financial reporting: Export monthly reports from your ERP system, transform the data in Excel, then re-import the processed results
- Document processing: Download files from a web portal, process them with a local application, then upload the results back
- Any multi-step workflow: where desktop state (open applications, logged-in sessions, temporary files) must persist between steps
Chains: The easiest way to use sessions
Chains are a convenient way to create multiple runs that execute back-to-back in the same session. Instead of manually creating individual runs and managing their sequencing, you can define all your workflow steps upfront and let Cyberdesk handle the session management and execution order.
Start a new session with a chain
```typescript
import { createCyberdeskClient, type WorkflowChainCreate } from 'cyberdesk'

const client = createCyberdeskClient(process.env.CYBERDESK_API_KEY!)

const chain: WorkflowChainCreate = {
  // Optional shared inputs applied to steps whose workflows declare those variables
  shared_inputs: {
    search_query: 'red panda facts'
  },
  // Optional shared sensitive inputs available to all steps
  shared_sensitive_inputs: {
    api_key: 'shared-secret-key'
  },
  // Attach files once at the beginning of the chain (applied to the first run)
  shared_file_inputs: [
    // { filename: 'seed.txt', content: 'base64-...' }
  ],
  // Reserve a machine for the whole chain (either machine_id OR pool_ids)
  pool_ids: ['pool-with-chrome', 'customer-a'],
  keep_session_after_completion: false,
  steps: [
    {
      workflow_id: 'step-1-workflow-id',
      session_alias: 'step1',
      inputs: {
        topic: 'red panda'
      },
      sensitive_inputs: {
        username: 'user1',
        password: 'secret123'
      }
    },
    {
      workflow_id: 'step-2-workflow-id',
      session_alias: 'step2',
      inputs: {
        // Use the output of step1 as an input to step2
        search_query: { $ref: 'step1.outputs.result' }
      },
      sensitive_inputs: {
        security_token: 'step2-token'
      }
    }
  ]
}

const { data: chainResult, error } = await client.runs.chain(chain)
if (error) throw new Error(String(error))
console.log('Session:', chainResult.session_id)
console.log('Run IDs:', chainResult.run_ids)
```
```python
import os

from cyberdesk import CyberdeskClient, WorkflowChainCreate

client = CyberdeskClient(os.environ['CYBERDESK_API_KEY'])

chain = WorkflowChainCreate(
    # Optional shared inputs applied to steps whose workflows declare those variables
    shared_inputs={
        "search_query": "red panda facts"
    },
    # Optional shared sensitive inputs available to all steps
    shared_sensitive_inputs={
        "api_key": "shared-secret-key"
    },
    # Filter machines by pools; or use machine_id to target one machine
    pool_ids=["pool-with-chrome", "customer-a"],
    keep_session_after_completion=False,
    steps=[
        {
            "workflow_id": "step-1-workflow-id",
            "session_alias": "step1",
            "inputs": {
                "topic": "red panda"
            },
            # Step-specific sensitive inputs
            "sensitive_inputs": {
                "username": "user1",
                "password": "secret123"
            }
        },
        {
            "workflow_id": "step-2-workflow-id",
            "session_alias": "step2",
            "inputs": {
                # Use the output of step1 as an input to step2
                "search_query": {"$ref": "step1.outputs.result"}
            },
            "sensitive_inputs": {
                "security_token": "step2-token"
            }
        }
    ]
)

resp = client.runs.chain_sync(chain)
print("Session:", resp.data.session_id)
print("Run IDs:", resp.data.run_ids)
```
Key points:
- Provide machine_id to target a specific machine, or pool_ids to match any machine that belongs to all of the specified pools (intersection)
- The chain always runs in one reserved session. If you omit session_id, the API creates one and reserves a machine before step 1 starts
- shared_inputs are automatically filtered per workflow, so each step only receives the variables it actually declares
- shared_sensitive_inputs are available to all steps, while sensitive_inputs on individual steps provide step-specific sensitive values
- shared_file_inputs are attached to the first run in the chain
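Conceptually, the shared-input filtering described above behaves like the sketch below. The function name and data shapes are illustrative, not part of the SDK:

```python
def filter_shared_inputs(shared_inputs: dict, declared_variables: set) -> dict:
    """Keep only the shared inputs that a step's workflow actually declares."""
    return {k: v for k, v in shared_inputs.items() if k in declared_variables}

# A step whose workflow declares only 'search_query' never sees 'api_key'
step_inputs = filter_shared_inputs(
    {"search_query": "red panda facts", "api_key": "shared-secret-key"},
    {"search_query"},
)
```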
Passing data between steps with refs
Once you have multiple workflows running in the same session, you’ll often want to pass outputs from earlier steps as inputs to later ones. Refs make this seamless — simply reference a previous step’s output:
```json
{ "$ref": "step1.outputs.result" }
```
The path on the right points to a prior step’s output field. Refs are resolved server-side within a session, so you don’t need to manually poll and extract values.
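To make the path semantics concrete, here is a client-side sketch of the same lookup the server performs. resolve_ref is hypothetical; actual resolution happens server-side within the session:

```python
def resolve_ref(ref_path: str, step_results: dict):
    """Walk a dotted ref path like 'step1.outputs.result' through step results."""
    node = step_results
    for part in ref_path.split("."):
        node = node[part]
    return node

# step_results maps each step's session_alias to its run record
step_results = {"step1": {"outputs": {"result": "patient-42"}}}
value = resolve_ref("step1.outputs.result", step_results)
```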
Join an existing session
If you already have a reserved session (e.g., created by a prior chain), you can reuse it:
```typescript
const { data: chainResult } = await client.runs.chain({
  session_id: 'existing-session-uuid',
  steps: [
    { workflow_id: 'wf-a', session_alias: 'warmup' },
    { workflow_id: 'wf-b', session_alias: 'extract', inputs: { query: 'current patient' } },
  ]
})
```
```python
chain = WorkflowChainCreate(
    session_id="existing-session-uuid",
    steps=[
        {"workflow_id": "wf-a", "session_alias": "warmup"},
        {"workflow_id": "wf-b", "session_alias": "extract", "inputs": {"query": "current patient"}},
    ]
)
client.runs.chain_sync(chain)
```
This keeps the same reserved machine and any state/files already present on it. machine_id/pool_ids are ignored when session_id is provided.
Keep the session alive after the chain
If you want to leave the reservation active for a follow-up chain or ad-hoc steps:
```typescript
await client.runs.chain({
  pool_ids: ['customer-a'],
  keep_session_after_completion: true,
  steps: [ /* ... */ ]
})
```
```python
client.runs.chain_sync(WorkflowChainCreate(
    pool_ids=["customer-a"],
    keep_session_after_completion=True,
    steps=[ ... ]
))
```
Later, you can start a new chain with that session_id to continue from where you left off.
Ad-hoc sessions without a chain
You don’t have to use a chain to benefit from sessions. You can start a session with a single run and then submit additional runs that reference the same session_id. This is ideal when downstream steps depend on external conditions or when you want to decide at runtime which workflow to run next.
```typescript
// 1) Start a brand new session using a normal run
const { data: warmup } = await client.runs.create({
  workflow_id: 'login-workflow-id',
  pool_ids: ['customer-a'],
  start_session: true, // Reserve a machine and begin a session
  input_values: { username: 'alice' }
})

// Get the session ID to reuse for subsequent runs
const sessionId = warmup.session_id!

// 2) Run the next workflow in the same session (no other runs will interleave)
const { data: step2 } = await client.runs.create({
  workflow_id: 'search-workflow-id',
  session_id: sessionId, // Guarantees same machine & back-to-back scheduling
  input_values: { query: 'recent orders' }
})

// 3) Final run that releases the session when complete
const { data: final } = await client.runs.create({
  workflow_id: 'cleanup-workflow-id',
  session_id: sessionId,
  release_session_after: true, // Release the session after this run completes
  input_values: { cleanup: 'true' }
})
```
```python
from cyberdesk import RunCreate

# 1) Start a session and warm up the desktop
warmup = client.runs.create_sync(RunCreate(
    workflow_id='login-workflow-id',
    pool_ids=['customer-a'],
    start_session=True,
    input_values={'username': 'alice'}
)).data

session_id = warmup.session_id

# 2) Add another run in the same session — scheduling remains exclusive
client.runs.create_sync(RunCreate(
    workflow_id='search-workflow-id',
    session_id=session_id,
    input_values={'query': 'recent orders'}
))

# 3) Final run that releases the session when complete
client.runs.create_sync(RunCreate(
    workflow_id='cleanup-workflow-id',
    session_id=session_id,
    release_session_after=True,  # Release the session after this run completes
    input_values={'cleanup': 'true'}
))
```
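Because ad-hoc runs are submitted one at a time, you can branch on a previous run's output before choosing the next workflow. A minimal sketch of that decision, where the workflow IDs and the needs_review output field are made up for illustration:

```python
def choose_next_workflow(previous_output: dict) -> str:
    """Pick the next workflow ID based on what the previous run produced."""
    if previous_output.get("needs_review"):
        return "review-workflow-id"
    return "cleanup-workflow-id"

# Run the review workflow only when the prior step flagged it
next_id = choose_next_workflow({"needs_review": True})
```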
Automatic session release
When creating individual runs in a session (not using chains), you can use release_session_after: true to automatically release the session when that run completes (regardless of success or failure):
```typescript
// This run will release the session after it completes
const { data: finalRun } = await client.runs.create({
  workflow_id: 'final-workflow-id',
  session_id: existingSessionId,
  release_session_after: true,
  input_values: { finalize: 'true' }
})
```
```python
# This run will release the session after it completes
final_run = client.runs.create_sync(RunCreate(
    workflow_id='final-workflow-id',
    session_id=existing_session_id,
    release_session_after=True,
    input_values={'finalize': 'true'}
))
```
This is a convenience: you don’t have to create the session-ending run and then separately end the session yourself.
The session is released when the run completes, whether it succeeds, fails, or is cancelled. This ensures the session doesn’t remain locked if something goes wrong.
Polling chain runs
The chain API returns run_ids in creation order. You can poll them individually, or receive a webhook when any of those runs complete:
```typescript
const { data: chainRes } = await client.runs.chain(chain)

for (const runId of chainRes.run_ids) {
  const run = await waitForRunCompletion(client, runId)
  console.log(run.status, run.output_data)
}
```
```python
chain = client.runs.chain_sync(...).data

for run_id in chain.run_ids:
    completed = wait_for_run_completion_sync(client, run_id, 600)
    print(completed.status, completed.output_data)
```
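The wait_for_run_completion_sync helper used above isn’t defined by the snippet. One plausible implementation polls a single-run endpoint until a terminal state; note that the client.runs.get_sync call and the exact status names are assumptions about the SDK, not confirmed API surface:

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "cancelled"}  # assumed status values

def wait_for_run_completion_sync(client, run_id, timeout_seconds=600, poll_interval=2.0):
    """Poll a run until it reaches a terminal status, or raise on timeout."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        run = client.runs.get_sync(run_id).data  # assumed single-run getter
        if run.status in TERMINAL_STATUSES:
            return run
        time.sleep(poll_interval)
    raise TimeoutError(f"run {run_id} did not finish within {timeout_seconds}s")
```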
Real-world patterns
Login + Work (Exclusive)
Reserve a session, log into a thick client once, then run 5 workflows in sequence. No other jobs will touch that machine mid-sequence.
Search + Process with Refs
Step 1 finds a record; Step 2 uses {$ref: 'step1.outputs.id'} to open/process; Step 3 posts results. All on the same desktop.
Files created by Step 1 are visible to Steps 2/3 because the session keeps the same working directory.
Machine targeting
If you provide a machine_id when creating a chain or run, pool_ids are ignored. Cyberdesk will only attempt the specified machine; if it’s busy or unavailable, the run will wait until that machine is free (no fallback to other machines or pools).
Best practice: Use pool_ids for flexibility — Cyberdesk will pick any available machine that matches all specified pools. Use machine_id only when you specifically need a particular machine (e.g., it has unique software or state).
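The intersection rule for pool_ids can be pictured with a small sketch; eligible_machines and the pool/machine names here are illustrative, not part of the SDK:

```python
def eligible_machines(machines_by_pool: dict, required_pools: list) -> set:
    """A machine qualifies only if it belongs to every pool listed in pool_ids."""
    sets = [set(machines_by_pool.get(pool, ())) for pool in required_pools]
    return set.intersection(*sets) if sets else set()

pools = {
    "pool-with-chrome": {"m1", "m2", "m3"},
    "customer-a": {"m2", "m3", "m4"},
}
# Only machines present in BOTH pools are eligible
candidates = eligible_machines(pools, ["pool-with-chrome", "customer-a"])
```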
Next steps