Get up and running with Cyberdesk in under 5 minutes. This guide assumes you’ve already created workflows in the Cyberdesk Dashboard.
1. Install the SDK

```bash
npm install cyberdesk
```
2. Initialize the client and create a run

```typescript
import { createCyberdeskClient } from 'cyberdesk';

// Initialize the client
const client = createCyberdeskClient('YOUR_API_KEY');

// Create a run for your workflow
const { data: run } = await client.runs.create({
  workflow_id: 'your-workflow-id',
  machine_id: 'your-machine-id',
  input_values: {
    patient_id: '12345',
    patient_first_name: 'John',
    patient_last_name: 'Doe'
  }
});

// Poll until the run reaches a terminal state
let latestRun = run;
while (latestRun.status === 'scheduling' || latestRun.status === 'running') {
  await new Promise(resolve => setTimeout(resolve, 5000)); // Wait 5 seconds
  const { data: updatedRun } = await client.runs.get(run.id);
  latestRun = updatedRun;
}

// Get the output data
if (latestRun.status === 'success') {
  console.log('Patient data:', latestRun.output_data);
} else {
  console.error('Run failed:', latestRun.error?.join(', '));
}
```
We recommend creating and managing workflows through the Cyberdesk Dashboard. The dashboard editor supports rich, multimodal prompts — you can add screenshots or UI snippets directly into your prompt to guide the agent. The SDK is optimized for executing runs against your existing workflows.
If your workflow prompt references sensitive variables using the {$variable} syntax (for example, {$password}), you can pass those values separately via sensitive_input_values.
Sensitive inputs are stored in a secure third‑party secret vault (Basis Theory) only for the duration of the run. They are not logged in Cyberdesk, and they are not sent to any LLMs. The values are only resolved at the last moment during actual computer actions (e.g., when typing). After the run completes, these sensitive values are deleted from the vault. On the dashboard, sensitive inputs are never displayed and will not be prefilled when repeating a run.
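For example, a minimal sketch of passing a `{$password}` value separately. The workflow id and the field names inside `input_values` are placeholders:

```typescript
// Assumes a workflow whose prompt contains {$password}.
// 'login-workflow-id' is a hypothetical workflow id.
const createPayload = {
  workflow_id: 'login-workflow-id',
  // Regular inputs: visible in run history and the dashboard
  input_values: { username: 'alice' },
  // Sensitive inputs: vaulted for the run's duration, never logged or sent to LLMs
  sensitive_input_values: { password: 'example-secret' }
};

// With a real client: const { data: run } = await client.runs.create(createPayload);
```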
You can specify pool requirements when creating a run. This ensures your run is executed on a machine that belongs to ALL specified pools. This is especially useful for:
Running workflows on customer-specific machines
Requiring machines with specific software installed
Organizing machines by location or capability
```typescript
// Get pool IDs (typically from your configuration or database)
const customerPoolId = 'pool-uuid-1'; // e.g., "Customer A" pool
const excelPoolId = 'pool-uuid-2';    // e.g., "Has Excel" pool

const { data: run, error } = await client.runs.create({
  workflow_id: 'workflow-uuid',
  // Machine must be in BOTH pools (intersection, not union)
  pool_ids: [customerPoolId, excelPoolId],
  input_values: {
    patient_id: '12345',
    patient_first_name: 'John',
    patient_last_name: 'Doe'
  }
});

if (error) {
  console.error('Failed to create run:', error);
} else {
  console.log('Run created:', run.id);
  console.log('Will execute on machine in pools:', [customerPoolId, excelPoolId]);
}
```
Pool Matching Logic: When you specify multiple pools, Cyberdesk will only select machines that belong to ALL specified pools (intersection). For example, if you specify ["Customer A", "Has Excel"], only machines that are in both pools will be considered.
If you provide a machine_id when creating a run, pool_ids are ignored. Cyberdesk will only attempt the specified machine; if it’s busy or unavailable, the run will wait until that machine is free (no fallback to other machines or pools).
Creating and Managing Pools: While you can manage pools via the SDK, we recommend using the Cyberdesk Dashboard for a more intuitive experience:
Navigate to any machine in the dashboard
Click on the machine to view its details
Add the machine to existing pools or create new pools
Assign multiple pools to organize machines by customer, capability, or location
Common pool strategies:
By Customer: “Customer A”, “Customer B”, etc.
By Software: “Has Excel”, “Has Chrome”, “Has Epic EHR”
By Environment: “Production”, “Staging”, “Development”
Return only selected fields using the fields option. This avoids large payloads (like run_message_history) and speeds up responses.
Base fields always included: id, workflow_id, machine_id, status, created_at.
Add more by passing the fields array.
```typescript
// Minimal (base fields only)
const { data: minimal } = await client.runs.list();

// Include inputs only
const { data: inputsOnly } = await client.runs.list({
  fields: ['input_values'] // or [RunField.input_values] if using the enum from the SDK
});

// Include a couple of specific fields
const { data: some } = await client.runs.list({
  fields: ['input_values', 'session_id']
});

// Include attachments but skip history for speed
const { data: noHistory } = await client.runs.list({
  fields: ['input_attachment_ids', 'output_attachment_ids']
});
```
Use retry when you want to re-run the exact same run id, clearing outputs and optionally providing fresh inputs/files.
Sensitive values: always re-send sensitive_input_values on retry; secrets are deleted after each run.
File inputs: if you set cleanup_imports_after_run, files are deleted from the remote machine after the run; include file_inputs again if you need a fresh copy or when no input attachments exist (providing file_inputs replaces prior input attachments).
Regular inputs: only send input_values if you want to change them; otherwise the previous ones are reused.
```typescript
// Replace inputs/files/sensitive values as needed; keeps the same run_id
const { data: retried, error } = await client.runs.retry('run-uuid', {
  // optional overrides
  input_values: { query: 'new query' },
  sensitive_input_values: { password: process.env.APP_PASSWORD! },
  file_inputs: [
    // providing file_inputs replaces prior input attachments
    { filename: 'input.pdf', content: base64Pdf }
  ],

  // session controls (all optional)
  reuse_session: true, // default: keep existing session
  // session_id: 'existing-session-uuid',
  // release_session_after: true,

  // machine selection (optional)
  // machine_id: 'specific-machine-uuid',
  // pool_ids: ['pool-a', 'pool-b'], // used only when no machine_id is set
});

if (error) {
  // Active runs (scheduling/running) cannot be retried
  console.error('Failed to retry run:', error);
}
```
Behavior:
Retry is allowed only for terminal runs: success, error, or cancelled.
Outputs, history, and output attachments are always cleared.
Prior input attachments are kept unless you provide file_inputs (then they are replaced).
If you provide sensitive_input_values, new secrets are created; otherwise sensitive aliases are cleared.
When a session_id is present and the session is busy, immediate assignment is skipped and the retried run queues.
When machine_id is provided, pool_ids are ignored.
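The terminal-state rule above can be sketched as a small guard before calling retry. The `client` parameter here is a structural stand-in for the SDK client, and `retryIfTerminal` is a hypothetical helper name, not part of the SDK:

```typescript
// Only terminal runs (success, error, cancelled) may be retried.
const TERMINAL_STATUSES = new Set(['success', 'error', 'cancelled']);

// Hypothetical helper: checks the run's status before calling retry.
async function retryIfTerminal(
  client: {
    runs: {
      get(id: string): Promise<{ data: { status: string } }>;
      retry(id: string, opts?: object): Promise<{ data?: unknown; error?: unknown }>;
    };
  },
  runId: string,
  opts?: object
): Promise<{ data?: unknown; error?: unknown }> {
  const { data: run } = await client.runs.get(runId);
  if (!TERMINAL_STATUSES.has(run.status)) {
    // Active runs (scheduling/running) cannot be retried
    return { error: `run ${runId} is ${run.status}; wait for a terminal state` };
  }
  return client.runs.retry(runId, opts);
}
```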
Get a signed URL that triggers automatic download when accessed. Perfect for web applications where you want to provide download links to users.
```typescript
// Get a download URL with custom expiration (default: 5 minutes)
const { data } = await client.run_attachments.getDownloadUrl(
  'attachment-uuid',
  600 // 10 minutes
);

if (data) {
  console.log(`Download URL: ${data.url}`);
  console.log(`Expires in: ${data.expires_in} seconds`);

  // You can use this URL in your web app
  // For example, in a React component:
  // <a href={data.url} download>Download File</a>
}
```
Here’s a full example of a workflow that processes a file.
Workflow Prompt: "Take the file at ~/CyberdeskTransfers/report.txt, add a summary to the end of it, and mark it for export."
Workflow Setting: includes_file_exports is set to true.
```typescript
import { createCyberdeskClient } from 'cyberdesk';

async function main() {
  const client = createCyberdeskClient('YOUR_API_KEY');

  // 1. Prepare and upload the input file
  const reportContent = "This is the initial report content.";
  const encodedContent = Buffer.from(reportContent).toString('base64');

  const { data: run } = await client.runs.create({
    workflow_id: "your-file-processing-workflow-id",
    file_inputs: [{
      filename: "report.txt",
      content: encodedContent
    }]
  });
  console.log(`Run started: ${run.id}`);

  // 2. Wait for the run to complete
  const completedRun = await waitForRunCompletion(client, run.id);
  console.log(`Run finished with status: ${completedRun.status}`);

  // 3. Find and download the output attachment
  if (completedRun.status === 'success') {
    const { data: outputAttachments } = await client.run_attachments.list({
      run_id: completedRun.id,
      attachment_type: 'output'
    });

    if (outputAttachments?.items?.length) {
      const processedReport = outputAttachments.items[0];

      // Option 1: Get a download URL (for web apps)
      const { data: urlData } = await client.run_attachments.getDownloadUrl(processedReport.id);
      if (urlData) {
        console.log(`Download URL: ${urlData.url}`);
        console.log(`Valid for: ${urlData.expires_in} seconds`);
      }

      // Option 2: Download the processed file directly
      const { data: fileData } = await client.run_attachments.download(processedReport.id);
      if (fileData) {
        const processedContent = new TextDecoder().decode(fileData);
        console.log("\n--- Processed Report ---");
        console.log(processedContent);
        console.log("------------------------");
      }
    } else {
      console.log("No output files were generated.");
    }
  }
}

// Assuming waitForRunCompletion is defined as in the previous examples
main();
```
This example demonstrates the complete lifecycle: uploading a file with a run, executing a workflow that modifies it, and then retrieving the processed file from the run’s output attachments.
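The `waitForRunCompletion` helper assumed in the example can be written as a simple polling loop. This is a minimal sketch using only the documented `runs.get` call and the run statuses shown earlier; the poll interval is a placeholder:

```typescript
type RunStatus = 'scheduling' | 'running' | 'success' | 'error' | 'cancelled';
interface RunLike { id: string; status: RunStatus; [key: string]: unknown }

// Poll runs.get until the run leaves the scheduling/running states.
async function waitForRunCompletion(
  client: { runs: { get(id: string): Promise<{ data: RunLike }> } },
  runId: string,
  pollMs = 5000
): Promise<RunLike> {
  for (;;) {
    const { data: run } = await client.runs.get(runId);
    if (run.status !== 'scheduling' && run.status !== 'running') {
      return run; // terminal: success, error, or cancelled
    }
    await new Promise(resolve => setTimeout(resolve, pollMs));
  }
}
```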
When creating multiple runs in bulk, you can also specify pool requirements. All runs will be distributed across machines that match the pool criteria.
```typescript
// Create 100 runs that require machines in specific pools
const { data: result, error } = await client.runs.bulkCreate({
  workflow_id: 'workflow-uuid',
  count: 100,
  pool_ids: ['customer-a-pool-id', 'excel-pool-id'],
  input_values: {
    task_type: 'data_extraction',
    priority: 'high'
  }
});

if (result) {
  console.log(`Created ${result.created_runs.length} runs`);
  console.log(`Failed: ${result.failed_count}`);
  // All runs will execute on machines in both specified pools
}
```
Bulk Run Assignment: When bulk creating runs with pool requirements, Cyberdesk attempts to assign each run to any available machine that meets the pool criteria. If no matching machine is available, runs remain in scheduling until one is free. No specific load balancing guarantees are made.
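Since runs from a bulk request may sit in `scheduling` until a matching machine frees up, you may want to monitor progress. A minimal sketch that summarizes statuses over a set of runs (the helper name is illustrative, not part of the SDK):

```typescript
// Count how many runs are in each status, e.g. to track bulk progress
// after refreshing each run via runs.get.
function summarizeStatuses(runs: { status: string }[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const run of runs) {
    counts[run.status] = (counts[run.status] ?? 0) + 1;
  }
  return counts;
}
```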
At its core, a session is a reservation of a single machine. While a session is active, that machine is dedicated to your session only — no unrelated runs will be scheduled onto it. This guarantees your multi‑step automations run back‑to‑back on the same desktop without interference.

What you get from a session:
Exclusive access to one machine for the session’s duration (strong scheduling guarantee)
Deterministic “step 1 → step 2 → …” behavior with no opportunistic interleaving
Chains are a convenient way to create multiple runs that execute back‑to‑back in the same session. Instead of manually creating individual runs and managing their sequencing, you can define all your workflow steps upfront and let Cyberdesk handle the session management and execution order.
EHR workflows: Log into Epic, navigate to a specific patient, extract their data, then upload documents to their chart — all with no interruptions from other miscellaneous runs.
Financial reporting: Export monthly reports from your ERP system, transform the data in Excel, then re‑import the processed results — all back‑to‑back without interference.
Document processing: Download files from a web portal, process them with a local application, then upload the results back — ensuring no other runs interfere with your workflow.
Once you have multiple workflows running in the same session, you’ll often want to pass outputs from earlier steps as inputs to later ones. Refs make this seamless — simply reference a previous step’s output using a JSON object:
```json
{ "$ref": "step1.outputs.result" }
```
The SDK type for this shape is RefValue (exported), but a plain object with a top‑level $ref string also works. The path on the right points to a prior step’s output field.
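A sketch of that shape in TypeScript; `isRefValue` is an illustrative helper for telling refs apart from literal values, not part of the SDK:

```typescript
// The documented ref shape: a plain object with a top-level $ref string.
type RefValue = { $ref: string };

// Illustrative type guard to distinguish refs from literal input values.
function isRefValue(value: unknown): value is RefValue {
  return typeof value === 'object' && value !== null &&
    typeof (value as { $ref?: unknown }).$ref === 'string';
}
```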
Ad‑hoc sessions without a chain (start with a single run, then add more)
You don’t have to use a chain to benefit from sessions. You can start a session with a single run and then submit additional runs that reference the same session_id.
```typescript
// 1) Start a brand new session using a normal run
const { data: warmup } = await client.runs.create({
  workflow_id: 'login-workflow-id',
  pool_ids: ['customer-a'],
  start_session: true, // Reserve a machine and begin a session
  input_values: { username: 'alice' }
});

// Get the session to reuse on the reserved machine
const sessionId = warmup.session_id!;

// 2) Run the next workflow in the same session (no other runs will interleave)
const { data: step2 } = await client.runs.create({
  workflow_id: 'search-workflow-id',
  session_id: sessionId, // Guarantees same machine & back‑to‑back scheduling
  input_values: {
    query: { $ref: 'step1.outputs.result' } // Refs are resolved server‑side within a session
  }
});

// 3) Final run that releases the session when complete
const { data: final } = await client.runs.create({
  workflow_id: 'cleanup-workflow-id',
  session_id: sessionId,
  release_session_after: true, // Release the session after this run completes
  input_values: { cleanup: 'true' }
});
```
This approach is ideal when the next steps depend on external conditions (e.g., decide at runtime which workflow to run next) or when you want to keep the session open for a while and feed runs one at a time.
Automatic session release with release_session_after
When creating individual runs in a session (not using chains), you can use release_session_after: true to automatically release the session when that run completes (regardless of success or failure):
```typescript
// This run will release the session after it completes
const { data: finalRun } = await client.runs.create({
  workflow_id: 'final-workflow-id',
  session_id: existingSessionId,
  release_session_after: true,
  input_values: { finalize: 'true' }
});
```
This is mainly a convenience: you don't have to separately create the session's final run and then end the session yourself. Note: the session is released when the run completes, whether it succeeds, fails, or is cancelled. This ensures the session doesn't remain locked if something goes wrong.
Login + Work (Exclusive): Reserve a session, log into a thick client once, then run 5 workflows in sequence. No other jobs will touch that machine mid‑sequence.
Search + Process with Refs: Step 1 finds a record; Step 2 uses {$ref: 'step1.outputs.id'} to open/process; Step 3 posts results. All on the same desktop.
Download → Transform → Export: Files created by Step 1 are visible to Steps 2/3 because the session keeps the same working directory.
If you provide a machine_id in a bulk run request, pool_ids are ignored for those runs. Each run will only target the specified machine; if it is busy, the run will wait for that machine rather than falling back to other machines or pools.
Important: While the SDK provides full CRUD operations for all Cyberdesk resources, we strongly recommend using the Cyberdesk Dashboard for managing these resources. The dashboard provides a more intuitive interface for:
Creating and editing workflows
Managing machines
Viewing connections
Analyzing trajectories
The SDK methods below are provided for advanced use cases and automation scenarios.
Pools
```typescript
import type { PoolCreate, PoolUpdate, MachinePoolUpdate } from 'cyberdesk';

// List pools
const { data: pools } = await client.pools.list();

// Create a pool
const { data: pool } = await client.pools.create({
  name: 'Customer A',
  description: 'All machines for Customer A'
});

// Get a pool (with optional machine list)
const { data: poolWithMachines } = await client.pools.get('pool-id', true);

// Update a pool
const { data: updated } = await client.pools.update('pool-id', {
  description: 'Updated description'
});

// Add machines to a pool
const { data: updatedPool } = await client.pools.addMachines('pool-id', {
  machine_ids: ['machine-1', 'machine-2']
});

// Remove machines from a pool
await client.pools.removeMachines('pool-id', {
  machine_ids: ['machine-1']
});

// Get pools for a machine
const { data: machinePools } = await client.machines.getPools('machine-id');

// Update a machine's pools
const { data: machine } = await client.machines.updatePools('machine-id', {
  pool_ids: ['pool-1', 'pool-2', 'pool-3']
});

// Delete a pool
await client.pools.delete('pool-id');
```
Machines
```typescript
// List machines
const { data: machines } = await client.machines.list();

// Create a machine
const { data: machine } = await client.machines.create({
  name: 'Epic EHR Machine',
  description: 'Production Epic environment'
});

// Get a machine
const { data: fetched } = await client.machines.get('machine-id');

// Update a machine
const { data: updated } = await client.machines.update('machine-id', {
  name: 'Updated Name'
});

// Delete a machine
await client.machines.delete('machine-id');
```
Workflows
```typescript
// List workflows
const { data: workflows } = await client.workflows.list();

// Create a workflow
const { data: workflow } = await client.workflows.create({
  name: 'Patient Data Extraction',
  description: 'Extracts patient demographics and medications',
  main_prompt: 'Navigate to patient chart and extract data'
});

// Get a workflow
const { data: fetched } = await client.workflows.get('workflow-id');

// Update a workflow
const { data: updated } = await client.workflows.update('workflow-id', {
  description: 'Updated description'
});

// Delete a workflow
await client.workflows.delete('workflow-id');
```