Quick Start

Get up and running with Cyberdesk in under 5 minutes. This guide assumes you’ve already created workflows in the Cyberdesk Dashboard.

Step 1: Install the SDK

npm install cyberdesk

Step 2: Initialize the client and create a run

import { createCyberdeskClient } from 'cyberdesk';

// Initialize the client
const client = createCyberdeskClient('YOUR_API_KEY');

// Create a run for your workflow
const { data: run } = await client.runs.create({
  workflow_id: 'your-workflow-id',
  machine_id: 'your-machine-id',
  input_values: {
    patient_id: '12345',
    patient_first_name: 'John',
    patient_last_name: 'Doe'
  }
});

// Wait for the run to complete
let latestRun = run;
while (latestRun.status === 'scheduling' || latestRun.status === 'running') {
  await new Promise(resolve => setTimeout(resolve, 5000)); // Wait 5 seconds
  const { data: updatedRun } = await client.runs.get(run.id);
  latestRun = updatedRun;
}

// Get the output data
if (latestRun.status === 'success') {
  console.log('Patient data:', latestRun.output_data);
} else {
  console.error('Run failed:', latestRun.error?.join(', '));
}
We recommend creating and managing workflows through the Cyberdesk Dashboard. The dashboard editor supports rich, multimodal prompts — you can add screenshots or UI snippets directly into your prompt to guide the agent. The SDK is optimized for executing runs against your existing workflows.

Installation & Setup

Prerequisites

  • Node.js 14.0 or higher
  • TypeScript 4.0 or higher (for TypeScript projects)

Installation

npm install cyberdesk

TypeScript Configuration

The SDK includes TypeScript definitions out of the box. For the best experience, ensure your tsconfig.json includes:
tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}

Authentication

Creating a Client

import { createCyberdeskClient } from 'cyberdesk';

const client = createCyberdeskClient('YOUR_API_KEY');

Custom Base URL

For self-hosted or enterprise deployments:
const client = createCyberdeskClient('YOUR_API_KEY', 'https://api.your-domain.com');
Never hardcode API keys in your source code. Use environment variables:
const client = createCyberdeskClient(process.env.CYBERDESK_API_KEY!);

Working with Runs

Runs are the primary way to execute workflows in Cyberdesk. Here’s everything you need to know about managing runs through the SDK.

Creating a Run

const { data: run, error } = await client.runs.create({
  workflow_id: 'workflow-uuid',
  machine_id: 'machine-uuid',
  input_values: {
    // Your workflow-specific input data
    patient_id: '12345',
    patient_first_name: 'John',
    patient_last_name: 'Doe'
  }
});

if (error) {
  console.error('Failed to create run:', error);
} else {
  console.log('Run created:', run.id);
}

Creating a Run with Sensitive Input Values

If your workflow prompt references sensitive variables using the {$variable} syntax (for example, {$password}), you can pass those values separately via sensitive_input_values.
const { data: run, error } = await client.runs.create({
  workflow_id: 'workflow-uuid',
  machine_id: 'machine-uuid',
  input_values: {
    // non-sensitive inputs
    patient_id: '12345'
  },
  sensitive_input_values: {
    // sensitive inputs referenced in your prompt as {$password}
    password: 's3cr3tP@ss'
  }
});
Sensitive inputs are stored in a secure third‑party secret vault (Basis Theory) only for the duration of the run. They are not logged in Cyberdesk, and they are not sent to any LLMs. The values are only resolved at the last moment during actual computer actions (e.g., when typing). After the run completes, these sensitive values are deleted from the vault. On the dashboard, sensitive inputs are never displayed and will not be prefilled when repeating a run.
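As a mental model, the {$variable} substitution can be pictured as a simple template resolution. The helper below is purely illustrative and is not part of the SDK; in Cyberdesk the real substitution happens server-side at action time, and the values are never logged:

```typescript
// Hypothetical sketch of {$variable} resolution, for illustration only.
// In Cyberdesk the real substitution happens server-side, at the moment
// of the computer action, and the values are never logged.
function resolveSensitive(
  prompt: string,
  sensitive: Record<string, string>
): string {
  return prompt.replace(/\{\$(\w+)\}/g, (match: string, name: string) =>
    name in sensitive ? sensitive[name] : match
  );
}

// Unknown variables are left untouched rather than replaced with blanks.
const typed = resolveSensitive('Type {$password} and press Enter', {
  password: 's3cr3tP@ss'
});
// → 'Type s3cr3tP@ss and press Enter'
```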

Creating a Run with Machine Pools

You can specify pool requirements when creating a run. This ensures your run is executed on a machine that belongs to ALL specified pools. This is especially useful for:
  • Running workflows on customer-specific machines
  • Requiring machines with specific software installed
  • Organizing machines by location or capability
// Get pool IDs (typically from your configuration or database)
const customerPoolId = 'pool-uuid-1';  // e.g., "Customer A" pool
const excelPoolId = 'pool-uuid-2';     // e.g., "Has Excel" pool

const { data: run, error } = await client.runs.create({
  workflow_id: 'workflow-uuid',
  // Machine must be in BOTH pools (intersection, not union)
  pool_ids: [customerPoolId, excelPoolId],
  input_values: {
    patient_id: '12345',
    patient_first_name: 'John',
    patient_last_name: 'Doe'
  }
});

if (error) {
  console.error('Failed to create run:', error);
} else {
  console.log('Run created:', run.id);
  console.log('Will execute on machine in pools:', [customerPoolId, excelPoolId]);
}
Pool Matching Logic: When you specify multiple pools, Cyberdesk will only select machines that belong to ALL specified pools (intersection). For example, if you specify ["Customer A", "Has Excel"], only machines that are in both pools will be considered.
If you provide a machine_id when creating a run, pool_ids are ignored. Cyberdesk will only attempt the specified machine; if it’s busy or unavailable, the run will wait until that machine is free (no fallback to other machines or pools).
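The intersection rule can be sketched as a simple predicate (an illustrative helper, not part of the SDK; the actual scheduling happens server-side):

```typescript
// A machine qualifies only if it belongs to EVERY requested pool
// (intersection, not union). Illustrative only; Cyberdesk performs
// this matching server-side when scheduling runs.
function machineMatchesPools(
  machinePoolIds: string[],
  requiredPoolIds: string[]
): boolean {
  return requiredPoolIds.every(id => machinePoolIds.includes(id));
}

machineMatchesPools(['customer-a', 'has-excel'], ['customer-a', 'has-excel']); // → true
machineMatchesPools(['customer-a'], ['customer-a', 'has-excel']);             // → false
```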
Creating and Managing Pools: While you can manage pools via the SDK, we recommend using the Cyberdesk Dashboard for a more intuitive experience:
  1. Navigate to any machine in the dashboard
  2. Click on the machine to view its details
  3. Add the machine to existing pools or create new pools
  4. Assign multiple pools to organize machines by customer, capability, or location
Common pool strategies:
  • By Customer: “Customer A”, “Customer B”, etc.
  • By Software: “Has Excel”, “Has Chrome”, “Has Epic EHR”
  • By Environment: “Production”, “Staging”, “Development”
  • By Location: “US-East”, “EU-West”, etc.

Creating a Run with File Inputs

You can attach files to a run at creation. This is useful for workflows that need to process or manipulate files on the remote machine.
import { promises as fs } from 'fs';
import type { FileInput } from 'cyberdesk';

// Read a file and convert it to base64
const fileBuffer = await fs.readFile('path/to/your/file.txt');
const content = fileBuffer.toString('base64');

const fileInputs: FileInput[] = [
  {
    filename: 'file.txt',
    content: content,
    target_path: 'C:/Users/Default/Desktop/file.txt', // Optional
    cleanup_imports_after_run: true // Optional
  }
];

const { data: run, error } = await client.runs.create({
  workflow_id: 'workflow-uuid',
  file_inputs: fileInputs
});

if (error) {
  console.error('Failed to create run:', error);
} else {
  console.log('Run created with file attachment:', run.id);
}
filename (string, required)
The name of the file, including its extension.

content (string, required)
The base64-encoded content of the file.

target_path (string, optional)
The absolute path on the remote machine where the file should be saved. If not provided, it defaults to ~/CyberdeskTransfers/.

cleanup_imports_after_run (boolean, optional)
If true, the file will be deleted from the remote machine after the run completes (whether it succeeds or fails). Defaults to false.

Listing Runs

// List all runs
const { data: runs } = await client.runs.list();

// List with pagination
const { data: paginatedRuns } = await client.runs.list({
  skip: 0,
  limit: 20
});

// Filter by status
const { data: completedRuns } = await client.runs.list({
  status: 'success'
});

// Filter by workflow
const { data: workflowRuns } = await client.runs.list({
  workflow_id: 'workflow-uuid'
});

Faster lists with fields projection

Return only selected fields using the fields option. This avoids large payloads (like run_message_history) and speeds up responses.
  • Base fields always included: id, workflow_id, machine_id, status, created_at.
  • Add more by passing the fields array.
// Minimal (base fields only)
const { data: minimal } = await client.runs.list();

// Include inputs only
const { data: inputsOnly } = await client.runs.list({
  fields: ['input_values'] // or [RunField.input_values] if using enum from SDK
});

// Include a couple specific fields
const { data: some } = await client.runs.list({
  fields: ['input_values', 'session_id']
});

// Include attachments but skip history for speed
const { data: noHistory } = await client.runs.list({
  fields: ['input_attachment_ids', 'output_attachment_ids']
});

Getting a Specific Run

const { data: run, error } = await client.runs.get('run-uuid');

if (run) {
  console.log('Run status:', run.status);
  console.log('Output data:', run.output_data);
}

Updating a Run

Run updates are typically handled automatically by the Cyberdesk system. Manual updates are rarely needed.
const { data: updatedRun } = await client.runs.update('run-uuid', {
  status: 'cancelled'
});

Deleting a Run

const { error } = await client.runs.delete('run-uuid');

if (!error) {
  console.log('Run deleted successfully');
}

Retrying a Run (same run_id)

Use retry when you want to re-run the exact same run id, clearing outputs and optionally providing fresh inputs/files.
  • Sensitive values: always re-send sensitive_input_values on retry; secrets are deleted after each run.
  • File inputs: if you set cleanup_imports_after_run, files are deleted from the remote machine after the run. Include file_inputs again if you need a fresh copy or when no input attachments exist (providing file_inputs replaces prior input attachments).
  • Regular inputs: only send input_values if you want to change them; otherwise the previous ones are reused.
// Replace inputs/files/sensitive values as needed; keeps same run_id
const { data: retried, error } = await client.runs.retry('run-uuid', {
  // optional overrides
  input_values: { query: 'new query' },
  sensitive_input_values: { password: process.env.APP_PASSWORD! },
  file_inputs: [
    // providing file_inputs replaces prior input attachments
    { filename: 'input.pdf', content: base64Pdf }
  ],
  // session controls (all optional)
  reuse_session: true,          // default: keep existing session
  // session_id: 'existing-session-uuid',
  // release_session_after: true,
  // machine selection (optional)
  // machine_id: 'specific-machine-uuid',
  // pool_ids: ['pool-a', 'pool-b'], // used only when no machine_id is set
});

if (error) {
  // Active runs (scheduling/running) cannot be retried
  console.error('Failed to retry run:', error);
}
Behavior:
  • Retry is allowed only for terminal runs: success, error, or cancelled.
  • Outputs, history, and output attachments are always cleared.
  • Prior input attachments are kept unless you provide file_inputs (then they are replaced).
  • If you provide sensitive_input_values, new secrets are created; otherwise sensitive aliases are cleared.
  • When a session_id is present and the session is busy, immediate assignment is skipped and the retried run queues.
  • When machine_id is provided, pool_ids are ignored.

Polling for Run Completion

Here’s a robust pattern for waiting for runs to complete:
async function waitForRunCompletion(client: any, runId: string, timeoutMs = 300000) {
  const startTime = Date.now();
  const pollInterval = 5000; // 5 seconds

  while (Date.now() - startTime < timeoutMs) {
    const { data: run, error } = await client.runs.get(runId);
    
    if (error) {
      throw new Error(`Failed to get run status: ${error}`);
    }

    if (run.status === 'success') {
      return run;
    }
    
    if (run.status === 'error' || run.status === 'cancelled') {
      throw new Error(`Run ${run.status}: ${run.error?.join(', ') || 'Unknown error'}`);
    }

    await new Promise(resolve => setTimeout(resolve, pollInterval));
  }

  throw new Error('Run timed out');
}

// Usage
try {
  const completedRun = await waitForRunCompletion(client, run.id);
  console.log('Output:', completedRun.output_data);
} catch (error) {
  console.error('Run failed:', error);
}

Working with File Attachments

Manage files associated with your runs, such as input files uploaded at creation or output files generated by a workflow.

Listing Run Attachments

You can list all attachments for a specific run and filter them by type (input or output).
// List all attachments for a run
const { data: attachments } = await client.run_attachments.list({
  run_id: 'run-uuid'
});

// List only output attachments
const { data: outputFiles } = await client.run_attachments.list({
  run_id: 'run-uuid',
  attachment_type: 'output'
});

Downloading an Attachment

There are two ways to download attachments depending on your use case:

Method 1: Get a Download URL

Get a signed URL that triggers automatic download when accessed. Perfect for web applications where you want to provide download links to users.
// Get a download URL with custom expiration (default: 5 minutes)
const { data } = await client.run_attachments.getDownloadUrl(
  'attachment-uuid',
  600  // 10 minutes
);

if (data) {
  console.log(`Download URL: ${data.url}`);
  console.log(`Expires in: ${data.expires_in} seconds`);
  
  // You can use this URL in your web app
  // For example, in a React component:
  // <a href={data.url} download>Download File</a>
}

Method 2: Download Raw File Content

Download the file content directly. The SDK will return the raw data which you can then save to a file or process further.
import { promises as fs } from 'fs';

// Get the attachment metadata first
const { data: attachmentInfo } = await client.run_attachments.get('attachment-uuid');

if (attachmentInfo) {
  // Download the file content
  const { data: fileData, error } = await client.run_attachments.download(attachmentInfo.id);

  if (fileData) {
    // For Node.js: Save to file
    const buffer = Buffer.from(fileData);
    await fs.writeFile(attachmentInfo.filename, buffer);
    console.log(`Downloaded ${attachmentInfo.filename}`);
    
    // For browsers: Create a Blob and download
    // const blob = new Blob([fileData]);
    // const url = URL.createObjectURL(blob);
    // const a = document.createElement('a');
    // a.href = url;
    // a.download = attachmentInfo.filename;
    // a.click();
  }
}

Example: Upload, Process, and Download

Here’s a full example of a workflow that processes a file.
  1. Workflow Prompt: "Take the file at ~/CyberdeskTransfers/report.txt, add a summary to the end of it, and mark it for export."
  2. Workflow Setting: includes_file_exports is set to true.
import { createCyberdeskClient, FileInput } from 'cyberdesk';
import { promises as fs } from 'fs';

async function main() {
  const client = createCyberdeskClient('YOUR_API_KEY');

  // 1. Prepare and upload the input file
  const reportContent = "This is the initial report content.";
  const encodedContent = Buffer.from(reportContent).toString('base64');

  const { data: run } = await client.runs.create({
    workflow_id: "your-file-processing-workflow-id",
    file_inputs: [{ filename: "report.txt", content: encodedContent }]
  });
  console.log(`Run started: ${run.id}`);

  // 2. Wait for the run to complete
  const completedRun = await waitForRunCompletion(client, run.id);
  console.log(`Run finished with status: ${completedRun.status}`);

  // 3. Find and download the output attachment
  if (completedRun.status === 'success') {
    const { data: outputAttachments } = await client.run_attachments.list({
      run_id: completedRun.id,
      attachment_type: 'output'
    });
    
    if (outputAttachments?.items?.length) {
      const processedReport = outputAttachments.items[0];
      
      // Option 1: Get a download URL (for web apps)
      const { data: urlData } = await client.run_attachments.getDownloadUrl(processedReport.id);
      if (urlData) {
        console.log(`Download URL: ${urlData.url}`);
        console.log(`Valid for: ${urlData.expires_in} seconds`);
      }
      
      // Option 2: Download the processed file directly
      const { data: fileData } = await client.run_attachments.download(processedReport.id);
      
      if (fileData) {
        const processedContent = new TextDecoder().decode(fileData);
        console.log("\n--- Processed Report ---");
        console.log(processedContent);
        console.log("------------------------");
      }
    } else {
      console.log("No output files were generated.");
    }
  }
}

// Assuming waitForRunCompletion is defined as in the previous examples
main();
This example demonstrates the complete lifecycle: uploading a file with a run, executing a workflow that modifies it, and then retrieving the processed file from the run’s output attachments.

Bulk Creating Runs with Pools

When creating multiple runs in bulk, you can also specify pool requirements. All runs will be distributed across machines that match the pool criteria.
// Create 100 runs that require machines in specific pools
const { data: result, error } = await client.runs.bulkCreate({
  workflow_id: 'workflow-uuid',
  count: 100,
  pool_ids: ['customer-a-pool-id', 'excel-pool-id'],
  input_values: {
    task_type: 'data_extraction',
    priority: 'high'
  }
});

if (result) {
  console.log(`Created ${result.created_runs.length} runs`);
  console.log(`Failed: ${result.failed_count}`);
  // All runs will execute on machines in both specified pools
}
Bulk Run Assignment: When bulk creating runs with pool requirements, Cyberdesk attempts to assign each run to any available machine that meets the pool criteria. If no matching machine is available, runs remain in scheduling until one is free. No specific load balancing guarantees are made.
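When polling hundreds of bulk runs, it helps to cap how many status requests are in flight at once. A generic concurrency limiter like the one below (a plain helper, not part of the SDK) can wrap waitForRunCompletion or client.runs.get:

```typescript
// Run an async function over many items with at most `limit` in flight
// at once — useful for polling many run IDs without flooding the API.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // claim the next index, then await its work
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}

// e.g. wait on all bulk runs, 10 at a time (hypothetical usage):
// const completed = await mapWithConcurrency(result.created_runs, 10,
//   (r) => waitForRunCompletion(client, r.id));
```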

Sessions and Chained Runs

At its core, a session is a reservation of a single machine. While a session is active, that machine is dedicated to your session only — no unrelated runs will be scheduled onto it. This guarantees your multi‑step automations run back‑to‑back on the same desktop without interference. What you get from a session:
  • Exclusive access to one machine for the session’s duration (strong scheduling guarantee)
  • Deterministic “step 1 → step 2 → …” behavior with no opportunistic interleaving
Chains are a convenient way to create multiple runs that execute back‑to‑back in the same session. Instead of manually creating individual runs and managing their sequencing, you can define all your workflow steps upfront and let Cyberdesk handle the session management and execution order.

Real‑world cases that require sessions

  • EHR workflows: Log into Epic, navigate to a specific patient, extract their data, then upload documents to their chart — all with no interruptions from other miscellaneous runs.
  • Financial reporting: Export monthly reports from your ERP system, transform the data in Excel, then re‑import the processed results — all back‑to‑back without interference.
  • Document processing: Download files from a web portal, process them with a local application, then upload the results back — ensuring no other runs interfere with your workflow.

Passing data between steps with refs

Once you have multiple workflows running in the same session, you’ll often want to pass outputs from earlier steps as inputs to later ones. Refs make this seamless — simply reference a previous step’s output using a JSON object:
{ "$ref": "step1.outputs.result" }
The SDK type for this shape is RefValue (exported), but a plain object with a top‑level $ref string also works. The path on the right points to a prior step’s output field.
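Conceptually, a ref is resolved against the outputs of earlier steps by alias and field name. A rough sketch of that lookup (illustrative only; the real resolution happens server-side within the session, and this local RefValue type merely mirrors the exported one):

```typescript
type RefValue = { $ref: string };

// Resolve 'alias.outputs.field' against the outputs of prior steps.
// Illustrative only — Cyberdesk performs this resolution server-side.
function resolveRef(
  ref: RefValue,
  priorSteps: Record<string, { outputs: Record<string, unknown> }>
): unknown {
  const [alias, section, field] = ref.$ref.split('.');
  if (section !== 'outputs' || !alias || !field) {
    throw new Error(`Unsupported ref path: ${ref.$ref}`);
  }
  return priorSteps[alias]?.outputs[field];
}

resolveRef(
  { $ref: 'step1.outputs.result' },
  { step1: { outputs: { result: 'red panda facts' } } }
);
// → 'red panda facts'
```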

Start a new session and run a chain (best when you know the whole sequence)

import { createCyberdeskClient, type WorkflowChainCreate } from 'cyberdesk'

const client = createCyberdeskClient(process.env.CYBERDESK_API_KEY!)

const chain: WorkflowChainCreate = {
  // Optional shared inputs are applied only to steps whose workflows declare those variables
  shared_inputs: {
    search_query: 'red panda facts'
  },
  // Optional shared sensitive inputs available to all steps
  shared_sensitive_inputs: {
    api_key: 'shared-secret-key'
  },
  // Attach files once at the beginning of the chain (applied to the first run)
  shared_file_inputs: [
    // { filename: 'seed.txt', content: 'base64-...' }
  ],
  // Reserve a machine for the whole chain (either machine_id OR pool_ids)
  pool_ids: ['pool-with-chrome', 'customer-a'],
  keep_session_after_completion: false,
  steps: [
    {
      workflow_id: 'step-1-workflow-id',
      session_alias: 'step1',
      inputs: {
        topic: 'red panda',
      },
      sensitive_inputs: {
        username: 'user1',  // Step-specific sensitive input
        password: 'secret123'
      }
    },
    {
      workflow_id: 'step-2-workflow-id',
      session_alias: 'step2',
      inputs: {
        // Use output of step1 as an input to step2
        search_query: { $ref: 'step1.outputs.result' }
      },
      sensitive_inputs: {
        security_token: 'step2-token'  // Step-specific sensitive input
      }
    }
  ]
}

const { data: chainResult, error } = await client.runs.chain(chain)
if (error) throw new Error(String(error))

console.log('Session:', chainResult.session_id)
console.log('Run IDs:', chainResult.run_ids)
Notes:
  • Provide machine_id to target a specific machine, or pool_ids to let Cyberdesk choose any machine that belongs to all specified pools (intersection).
  • The chain always runs on one reserved session. If you omit session_id, the API creates one for you and reserves a machine before step 1 starts.
  • shared_inputs are automatically filtered per workflow so each step only receives the variables it actually declares.
  • shared_sensitive_inputs are available to all steps, while sensitive_inputs in individual steps provide step-specific sensitive values.
  • shared_file_inputs are attached to the first run in the chain.
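The per-step filtering of shared_inputs can be pictured as follows (an illustrative helper; Cyberdesk applies this filter server-side against each workflow's declared variables):

```typescript
// Each step receives only the shared variables its workflow declares.
// Sketch only — the real filtering happens server-side in Cyberdesk.
function filterSharedInputs(
  shared: Record<string, unknown>,
  declaredVariables: string[]
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(shared).filter(([name]) => declaredVariables.includes(name))
  );
}

filterSharedInputs(
  { search_query: 'red panda facts', locale: 'en-US' },
  ['search_query']
);
// → { search_query: 'red panda facts' }
```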

Join an existing session

If you already have a reserved session (e.g., created by a prior chain), you can reuse it:
const { data: chainResult } = await client.runs.chain({
  session_id: 'existing-session-uuid',
  steps: [
    { workflow_id: 'wf-a', session_alias: 'warmup' },
    { workflow_id: 'wf-b', session_alias: 'extract', inputs: { query: 'current patient' } },
  ]
})
This keeps the same reserved machine and any state/files already present on it. machine_id/pool_ids are ignored when session_id is provided.

Keep the session alive after the chain

If you want to leave the reservation active for a follow‑up chain or ad‑hoc steps:
await client.runs.chain({
  pool_ids: ['customer-a'],
  keep_session_after_completion: true,
  steps: [ /* ... */ ]
})
Later, you can start a new chain with that session_id to continue from where you left off.

Ad‑hoc sessions without a chain (start with a single run, then add more)

You don’t have to use a chain to benefit from sessions. You can start a session with a single run and then submit additional runs that reference the same session_id.
// 1) Start a brand new session using a normal run
const { data: warmup } = await client.runs.create({
  workflow_id: 'login-workflow-id',
  pool_ids: ['customer-a'],
  start_session: true,           // Reserve a machine and begin a session
  input_values: { username: 'alice' }
})

// Get the session to reuse and the reserved machine
const sessionId = warmup.session_id!

// 2) Run the next workflow in the same session (no other runs will interleave)
const { data: step2 } = await client.runs.create({
  workflow_id: 'search-workflow-id',
  session_id: sessionId,         // Guarantees same machine & back‑to‑back scheduling
  input_values: {
    query: { $ref: 'step1.outputs.result' } // Refs are resolved server‑side within a session
  }
})

// 3) Final run that releases the session when complete
const { data: final } = await client.runs.create({
  workflow_id: 'cleanup-workflow-id',
  session_id: sessionId,
  release_session_after: true,  // Release the session after this run completes
  input_values: { cleanup: 'true' }
})
This approach is ideal when the next steps depend on external conditions (e.g., decide at runtime which workflow to run next) or when you want to keep the session open for a while and feed runs one at a time.

Automatic session release with release_session_after

When creating individual runs in a session (not using chains), you can use release_session_after: true to automatically release the session when that run completes (regardless of success or failure):
// This run will release the session after it completes
const { data: finalRun } = await client.runs.create({
  workflow_id: 'final-workflow-id',
  session_id: existingSessionId,
  release_session_after: true,
  input_values: { finalize: 'true' }
})
This is mainly a convenience: you don’t have to create the session’s final run and then separately release the session. Note: the session is released when the run completes, whether it succeeds, fails, or is cancelled. This ensures the session doesn’t remain locked if something goes wrong.

Polling chain runs

The chain API returns run_ids in creation order; you can poll them individually, or receive a webhook when any of those runs completes.
const { data: chainRes } = await client.runs.chain(chain)
for (const runId of chainRes.run_ids) {
  const run = await waitForRunCompletion(client, runId)
  console.log(run.status, run.output_data)
}

Real‑world patterns

  • Login + Work (Exclusive): Reserve a session, log into a thick client once, then run 5 workflows in sequence. No other jobs will touch that machine mid‑sequence.
  • Search + Process with Refs: Step 1 finds a record; Step 2 uses {$ref: 'step1.outputs.id'} to open/process; Step 3 posts results. All on the same desktop.
  • Download → Transform → Export: Files created by Step 1 are visible to Steps 2/3 because the session keeps the same working directory.
If you provide a machine_id in a bulk run request, pool_ids are ignored for those runs. Each run will only target the specified machine; if it is busy, the run will wait for that machine rather than falling back to other machines or pools.

Real-World Example: Healthcare Integration

Here’s a complete example of retrieving patient data from an Epic EHR system using Cyberdesk:
import { createCyberdeskClient } from 'cyberdesk';

async function getPatientData(patientId: string, firstName: string, lastName: string) {
  const client = createCyberdeskClient(process.env.CYBERDESK_API_KEY!);

  try {
    // Create a run to fetch patient data
    const { data: run, error } = await client.runs.create({
      workflow_id: '550e8400-e29b-41d4-a716-446655440000',  // Your Epic workflow ID
      machine_id: '550e8400-e29b-41d4-a716-446655440001',   // Your Epic machine ID
      input_values: {
        patient_id: patientId,
        patient_first_name: firstName,
        patient_last_name: lastName
      }
    });

    if (error) {
      throw new Error(`Failed to create run: ${error}`);
    }

    console.log(`Fetching data for patient ${firstName} ${lastName} (${patientId})...`);

    // Wait for completion
    const completedRun = await waitForRunCompletion(client, run.id, 120000); // 2 minute timeout

    // Process the patient data
    const patientData = completedRun.output_data;
    
    return {
      patientId: patientId,
      demographics: patientData.demographics,
      medications: patientData.medications,
      vitals: patientData.recentVitals,
      lastUpdated: patientData.lastUpdated
    };

  } catch (error) {
    console.error('Error fetching patient data:', error);
    throw error;
  }
}

// Express.js route example
app.post('/api/patients/lookup', async (req, res) => {
  try {
    const { patient_id, first_name, last_name } = req.body;
    const patientData = await getPatientData(patient_id, first_name, last_name);
    res.json(patientData);
  } catch (error) {
    res.status(500).json({ error: 'Failed to fetch patient data' });
  }
});

Other SDK Resources

Important: While the SDK provides full CRUD operations for all Cyberdesk resources, we strongly recommend using the Cyberdesk Dashboard for managing these resources. The dashboard provides a more intuitive interface for:
  • Creating and editing workflows
  • Managing machines
  • Viewing connections
  • Analyzing trajectories
The SDK methods below are provided for advanced use cases and automation scenarios.
import type { PoolCreate, PoolUpdate, MachinePoolUpdate } from 'cyberdesk';

// List pools
const { data: pools } = await client.pools.list();

// Create a pool
const { data: pool } = await client.pools.create({
  name: 'Customer A',
  description: 'All machines for Customer A'
});

// Get a pool (with optional machine list)
const { data: poolWithMachines } = await client.pools.get('pool-id', true);

// Update a pool
const { data: updated } = await client.pools.update('pool-id', {
  description: 'Updated description'
});

// Add machines to a pool
const { data: updatedPool } = await client.pools.addMachines('pool-id', {
  machine_ids: ['machine-1', 'machine-2']
});

// Remove machines from a pool
await client.pools.removeMachines('pool-id', {
  machine_ids: ['machine-1']
});

// Get pools for a machine
const { data: machinePools } = await client.machines.getPools('machine-id');

// Update a machine's pools
const { data: machine } = await client.machines.updatePools('machine-id', {
  pool_ids: ['pool-1', 'pool-2', 'pool-3']
});

// Delete a pool
await client.pools.delete('pool-id');
// List machines
const { data: machines } = await client.machines.list();

// Create a machine
const { data: machine } = await client.machines.create({
  name: 'Epic EHR Machine',
  description: 'Production Epic environment'
});

// Get a machine
const { data: machine } = await client.machines.get('machine-id');

// Update a machine
const { data: updated } = await client.machines.update('machine-id', {
  name: 'Updated Name'
});

// Delete a machine
await client.machines.delete('machine-id');
// List workflows
const { data: workflows } = await client.workflows.list();

// Create a workflow
const { data: workflow } = await client.workflows.create({
  name: 'Patient Data Extraction',
  description: 'Extracts patient demographics and medications',
  main_prompt: 'Navigate to patient chart and extract data'
});

// Get a workflow
const { data: workflow } = await client.workflows.get('workflow-id');

// Update a workflow
const { data: updated } = await client.workflows.update('workflow-id', {
  description: 'Updated description'
});

// Delete a workflow
await client.workflows.delete('workflow-id');
// List connections
const { data: connections } = await client.connections.list();

// Create a connection
const { data: connection } = await client.connections.create({
  machine_id: 'machine-id'
});

// Filter by machine
const { data: machineConnections } = await client.connections.list({
  machine_id: 'machine-id',
  status: 'active'
});
// List trajectories
const { data: trajectories } = await client.trajectories.list();

// Get a trajectory
const { data: trajectory } = await client.trajectories.get('trajectory-id');

// Get latest trajectory for a workflow
const { data: latest } = await client.trajectories.getLatestForWorkflow('workflow-id');

// Create a trajectory
const { data: trajectory } = await client.trajectories.create({
  workflow_id: 'workflow-id',
  steps: []
});

// Update a trajectory
const { data: updated } = await client.trajectories.update('trajectory-id', {
  steps: [/* updated steps */]
});

// Delete a trajectory
await client.trajectories.delete('trajectory-id');

Error Handling

All SDK methods return an object with data and error properties:
const { data, error } = await client.runs.create({
  workflow_id: 'workflow-id',
  machine_id: 'machine-id'
});

if (error) {
  // Handle error
  console.error('Error details:', error);
} else {
  // Use data
  console.log('Run created:', data.id);
}

Common Error Types

ValidationError
object
Invalid input parameters
{
  error: {
    message: "Validation failed",
    details: {
      workflow_id: "Invalid UUID format"
    }
  }
}
AuthenticationError
object
Invalid or missing API key
{
  error: {
    message: "Authentication failed",
    status: 401
  }
}
RateLimitError
object
Too many requests
{
  error: {
    message: "Rate limit exceeded",
    status: 429,
    retryAfter: 60
  }
}
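Assuming error objects shaped like the examples above (a message, a status, and an optional retryAfter), a small helper can decide whether a failed call is worth retrying. This is a sketch against those assumed shapes, not an SDK API:

```typescript
// Assumed error shape, based on the examples above (not an SDK type).
interface SdkErrorLike {
  message: string;
  status?: number;
  retryAfter?: number; // seconds; present on rate-limit errors
}

// Decide whether to retry and how long to wait first:
// 429 → wait retryAfter; 5xx → short fixed wait; otherwise give up.
function retryDecision(error: SdkErrorLike): { retry: boolean; waitMs: number } {
  if (error.status === 429) {
    return { retry: true, waitMs: (error.retryAfter ?? 60) * 1000 };
  }
  if (error.status !== undefined && error.status >= 500) {
    return { retry: true, waitMs: 5000 };
  }
  return { retry: false, waitMs: 0 };
}

retryDecision({ message: 'Rate limit exceeded', status: 429, retryAfter: 60 });
// → { retry: true, waitMs: 60000 }
```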

TypeScript Types

The SDK exports all types for better IDE support:
import type {
  MachineResponse,
  PoolResponse,
  PoolCreate,
  PoolUpdate,
  MachinePoolUpdate,
  WorkflowResponse,
  RunResponse,
  RunStatus,
  MachineStatus,
  ConnectionStatus
} from 'cyberdesk';

// Use types in your code
function handleRun(run: RunResponse) {
  if (run.status === 'success') {
    // TypeScript knows output_data exists
    console.log(run.output_data);
  }
}

Best Practices

Use Environment Variables

Store API keys and workflow IDs in environment variables, never in code.
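A small guard that fails fast when a variable is missing (a plain Node helper, not part of the SDK):

```typescript
// Read a required environment variable, failing loudly if it is absent
// instead of passing `undefined` on to the SDK.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const client = createCyberdeskClient(requireEnv('CYBERDESK_API_KEY'));
```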

Implement Retry Logic

Add exponential backoff for transient failures and rate limits.
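A minimal sketch of that pattern (a generic helper, not an SDK feature):

```typescript
// Exponential backoff: the delay doubles per attempt, capped at maxMs.
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Retry an async call, sleeping with backoff between failed attempts.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
      }
    }
  }
  throw lastError;
}

// e.g. const { data } = await withRetry(() => client.runs.get(runId));
```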

Handle Timeouts

Set reasonable timeouts for run completion based on your workflow complexity.

Log Everything

Keep detailed logs of run IDs and statuses for debugging and audit trails.

Next Steps