Model configuration determines which AI models run your workflows. You can set a default model for the entire workflow and, for focused_action and extract_prompt operations, override the model per action directly in your prompts. The runtime applies defaults differently depending on the action:
  • focused_action inherits the workflow’s main agent model unless you override it
  • extract_prompt does not inherit the workflow’s main agent model; without an override, it uses Cyberdesk’s extraction default (currently Gemini 3 Pro Preview (Low))

Workflow-Level Model Configuration

When you create or edit a workflow in the dashboard, you can select which model the main agent should use.

Selecting a Model

  1. Open the Workflows page in your dashboard
  2. Click to create a new workflow or edit an existing one
  3. In the workflow editor, find the Model selector
  4. Choose from the available model configurations
  5. Save your workflow
If you don’t select a model, Cyberdesk uses the system default (currently Sonnet 4.6 with adaptive thinking at medium effort).

System Default Models

Cyberdesk provides pre-configured system defaults optimized for different tasks:
| Model | Provider | Best For |
| --- | --- | --- |
| Sonnet 4.6 (Medium) | Anthropic | Main agent (default) |
| Sonnet 4.6 | Anthropic | Fast non-thinking path |
| Sonnet 4.6 (Low) | Anthropic | Reduced thinking depth with high intelligence |
| Sonnet 4.6 (High) | Anthropic | Deeper adaptive reasoning |
| Gemini 3 Pro Preview (Low) | Google | extract_prompt default |
| GPT-5 Mini | OpenAI | Cache detection |
| Vertex Sonnet 4.5 (Thinking) | Google Vertex AI (Anthropic) | Fallback 1 when the primary fails |
| Bedrock Sonnet 4.5 (Thinking) | AWS Bedrock | Fallback 2 when the primary and Fallback 1 fail |

Per-Action Model Overrides

The most powerful feature of model configuration is the ability to specify a different model for individual actions directly in your workflow prompts. This works for:
  • focused_action — dynamic decisions and observations
  • extract_prompt — vision-based data extraction from screenshots

How to Specify a Model in Your Prompt

Use the model="Model Name" parameter in your prompt text:
Take a screenshot with extract_prompt="Extract all invoice data as JSON" and model="Sonnet 4.5"
Use focused_action with model="Sonnet 4 (Thinking)" to find and click on the patient 
whose name is {patient_name}
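
To make the syntax concrete, the model="..." parameter can be matched with a simple regular expression. This is purely illustrative — Cyberdesk's actual prompt parser is internal, and the regex below is an assumption about the pattern, not its implementation:

```python
import re

# Matches model="..." anywhere in a prompt; the quoted value is the model
# name. Illustrative sketch only, not Cyberdesk's real parser.
MODEL_OVERRIDE = re.compile(r'model="([^"]*)"')

prompt = (
    'Take a screenshot with extract_prompt="Extract all invoice data as JSON" '
    'and model="Sonnet 4.5"'
)

match = MODEL_OVERRIDE.search(prompt)
override = match.group(1) if match else None  # None -> use the workflow default
```

If no model="..." appears in the prompt, the action falls back to its default as described above.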

Using the Model Picker in the Prompt Editor

The prompt editor provides easy access to the model picker:
  1. Slash menu: Type / and select “Model Override” to insert model=""
  2. Tab autocomplete: Start typing model and press Tab to autocomplete
  3. Direct typing: Type model="" and place your cursor inside the quotes
Once your cursor is inside the model="" quotes, a dropdown appears showing all available models. Use arrow keys to navigate, and press Enter or Tab to select. Choosing System Default clears the per-action override.
Hover over a model in the dropdown to see its details, including whether it supports computer use, the provider, and configuration parameters.

Computer Use Models vs. Extraction Models

Important: For focused_action, prefer models marked as computer use models. The model picker indicates which models support computer use, and the editor warns if you pick one that is not marked for it. For extract_prompt (vision-based extraction), any configured vision-capable model can be used.
When you select a non-computer-use model, you’ll see a toast warning:
“This model isn’t a known computer use model. Only use this for screenshots with extract_prompt.”

Example: Hybrid Model Strategy

Use different models for different parts of your workflow:
Navigate to the invoice details page.

Use focused_action with model="Sonnet 4 (Thinking)" to verify the invoice 
status shows "Approved" before proceeding.

Take a screenshot with extract_prompt="Extract all line items as JSON: 
{item_name, quantity, unit_price, total}" and model="Sonnet 4.5" and process_async="batch"

Scroll down and take another screenshot with extract_prompt="Extract payment 
details and due date" and model="Sonnet 4.5" and process_async="batch"
This strategy allows you to:
  • Use a thinking model for complex decisions in focused_action
  • Use a faster model for bulk extraction with extract_prompt
  • Optimize for both accuracy and cost

Automatic Fallbacks

Cyberdesk automatically handles model failures with a fallback chain:
  1. Primary model fails (rate limit, timeout, etc.)
  2. Fallback 1 is attempted (currently Vertex Sonnet 4.5 (Thinking))
  3. Fallback 2 is attempted if Fallback 1 also fails (currently Bedrock Sonnet 4.5 (Thinking))
This ensures your workflows remain resilient even during provider outages.
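
The chain above can be sketched as a try-in-order loop. This is an illustration of the pattern only — the `call` interface and error handling are placeholders, not Cyberdesk's internal implementation:

```python
# Illustrative sketch of a fallback chain: try each model in order until
# one succeeds. The call interface is a placeholder, not Cyberdesk's API.
FALLBACK_CHAIN = [
    "Primary (workflow model)",
    "Vertex Sonnet 4.5 (Thinking)",   # Fallback 1
    "Bedrock Sonnet 4.5 (Thinking)",  # Fallback 2
]

def run_with_fallbacks(call, prompt):
    """Invoke call(model, prompt) against each model until one succeeds."""
    last_error = None
    for model in FALLBACK_CHAIN:
        try:
            return model, call(model, prompt)
        except Exception as exc:  # rate limit, timeout, provider outage, ...
            last_error = exc
    raise RuntimeError("all models in the fallback chain failed") from last_error

# Example: the primary fails (simulated timeout), so Fallback 1 handles it.
def flaky_call(model, prompt):
    if model == "Primary (workflow model)":
        raise TimeoutError("simulated provider timeout")
    return f"ok from {model}"

used, result = run_with_fallbacks(flaky_call, "click Submit")
# used == "Vertex Sonnet 4.5 (Thinking)"
```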

Custom Model Configurations

Want to use a specific model, provider, or configuration? The Cyberdesk team can set up custom model configurations for your organization.
Coming Soon: A self-service UI for creating custom model configurations is in development. In the meantime, contact the Cyberdesk team to request custom configurations.

What You Can Customize

  • Provider: Choose from any supported provider
  • Model: Select specific model versions
  • Temperature: Control response randomness
  • Max tokens: Set output length limits
  • Timeout: Configure request timeouts
  • API keys: Use your own provider API keys for billing and rate limits
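
For illustration, a custom configuration request covers fields like the following. This is a hypothetical shape to show the knobs involved, not Cyberdesk's actual schema:

```python
# Hypothetical example of the settings a custom model configuration covers.
# Field names and values are illustrative, not Cyberdesk's real schema.
custom_model_config = {
    "provider": "anthropic",               # any supported provider
    "model": "claude-sonnet-4-5",          # specific model version
    "temperature": 0.2,                    # response randomness
    "max_tokens": 4096,                    # output length limit
    "timeout_seconds": 120,                # request timeout
    "api_key": "<your-provider-api-key>",  # optional: your own key
}
```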

Requesting a Custom Configuration

Contact the Cyberdesk team with details about:
  • Which provider and model you want to use
  • Any specific parameters (temperature, max tokens, etc.)
  • Whether you’ll provide your own API key

Supported Providers

Cyberdesk uses LangChain’s init_chat_model under the hood, which means we can support virtually any model from any provider. This includes:

Anthropic

Claude models including Sonnet, Opus, and Haiku variants

OpenAI

GPT-4, GPT-5, and other OpenAI models

AWS Bedrock

Access models through AWS infrastructure

Google

Gemini models via Vertex AI or Google AI

Azure

Azure OpenAI and Azure AI services

And More

Groq, Mistral, Cohere, Together, and others
For the full list of supported providers and their capabilities, see the LangChain integrations documentation.
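
As a sketch of what init_chat_model enables (the model string below is an example, and a real call requires the langchain package, the matching provider integration, and an API key):

```python
# Sketch of LangChain's init_chat_model, which Cyberdesk uses internally.
# A "provider:model" string routes to the right integration package, and
# kwargs such as temperature are passed through to the model.
try:
    from langchain.chat_models import init_chat_model

    llm = init_chat_model("anthropic:claude-sonnet-4-5", temperature=0)
except Exception:
    # langchain / langchain-anthropic not installed, or no API key set
    llm = None
```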

Best Practices

Start with Defaults

System defaults are optimized for most use cases. Only customize if you have specific requirements.

Match Model to Task

Use computer-use models for focused_action, and consider faster/cheaper models for bulk extract_prompt operations.

Test Model Changes

When switching models, test your workflows thoroughly. Different models may behave differently on the same tasks.

Monitor Performance

Track run success rates after model changes. Some models may perform better on specific workflow types.

Quick Reference

| Use Case | Recommended Approach |
| --- | --- |
| Main workflow agent | Set at workflow level in dashboard |
| Dynamic decisions during navigation | focused_action with a computer-use model |
| Vision-based data extraction | extract_prompt (any vision model works) |
| Bulk extraction for output | extract_prompt with process_async and a faster model |
| Cost optimization | Override with a cheaper model for extraction tasks |

FAQ

Can I use my own API keys?
Yes. Custom model configurations can use your own provider API keys. This gives you control over billing and rate limits. Contact the Cyberdesk team to set this up. Note that this will most likely result in a change to your Cyberdesk plan.
What happens when providers release new models?
Cyberdesk monitors provider announcements and updates system defaults accordingly. For custom configurations, we’ll notify you in advance and help migrate to newer model versions.
Can I use different models within a single workflow?
Yes! Use the model="Model Name" parameter in your prompts to override the model for specific focused_action or extract_prompt operations. This is the recommended way to optimize for accuracy and cost.
Why does focused_action need a computer use model?
focused_action works best with models that understand computer use—clicking, typing, and navigating. The model picker shows which models support computer use, and the editor warns if you choose a model that is not marked for it. Non-computer-use models are still fine for extract_prompt, which only needs vision/extraction capability.
Which model should I choose?
Start with the system defaults. If you need more reasoning power for complex decisions, try a “Thinking” variant. For bulk extraction where speed matters, consider faster models like Sonnet without thinking. The model details panel in the picker shows each model’s characteristics.