OpenClaw API
Use the OpenClaw API to integrate agent operations into your workflows, automate tasks, and build custom tooling.
- RESTful API for agent lifecycle management.
- Webhook support for event-driven workflows.
- Official SDKs for Python and JavaScript.
Get started
Get your API key and explore the full reference documentation.
API overview
The OpenClaw API provides programmatic access to all platform capabilities, enabling integration with existing tools and automation of repetitive workflows. Every operation available through the dashboard is exposed through a consistent REST interface that follows modern API conventions. Authentication uses bearer tokens issued per workspace, with granular scopes that limit access to specific resource types. Rate limits are generous on standard plans, with burst capacity available on higher tiers to accommodate spiky integration workloads.
The base URL for all API requests is https://api.clawmesh.com/v1. All requests and responses use JSON encoding, and HTTPS is required for every call. The API is stateless, so every request must include the necessary authentication headers regardless of context. Errors follow a predictable structure with machine-readable codes and human-readable messages, simplifying debugging in CI pipelines and monitoring dashboards.
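To make these conventions concrete, here is a minimal sketch of building an authenticated request with Python's standard library. The `CLAWMESH_API_KEY` environment variable name is an assumption for illustration; use whatever variable your deployment defines.

```python
import os
import urllib.request

BASE_URL = "https://api.clawmesh.com/v1"

def build_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated JSON request against the OpenClaw API.

    Every call carries the bearer token, since the API is stateless.
    """
    return urllib.request.Request(
        BASE_URL + path,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )

# Read the token from the environment rather than hard-coding it.
token = os.environ.get("CLAWMESH_API_KEY", "demo-token")
req = build_request("/agents", token)
print(req.full_url)  # https://api.clawmesh.com/v1/agents
```

Sending the request is then a matter of `urllib.request.urlopen(req)` (or any HTTP client) and decoding the JSON response body.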
API design prioritizes predictability over flexibility. Resource URLs are shallow and predictable, HTTP methods carry their standard semantics, and response envelopes are consistent across all endpoints. Idempotency keys are supported on write operations, allowing safe retries without creating duplicate records. Pagination uses cursor-based navigation to handle large result sets without performance degradation.
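Cursor-based pagination can be sketched as a simple loop that follows the cursor until the server stops returning one. The envelope field names (`data`, `next_cursor`) are assumptions for illustration; check the response schemas in the reference documentation. The fetch function is injected so the loop can be shown with a stub instead of a live endpoint.

```python
from typing import Callable, Iterator, Optional

def paginate(fetch: Callable[[dict], dict],
             params: Optional[dict] = None) -> Iterator[dict]:
    """Walk a cursor-paginated endpoint, yielding items one at a time.

    `fetch` performs one HTTP GET with the given query params and
    returns the decoded JSON envelope.
    """
    params = dict(params or {})
    while True:
        page = fetch(params)
        yield from page["data"]
        cursor = page.get("next_cursor")
        if not cursor:
            break
        params["cursor"] = cursor

# Stub simulating two pages of results:
pages = {None:  {"data": [1, 2], "next_cursor": "abc"},
         "abc": {"data": [3],    "next_cursor": None}}
items = list(paginate(lambda p: pages[p.get("cursor")]))
print(items)  # [1, 2, 3]
```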
Authentication
Every API request requires a valid bearer token in the Authorization header. Tokens are generated from the dashboard under Settings > API Keys, where you can create keys with specific scopes such as agents:read, agents:write, tasks:read, or webhooks:manage. Treat tokens like passwords: rotate them regularly, store them in environment variables, and never embed them in source code repositories.
Scope-based access control follows the principle of least privilege. A monitoring script that only reads agent status needs only the agents:read scope. A CI pipeline that creates tasks for agents needs tasks:read and tasks:write. Keeping scopes narrow limits the blast radius of a compromised key. If a key is exposed, revoke it immediately from the dashboard and generate a replacement.
Token expiration is configurable at creation time. Short-lived tokens suit automated pipelines that run frequently, while long-lived tokens are practical for internal dashboards that operate continuously. Regardless of expiration settings, all tokens can be revoked manually at any time from the same dashboard panel.
Core endpoints
The agents endpoint group provides full lifecycle management for individual agents and agent fleets. List all agents in a workspace with GET /agents, filtering by status, label, or creation date. Create a new agent with POST /agents, passing a configuration object that specifies the model provider, skills to activate, and execution constraints. Read a single agent with GET /agents/{id}, update its configuration with PATCH /agents/{id}, and remove it with DELETE /agents/{id}.
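The lifecycle operations above map cleanly onto method/path pairs, which is worth capturing in one place when building a thin client. The configuration field names in the example payload (`provider`, `skills`) are illustrative assumptions, not the documented schema.

```python
def agent_request(action: str, agent_id: str = "", config: dict = None):
    """Map an agent lifecycle action onto (HTTP method, path, body)."""
    routes = {
        "list":   ("GET",    "/agents",             None),
        "create": ("POST",   "/agents",             config),
        "get":    ("GET",    f"/agents/{agent_id}", None),
        "update": ("PATCH",  f"/agents/{agent_id}", config),
        "delete": ("DELETE", f"/agents/{agent_id}", None),
    }
    return routes[action]

# Hypothetical create payload, for illustration only:
method, path, body = agent_request(
    "create", config={"provider": "anthropic", "skills": ["code-review"]})
print(method, path)  # POST /agents
```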
Task endpoints handle work item submission and status tracking. Submit a new task with POST /tasks, providing instructions and attaching any necessary context files. Poll task status with GET /tasks/{id}, which returns state transitions, output artifacts, and timing metrics. Tasks can be cancelled while pending or running with POST /tasks/{id}/cancel. Task history is retained for 30 days on standard plans, providing a sufficient window for audit trails and debugging.
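The submit-then-poll pattern looks roughly like the sketch below. The state names (`pending`, `running`, `completed`, `failed`, `cancelled`) are assumptions for illustration, and the status lookup is injected as a function so the loop can be demonstrated with a stub in place of GET /tasks/{id}.

```python
import time
from typing import Callable

TERMINAL_STATES = {"completed", "failed", "cancelled"}

def wait_for_task(get_status: Callable[[], str],
                  interval: float = 2.0,
                  max_polls: int = 100) -> str:
    """Poll a task until it reaches a terminal state.

    `get_status` wraps one GET /tasks/{id} call and returns the state.
    """
    for _ in range(max_polls):
        state = get_status()
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)
    raise TimeoutError("task did not finish within the polling budget")

# Stub: task transitions pending -> running -> completed.
states = iter(["pending", "running", "completed"])
print(wait_for_task(lambda: next(states), interval=0.0))  # completed
```

For production use, webhooks (described below) avoid polling entirely; this loop is mainly useful in scripts and CI jobs.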
File management endpoints (GET/POST /files, GET /files/{id}, DELETE /files/{id}) handle artifact storage for inputs and outputs that exceed the size limits of direct task payloads. Files up to 100 MB are supported, with automatic virus scanning for uploads. File references can be passed directly to task submissions, eliminating the need for Base64 encoding of large payloads.
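A common pattern is to inline small context directly in the task payload and route anything larger through POST /files, passing back the file reference. The inline cutoff, payload field names, and `upload` helper here are assumptions for illustration; only the 100 MB file limit comes from the documentation above.

```python
MAX_INLINE_BYTES = 1 * 1024 * 1024    # illustrative cutoff for inline context
MAX_FILE_BYTES = 100 * 1024 * 1024    # documented 100 MB upload limit

def attach(payload: dict, name: str, data: bytes, upload) -> dict:
    """Attach context to a task payload: small blobs inline, large
    blobs via POST /files, referenced by id (no Base64 needed).

    `upload` performs the HTTP upload and returns the new file id.
    """
    if len(data) > MAX_FILE_BYTES:
        raise ValueError("exceeds the 100 MB file limit")
    if len(data) <= MAX_INLINE_BYTES:
        payload.setdefault("context", {})[name] = data.decode("utf-8", "replace")
    else:
        payload.setdefault("file_ids", []).append(upload(name, data))
    return payload

task = attach({"instructions": "summarize"}, "notes.txt", b"hello",
              upload=lambda n, d: "file_123")
print(task["context"]["notes.txt"])  # hello
```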
The webhooks API allows external systems to receive real-time notifications when agent events occur. Register a webhook endpoint with POST /webhooks, specifying the event types to subscribe to (agent.created, task.completed, task.failed, task.output_received). Each delivery includes a signature header that can be verified against your webhook secret to confirm authenticity. Failed deliveries are retried with exponential backoff for up to 24 hours.
SDKs and client libraries
Official SDKs are available for Python and JavaScript/TypeScript, wrapping the REST API with ergonomic language-specific interfaces. The Python SDK (pip install clawmesh-sdk) provides synchronous and async client classes, automatic pagination, and type hints that integrate with IDE tooling. The JavaScript SDK (@clawmesh/sdk) works in Node.js and browser environments, supporting tree-shaking for minimal bundle sizes.
Both SDKs handle authentication through environment variables or explicit configuration, manage automatic retries with configurable backoff, and parse API responses into typed objects rather than raw dictionaries. SDK source code is available on GitHub and follows the same contribution guidelines as the core platform. Community-maintained clients for Go, Ruby, and Rust are listed in the API documentation.
When SDK coverage lags behind the API, use the REST endpoints directly with standard HTTP libraries. The JSON response structure is stable and versioned, so direct HTTP calls are a practical fallback for any language without an official SDK. The OpenAPI specification document is downloadable from the dashboard and can be used to generate type-safe clients in any language supported by OpenAPI tooling.
Webhooks
Webhooks deliver real-time event notifications to your endpoints, enabling reactive integrations that do not require polling. Subscribe to events that matter to your workflow: agent state changes, task completions, skill activation logs, or error conditions. Each webhook delivery includes a JSON payload describing the event, a timestamp, and a signature for verification.
Verification uses HMAC-SHA256. The signature header X-Clawmesh-Signature contains the computed HMAC of the raw request body using your webhook secret. Always verify this signature before processing the payload to prevent spoofed requests. Reject any delivery where the signature does not match, and log the event for security analysis.
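The verification step can be sketched with Python's standard library. This assumes the signature header carries a hex-encoded digest; confirm the exact encoding against your webhook settings. The constant-time comparison matters: a naive `==` can leak timing information.

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Verify X-Clawmesh-Signature: HMAC-SHA256 over the raw request body.

    Always compute over the raw bytes as received, before any JSON parsing.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, preventing timing attacks.
    return hmac.compare_digest(expected, signature_header)

secret = b"whsec_demo"
body = b'{"event":"task.completed","id":"tsk_1"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, sig))         # True
print(verify_signature(secret, b"tampered", sig))  # False
```

Reject and log any delivery where this returns False, as described above.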
Webhook delivery failures trigger automatic retries with exponential backoff: the first retry fires after 5 seconds, then 30 seconds, 1 minute, 5 minutes, 30 minutes, and finally 2 hours. After six failed retries, the webhook is marked as failed and no further attempts occur. Monitor the webhook delivery dashboard in Settings to identify persistent failures and restore your endpoint's availability.
Integration patterns
Fleet monitoring integrates with existing observability stacks by querying the agents endpoint on a schedule and forwarding status to Prometheus, Datadog, or Grafana. The agents:read scope is sufficient for read-only monitoring. Aggregate metrics such as active agent count, task throughput, and error rates provide early warning of degraded fleet health.
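A monitoring job typically reduces the GET /agents response into a handful of gauge values before pushing them to the observability stack. The `status` field name and status values below are assumptions for illustration; map them to the actual response schema.

```python
from collections import Counter

def fleet_metrics(agents: list) -> dict:
    """Reduce a list of agent records into fleet-health gauges
    suitable for Prometheus, Datadog, or Grafana."""
    by_status = Counter(a.get("status", "unknown") for a in agents)
    return {
        "agents_total": len(agents),
        "agents_active": by_status["active"],
        "agents_errored": by_status["error"],
    }

sample = [{"status": "active"}, {"status": "active"}, {"status": "error"}]
print(fleet_metrics(sample))
```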
CI/CD integration uses POST /tasks to submit build or test tasks to agents, then webhooks or polling to collect results. This pattern is effective for code analysis, automated review workflows, and documentation generation pipelines that run on every pull request. The tasks:write scope is needed to submit work, and tasks:read is needed to collect results.
Internal tooling can embed agent capabilities directly into existing applications using the API. A customer support dashboard can route inquiries to specialized agents via POST /tasks. A data platform can schedule recurring processing tasks that transform and move data between systems. The API is intentionally general-purpose to support a wide range of internal use cases without requiring platform-specific adaptations.
Getting started
Begin by generating an API key from the dashboard. Start with the most restrictive scope that meets your needs. Use the sandbox environment (api.clawmesh.com/v1/sandbox) to test integration logic without affecting production agents or consuming plan resources. Sandbox endpoints mirror production behavior but operate on isolated data.
Review the API reference documentation for the full endpoint catalog, parameter details, and response schemas. Code examples are available in Python, JavaScript, and curl. Each endpoint page includes a live tester that lets you send requests directly from the browser after authenticating with your dashboard session.
When building production integrations, implement proper error handling for network failures, HTTP 429 rate limit responses, and HTTP 5xx server errors. Use exponential backoff with jitter when retrying failed requests. Set up alerting on webhook delivery failures so that integration gaps are detected within minutes rather than hours.
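Exponential backoff with jitter is a one-liner worth getting right. This sketch uses the "full jitter" variant: each delay is drawn uniformly between zero and a capped exponential ceiling, which spreads out retries from many clients that fail at the same moment. The base and cap values are illustrative defaults.

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Full-jitter exponential backoff delay, in seconds.

    attempt 0 -> up to 0.5s, attempt 1 -> up to 1s, ... capped at 30s.
    Suitable for retrying HTTP 429 and 5xx responses.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))

for attempt in range(5):
    print(f"attempt {attempt}: sleep up to {min(30.0, 0.5 * 2 ** attempt)}s")
```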
Q&A
How do I get an API key?
API keys are available in the dashboard under Settings > API Keys. Generate a key with the appropriate scope for your use case.
What are the rate limits?
Standard plans allow 1,000 requests per minute per workspace. Burst capacity of 3,000 requests per minute is available for short periods. Rate limit headers are returned on every response so your code can throttle proactively.
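Proactive throttling reads those headers and pauses before the limit is hit. The `X-RateLimit-Remaining` and `X-RateLimit-Reset` header names follow a common convention and are assumptions here; check the headers on an actual response.

```python
def throttle_wait(headers: dict, now: float) -> float:
    """Seconds to pause before the next request, based on
    rate-limit response headers (reset given as a Unix timestamp)."""
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset_at = float(headers.get("X-RateLimit-Reset", now))
    return max(0.0, reset_at - now) if remaining <= 0 else 0.0

print(throttle_wait({"X-RateLimit-Remaining": "0",
                     "X-RateLimit-Reset": "105"}, now=100.0))  # 5.0
```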
Can I use the API without an official SDK?
Yes. All API endpoints are accessible via standard HTTP libraries in any language. Download the OpenAPI spec from the dashboard to generate type-safe clients for your language of choice.
How do I verify webhook authenticity?
Every webhook delivery includes an X-Clawmesh-Signature header. Compute the HMAC-SHA256 of the raw body using your webhook secret and compare it to the header value. Reject any request where the signature does not match.