Rate Limits

All API endpoints are rate-limited using a sliding window algorithm. Limits are enforced per agent (for SDK endpoints) or per IP/user (for dashboard and auth endpoints).

Limits by Endpoint

| Endpoint Category | Limit | Window | Key |
| --- | --- | --- | --- |
| SDK payment endpoints (`/request`, `/execute`, `/approve`, `/confirm`) | 60 requests | 1 minute | Per agent |
| SDK read endpoints (`/wallets`, `/transactions`, `/policies`, `/setup`) | 120 requests | 1 minute | Per agent |
| Auth endpoints (login, register) | 10 requests | 5 minutes | Per IP |
| Auth endpoints (login, register) | 5 requests | 5 minutes | Per account |
| Dashboard API endpoints | 100 requests | 1 minute | Per user |
Auth endpoints enforce two independent limits: per-IP and per-account. The per-account limit prevents credential stuffing attacks that rotate source IPs. Both limits must pass for a request to proceed.
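
The dual-limit check can be sketched as follows. This is an illustrative example, not the production implementation; `checkAuthLimits`, the counter arguments, and the `LimitResult` shape are hypothetical names, with the limits taken from the table above.

```typescript
type LimitResult = { allowed: boolean; reason?: 'ip' | 'account' };

// Both the per-IP and per-account limits must pass for a request to proceed.
// `ipHits` and `accountHits` stand in for sliding-window counts over 5 minutes.
function checkAuthLimits(
  ipHits: number,
  accountHits: number,
  ipLimit = 10,      // per IP / 5 minutes
  accountLimit = 5   // per account / 5 minutes
): LimitResult {
  if (ipHits >= ipLimit) return { allowed: false, reason: 'ip' };
  if (accountHits >= accountLimit) return { allowed: false, reason: 'account' };
  return { allowed: true };
}
```

Note that a request can be denied on the account limit even from a fresh IP, which is what defeats IP-rotating credential stuffing.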

Response Headers

Every API response includes rate limit headers:
| Header | Description |
| --- | --- |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | ISO 8601 timestamp when the window resets |
| `Retry-After` | Seconds to wait (only present when rate limited) |
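
Clients can use these headers to throttle proactively instead of waiting for a 429. A minimal sketch (the function name is illustrative; it assumes a standard Fetch `Response`):

```typescript
// Returns how long (in ms) to pause before the next request, based on
// the rate limit headers above. Zero means it is safe to continue.
function pauseMs(res: Response): number {
  const remaining = parseInt(res.headers.get('X-RateLimit-Remaining') ?? '1', 10);
  if (remaining > 0) return 0;
  const reset = res.headers.get('X-RateLimit-Reset'); // ISO 8601 timestamp
  if (!reset) return 0;
  // Wait until the window resets; clamp to zero if the reset is in the past.
  return Math.max(new Date(reset).getTime() - Date.now(), 0);
}
```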

Rate Limited Response

When a request is rate limited, the API returns HTTP 429:
```json
{
  "error": "Rate limit exceeded. Please slow down your requests.",
  "code": "RATE_LIMITED",
  "retryAfter": 12
}
```

Retry Strategy

The SDK handles retries automatically for 429 and 5xx responses:
  • Up to 3 retries with exponential backoff
  • Respects Retry-After headers
  • Backoff: 1s, 2s, 4s (capped at 10s)
  • Client errors (4xx except 429) are not retried
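The backoff schedule above can be expressed as a small helper. This is a sketch of the described behavior, not the SDK's internal code; the function name is illustrative:

```typescript
// Delay before retry `attempt` (0-based): exponential 1s, 2s, 4s, ...
// capped at 10s, with a Retry-After header taking precedence when present.
function backoffMs(attempt: number, retryAfterSec?: number): number {
  if (retryAfterSec !== undefined) return retryAfterSec * 1000;
  return Math.min(1000 * 2 ** attempt, 10_000);
}
```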
For manual retry logic:
```typescript
async function retryOnRateLimit(fn: () => Promise<Response>, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fn();
    // Any non-429 response (success or other error) is returned to the caller.
    if (response.status !== 429) return response;
    // Honor the Retry-After header; fall back to 5 seconds if it is absent.
    const retryAfter = parseInt(response.headers.get('Retry-After') || '5', 10);
    await new Promise(r => setTimeout(r, retryAfter * 1000));
  }
  throw new Error('Rate limited after max retries');
}
```

Backend

Rate limiting uses Redis (Upstash) in production with a sliding window counter. In development without Redis, an in-memory fallback is used (not suitable for production multi-instance deployments). On Redis errors, the limiter fails closed (denies the request) to prevent abuse during outages.
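
The in-memory fallback can be sketched as a sliding-window counter like the one below. This is an illustrative model of the behavior described above, not the production Redis implementation, and the class name is hypothetical:

```typescript
// Sliding-window limiter: a request is allowed only if fewer than `limit`
// hits for the same key fall within the trailing `windowMs` interval.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>(); // key -> hit timestamps (ms)

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop hits that have slid out of the window.
    const recent = (this.hits.get(key) ?? []).filter(t => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Unlike a fixed-window counter, this never admits a burst of 2× the limit at a window boundary, since the window slides continuously with each request.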