# Rate Limits
All API endpoints are rate-limited using a sliding window algorithm. Limits are enforced per agent (for SDK endpoints) or per IP/user (for dashboard and auth endpoints).
## Limits by Endpoint
| Endpoint Category | Limit | Window | Key |
|---|---|---|---|
| SDK payment endpoints (`/request`, `/execute`, `/approve`, `/confirm`) | 60 requests | 1 minute | Per agent |
| SDK read endpoints (`/wallets`, `/transactions`, `/policies`, `/setup`) | 120 requests | 1 minute | Per agent |
| Auth endpoints (login, register) | 10 requests | 5 minutes | Per IP |
| Auth endpoints (login, register) | 5 requests | 5 minutes | Per account |
| Dashboard API endpoints | 100 requests | 1 minute | Per user |
Auth endpoints enforce two independent limits: per-IP and per-account. The per-account limit prevents credential stuffing attacks that rotate source IPs. Both limits must pass for a request to proceed.
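The dual-limit check can be sketched as follows. This is a minimal illustration with hypothetical helper names; it counts hits in a plain map and omits window expiry, which the real limiter handles.

```typescript
// Sketch: both the per-IP and per-account limits must pass for an auth
// request to proceed. Window expiry is omitted for brevity.
function checkLimit(hits: Map<string, number>, key: string, max: number): boolean {
  const count = (hits.get(key) ?? 0) + 1;
  hits.set(key, count);
  return count <= max;
}

function authRequestAllowed(hits: Map<string, number>, ip: string, account: string): boolean {
  // Per-IP: 10 requests / 5 minutes; per-account: 5 requests / 5 minutes.
  const byIp = checkLimit(hits, `ip:${ip}`, 10);
  const byAccount = checkLimit(hits, `acct:${account}`, 5);
  return byIp && byAccount;
}
```

Note that both counters are incremented even when the request is denied, so retrying against a tripped limit does not help an attacker.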
## Rate Limit Headers

Every API response includes rate limit headers:
| Header | Description |
|---|---|
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | ISO 8601 timestamp when the window resets |
| `Retry-After` | Seconds to wait (only present when rate limited) |
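Clients can read these headers to pace themselves before tripping the limit. A minimal sketch (the helper name is illustrative, not part of the SDK):

```typescript
// Sketch: extract the rate limit budget from a fetch response.
function parseRateLimitHeaders(response: Response): { remaining: number; resetAt: Date | null } {
  // Requests left in the current window; treat a missing header as 0.
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') ?? '0', 10);
  // ISO 8601 timestamp for when the window resets, if present.
  const reset = response.headers.get('X-RateLimit-Reset');
  return { remaining, resetAt: reset ? new Date(reset) : null };
}
```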
## Rate Limited Response
When a request is rate limited, the API returns HTTP 429:
```json
{
  "error": "Rate limit exceeded. Please slow down your requests.",
  "code": "RATE_LIMITED",
  "retryAfter": 12
}
```
## Retry Strategy
The SDK handles retries automatically for 429 and 5xx responses:
- Up to 3 retries with exponential backoff
- Respects `Retry-After` headers
- Backoff delays of 1s, 2s, 4s (capped at 10s)
- Client errors (4xx except 429) are not retried
For manual retry logic:
```typescript
async function retryOnRateLimit(fn: () => Promise<Response>, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fn();
    if (response.status === 429) {
      // Honor the server's Retry-After header; default to 5 seconds if absent.
      const retryAfter = parseInt(response.headers.get('Retry-After') || '5', 10);
      await new Promise(r => setTimeout(r, retryAfter * 1000));
      continue;
    }
    return response;
  }
  throw new Error('Rate limited after max retries');
}
```
## Backend
Rate limiting uses Redis (Upstash) in production with a sliding window counter. In development without Redis, an in-memory fallback is used (not suitable for production multi-instance deployments).
On Redis errors, the limiter fails closed (denies the request) to prevent abuse during outages.
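The sliding window can be sketched with an in-memory timestamp log. This is only an illustration of the algorithm, not the production implementation (which keeps its counters in Redis); the class and method names are hypothetical.

```typescript
// Sketch: a sliding window limiter that logs hit timestamps per key and
// counts only those inside the current window.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private max: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const recent = (this.hits.get(key) ?? []).filter(t => t > cutoff);
    if (recent.length >= this.max) {
      this.hits.set(key, recent);
      return false; // over the limit within the window
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Unlike a fixed window, this never admits a burst of 2x the limit straddling a window boundary, since every request is judged against the trailing window ending at the current instant.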