Rate limits

The programmatic API enforces a simple per-account rate limit on top of the monthly credit quota. This page covers the limit, how it's measured, and how to back off cleanly.


The limit

Setting        Value
Window         Rolling 60 seconds
Max requests   60 per billing account (rolling window)
Counted        Successful metered calls, 429s, and 500s after auth/plan checks pass (rolling window on the server). 401 / 403 do not count toward this cap.
Source         Product default: 60 requests per minute per account.

Effectively: one request per second on average, with bursts allowed up to 60 inside any 60-second window.

When you cross the limit, the next request returns:

HTTP/1.1 429 Too Many Requests
Content-Type: application/json

{
  "success": false,
  "error": "Rate limit exceeded. Max 60 requests per minute."
}

The 429 itself is logged and counts toward the window, but it ages out after 60 seconds like any other request. A stray retry or two is harmless; a tight retry loop, however, keeps the window full.


How the window works

The limiter is a rolling window, not a fixed bucket:

  1. On each request, the server counts how many of your requests landed in the last 60 seconds.
  2. If that count is already ≥ 60, the request is rejected with 429.
  3. Otherwise it proceeds, gets logged, and contributes to future windows.

So a burst of 60 requests at t=0 clears at t=60, not at the top of the next minute. There is no Retry-After header — wait roughly 1 second and try again, or back off as below.
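The rolling-window check described above can be mirrored client-side to pace requests before they ever reach the server. This is a sketch, not part of the product: the function name and API are illustrative, and the defaults match the documented 60-per-60s limit:

```javascript
// Client-side mirror of the server's rolling 60-second window.
function makeRollingWindow({ limit = 60, windowMs = 60_000 } = {}) {
  const stamps = []; // timestamps of requests still inside the window
  return {
    // True if a request sent at `now` stays under the limit; records it if so.
    tryAcquire(now = Date.now()) {
      while (stamps.length && now - stamps[0] >= windowMs) stamps.shift();
      if (stamps.length >= limit) return false;
      stamps.push(now);
      return true;
    },
    // How long until the oldest counted request ages out and frees a slot.
    msUntilSlot(now = Date.now()) {
      return stamps.length < limit ? 0 : windowMs - (now - stamps[0]);
    },
  };
}
```

A burst that fills the window at t=0 gets its first slot back a full window later, not at the top of the next minute, which matches the server behavior above.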


Backoff strategy

A pragmatic loop:

async function withRetry(fn, { maxAttempts = 6, baseMs = 1000 } = {}) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Only these two error shapes are transient enough to retry.
      const isRateLimited = /Rate limit exceeded/.test(err.message);
      const isUpstream5xx = /Upstream error/.test(err.message);

      if (!isRateLimited && !isUpstream5xx) throw err; // not retryable
      if (attempt >= maxAttempts) throw err;           // retries exhausted

      // Exponential backoff (1s, 2s, 4s, ...) plus up to 250ms of jitter
      // so parallel callers don't retry in lockstep.
      const delay = baseMs * Math.pow(2, attempt - 1) + Math.random() * 250;
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}

Notes:

  • Use exponential backoff with jitter. A flat setTimeout(1000) on every retry will collide with itself once you run requests in parallel.
  • Don't retry on 401, 403, 404, or the credit-exhaustion 429. None of those recover within a minute.
  • Cap attempts. Six attempts with baseMs=1000 add up to roughly 30 seconds of cumulative backoff (1+2+4+8+16 s, plus jitter). If you're still rate-limited after that, you're sustaining over 60/min and need fewer concurrent workers, not more retries.

Designing within the limit

Workload                            Concurrency to stay safe
Quick scripts / one-off backfills   1 worker, no special pacing. You will hit the quota long before the rate limit.
Steady ETL                          1 worker, sleep ~1s between requests. Predictable, never bursts.
Burst-heavy crawlers                2–4 workers with a token bucket sized at 60/min total across all workers. Centralize the bucket.
Live dashboards / per-user fetches  Cache at the edge; never proxy more than 60/min/user upstream.

The limit is per account, not per IP or per API key; generating multiple keys does not raise it.
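The shared token bucket mentioned in the table can live in one place that every worker draws from. A minimal sketch for a single process; the factory name and the injectable `now` parameter are illustrative, and the defaults match the documented 60/min cap:

```javascript
// One process-wide bucket sized to the documented 60/min cap.
// All workers call `await bucket.take()` before each request.
function makeTokenBucket({ capacity = 60, refillPerMs = 60 / 60_000, now = Date.now() } = {}) {
  let tokens = capacity;
  let last = now;
  const bucket = {
    // Take one token, sleeping until the steady refill makes one available.
    async take(now = Date.now()) {
      tokens = Math.min(capacity, tokens + (now - last) * refillPerMs);
      last = now;
      if (tokens < 1) {
        const waitMs = Math.ceil((1 - tokens) / refillPerMs);
        await new Promise((r) => setTimeout(r, waitMs));
        return bucket.take();
      }
      tokens -= 1;
    },
  };
  return bucket;
}
```

Bursts drain the 60-token reserve; sustained traffic settles at one request per second. If your workers run as separate processes, the bucket has to move to shared infrastructure (e.g. a Redis counter) instead of process memory.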

Rate limit vs credits

Concern   Limit                      Resets
Burst     60 / 60s rolling           Continuously (window rolls)
Volume    20,000 / calendar month    First metered request in the new month (server Y-m)

In practice, sustained workloads run into the credit quota long before the rate limit: at the full 60 requests per minute, 20,000 credits last under six hours. The rate limit exists mostly to stop runaway loops.


Determining the cause of a 429 error

Both the per-minute rate limit and the monthly credit limit return a 429 status code. You can distinguish them by inspecting the response body:

if (res.status === 429) {
  const body = await res.json();
  if (body && body.credits) {
    // Monthly credit limit reached.
    // Do not retry until next month or you upgrade your plan.
  } else {
    // Per-minute rate limit reached.
    // Implement a backoff strategy and retry.
  }
}

See Errors for the full status-code map.


Common mistakes

  • Tight loops without sleep. A for loop without await between requests will burn through 60 calls in milliseconds and stall.
  • Parallelizing without a shared limiter. Two workers each doing 40/min will trigger the limit even though each looks "safe".
  • Retrying 403 / 404 as if they were rate limits. Those mean "access denied" or "wrong path"; they will not succeed without changing the request or account.
  • Polling /credits to detect rate limiting. That endpoint counts toward the rate limit (and costs a credit). Look at the actual 429 instead.
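The first mistake above has a one-line fix: put an explicit pause between sequential requests. A sketch, with the handler standing in for your real API call:

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Process items one at a time at roughly one request per second.
async function runPaced(items, handler, gapMs = 1000) {
  const results = [];
  for (const item of items) {
    results.push(await handler(item)); // sequential: never bursts
    await sleep(gapMs);
  }
  return results;
}
```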