```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1705334400

{
  "status": "success",
  "message": "Job status updated"
}
```

| Header | Description |
| --- | --- |
| `X-RateLimit-Limit` | Maximum requests per minute |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
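A client can use these headers to avoid hitting the limit in the first place: when the remaining quota reaches zero, wait until the reset timestamp before sending the next request. A minimal sketch; `backoff_from_headers` is an illustrative helper, not part of the seerpy SDK:

```python
import time

def backoff_from_headers(headers, now=None):
    """Return seconds to wait before the next request, based on the
    rate-limit headers of the previous response."""
    now = now if now is not None else time.time()
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset = int(headers.get("X-RateLimit-Reset", 0))
    if remaining > 0:
        return 0.0  # quota left in this window; no need to wait
    # Out of quota: wait until the window resets
    return max(0.0, reset - now)
```

For the 200 response above (`X-RateLimit-Remaining: 87`) this returns `0.0`; once remaining hits zero, it returns the seconds left until `X-RateLimit-Reset`.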
The seerpy SDK automatically handles rate limits with exponential backoff:

```python
from seerpy import Seer

seer = Seer(apiKey='YOUR_API_KEY')

# SDK automatically retries on 429 (Too Many Requests)
with seer.monitor("my-job"):
    # Your code here
    pass

# Retry schedule:
# - 1st retry: 1 second
# - 2nd retry: 2 seconds
# - 3rd retry: 4 seconds
```

Group multiple operations into a single monitored block:
```python
# ❌ Bad: Multiple API calls
for item in items:
    with seer.monitor(f"process-{item}"):
        process(item)

# ✅ Good: Single API call for batch
with seer.monitor("process-batch",
                  metadata={"count": len(items)}):
    for item in items:
        process(item)
```

Send heartbeats at reasonable intervals:
```python
# ❌ Bad: Too frequent
for i in range(1000):
    seer.heartbeat("my-job")  # 1000 API calls!
    process_item(i)

# ✅ Good: Reasonable intervals
for i in range(1000):
    if i % 100 == 0:  # Every 100 items
        seer.heartbeat("my-job",
                       metadata={"progress": f"{i}/1000"})
    process_item(i)
```

When you exceed the limit, the API returns `429 Too Many Requests` with a `Retry-After` header:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 60
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705334460

{
  "error": "Rate limit exceeded",
  "message": "Too many requests. Please retry after 60 seconds.",
  "retry_after": 60
}
```

You can track your consumption in several places:

- **Dashboard → Usage**: View real-time API usage and remaining quota
- **Usage Alerts**: Get notified at 80% and 95% of your quota
- **Historical Data**: View usage trends over time
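Clients that call the API directly, without the SDK, should honor the `Retry-After` header from the 429 response shown earlier. A minimal sketch; `send_request` is a hypothetical stand-in for your HTTP call and is assumed to return a `requests`-style response object:

```python
import time

def call_with_retry(send_request, max_retries=3):
    """Call send_request() and honor Retry-After on 429 responses.

    send_request is a stand-in for your HTTP call; it must return an
    object with .status_code and .headers (e.g. a requests.Response).
    """
    for attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429:
            return response
        if attempt == max_retries:
            break
        # The server says how long to wait; fall back to 60 seconds
        wait = int(response.headers.get("Retry-After", 60))
        time.sleep(wait)
    raise RuntimeError("Rate limit: retries exhausted")
```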
- **Batch operations when possible**: Reduce API calls by grouping work
- **Use heartbeats strategically**: Send at reasonable intervals (every 5-10 minutes)
- **Monitor your usage**: Check the dashboard regularly to avoid surprises
- **Let the SDK handle retries**: Built-in exponential backoff handles rate limits
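The SDK's built-in schedule (1 s, 2 s, 4 s) is a standard doubling backoff. A minimal sketch of the pattern, with `RateLimitError` as an illustrative stand-in for the SDK's actual exception type:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the SDK's 429 error (illustrative only)."""

def with_backoff(fn, retries=3, base=1.0):
    """Retry fn on RateLimitError, doubling the delay each time:
    1 s before the 1st retry, 2 s before the 2nd, 4 s before the 3rd."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == retries:
                raise  # give up after the last retry
            time.sleep(base * (2 ** attempt))
```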