REST API

v1

Use our REST API to integrate SEER with any programming language or platform

Authentication
Include your API key in the Authorization header of every request.

Header Format:

Authorization: your-api-key

Example Request:

curl -X POST https://api.seer.ansrstudio.com/monitoring \
  -H "Authorization: df_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "job_name": "daily-etl",
    "status": "running",
    "run_id": null,
    "start_time": "2024-07-03T20:38:44.780607+00:00",
    "end_time": "2024-07-03T20:48:44.780607+00:00",
    "metadata": {"rows_processed": 10000},
    "error_details": "None",
    "tags": null,
    "logs": "ETL completed successfully"
  }'

Note: Get your API key from the SEER dashboard under Settings → API Keys
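In Python, the header can be attached once to a requests session so every call is authenticated; a minimal sketch (the key value is a placeholder):

import requests

# Reusable session that attaches the API key to every request.
# Replace the placeholder key with your own from Settings → API Keys.
session = requests.Session()
session.headers.update({
    "Authorization": "df_your_api_key_here",
    "Content-Type": "application/json",
})
# session.post(...) and session.get(...) now carry the header automatically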

Available Endpoints

POST /monitoring
Send job status updates and execution details

Note: The monitoring endpoint has two different behaviors depending on the status:

  • Start Run (status="running"): Send run_id as an empty string; the server generates a run ID and returns it
  • End Run (status="success"/"failed"/"cancelled"): Include the run_id returned by the start request

Start Run
Starting a new job run (status="running")

Request Body:
{
  "job_name": "daily-etl",           // Required: Unique job identifier
  "status": "running",               // Required: Must be "running" for start
  "run_id": "",                      // Required: Empty string for start run
  "start_time": "2024-07-03T20:38:44.780607+00:00",  // Required: UTC timestamp
  "end_time": null,                  // Required: null for start run
  "metadata": {                      // Optional: Custom metadata
    "source": "postgres",
    "target": "s3",
    "config": "production"
  },
  "error_details": null,             // Optional: null for start run
  "tags": null,                      // Optional: Tags for categorization
  "logs": "Starting ETL process"     // Optional: Initial logs
}
Response:
{
  "run_id": "run-1234567890"  // Save this for the end run request
}

End Run
Completing a job run (status="success"/"failed"/"cancelled")

Request Body:
{
  "job_name": "daily-etl",           // Required: Same job identifier
  "status": "success",               // Required: "success" | "failed" | "cancelled"
  "run_id": "run-1234567890",        // Required: run_id from start run response
  "start_time": "2024-07-03T20:38:44.780607+00:00",  // Required: UTC timestamp
  "end_time": "2024-07-03T20:48:44.780607+00:00",    // Required: UTC timestamp
  "metadata": {                      // Optional: Final metadata
    "rows_processed": 10000,
    "duration_ms": 600000
  },
  "error_details": "None",           // Optional: Error message if failed
  "tags": null,                      // Optional: Tags for categorization
  "logs": "ETL completed successfully"  // Optional: Final logs
}
Response:
{
  "update_status": "success"  // or "Failed" if update failed
}

Complete Example:

import requests
from datetime import datetime, timezone

def monitor_job():
    url = "https://api.seer.ansrstudio.com/monitoring"
    headers = {
        "Authorization": "df_your_api_key_here",
        "Content-Type": "application/json"
    }
    
    # Step 1: Start the run
    start_time = datetime.now(timezone.utc).isoformat()
    start_payload = {
        "job_name": "daily-etl",
        "status": "running",
        "run_id": "",  # Empty string for start run
        "start_time": start_time,
        "end_time": None,  # null for start run
        "metadata": {"source": "postgres"},
        "error_details": None,
        "tags": None,
        "logs": "Starting ETL"
    }
    
    start_response = requests.post(url, headers=headers, json=start_payload)
    run_id = start_response.json().get("run_id")
    print(f"Started run: {run_id}")
    
    # Step 2: Do your work
    try:
        process_data()  # Your actual work here
        final_status = "success"
        error_details = "None"
    except Exception as e:
        final_status = "failed"
        error_details = str(e)
    
    # Step 3: End the run
    end_payload = {
        "job_name": "daily-etl",
        "status": final_status,
        "run_id": run_id,  # Include the run_id from step 1
        "start_time": start_time,
        "end_time": datetime.now(timezone.utc).isoformat(),
        "metadata": {"rows_processed": 10000},
        "error_details": error_details,
        "tags": None,
        "logs": f"ETL {final_status}"
    }
    
    end_response = requests.post(url, headers=headers, json=end_payload)
    update_status = end_response.json().get("update_status")
    print(f"Update status: {update_status}")

monitor_job()

Important: Always use UTC timestamps (ISO 8601 format with timezone). The start_time must be included in both start and end requests for consistency.
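This is easy to get wrong with naive datetimes; a short Python illustration (outputs shown are illustrative):

from datetime import datetime, timezone

# Correct: timezone-aware UTC timestamp with an explicit offset
datetime.now(timezone.utc).isoformat()
# -> '2024-07-03T20:38:44.780607+00:00'

# Avoid: naive local time, which carries no timezone information
datetime.now().isoformat()
# -> '2024-07-03T13:38:44.780607'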

POST /heartbeat
Send periodic heartbeat signals to indicate job health

Request Body:

{
  "job_name": "daily-etl",           // Required: Job identifier
  "current_time": "2024-01-15T10:00:00Z",  // Required: ISO 8601 format
  "metadata": {                      // Optional: Additional context
    "status": "healthy",
    "last_run": "2024-01-15T09:00:00Z",
    "progress": "50%"
  }
}

Response:

{
  "success": true,
  "message": "Heartbeat received",
  "next_expected": "2024-01-15T10:05:00Z"
}

Example:

curl -X POST https://api.seer.ansrstudio.com/heartbeat \
  -H "Authorization: df_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "job_name": "daily-etl",
    "current_time": "2024-01-15T10:00:00Z",
    "metadata": {
      "status": "healthy",
      "progress": "50%"
    }
  }'

Best Practice: Send heartbeats at regular intervals (e.g., every 5 minutes) during long-running jobs to indicate the job is still active and healthy.
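One way to follow this practice is a background thread that posts a heartbeat until the main job signals completion; a minimal sketch (the job name, interval, and df_ key are placeholders):

import threading
import requests
from datetime import datetime, timezone

def heartbeat_loop(stop_event, interval_seconds=300):
    """Send a heartbeat every interval_seconds until stop_event is set."""
    headers = {
        "Authorization": "df_your_api_key_here",
        "Content-Type": "application/json",
    }
    while not stop_event.is_set():
        payload = {
            "job_name": "daily-etl",
            "current_time": datetime.now(timezone.utc).isoformat(),
            "metadata": {"status": "healthy"},
        }
        try:
            requests.post("https://api.seer.ansrstudio.com/heartbeat",
                          headers=headers, json=payload, timeout=30)
        except requests.exceptions.RequestException:
            pass  # a missed heartbeat should not kill the job itself
        stop_event.wait(interval_seconds)

stop = threading.Event()
thread = threading.Thread(target=heartbeat_loop, args=(stop,), daemon=True)
thread.start()
# ... run the long job ...
stop.set()
thread.join()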

Error Handling

HTTP Status Codes

  • 200 Success: Request processed successfully
  • 401 Unauthorized: Invalid or missing API key
  • 429 Rate Limit Exceeded: Too many requests; implement exponential backoff
  • 500 Server Error: Internal server error; retry with exponential backoff

Retry Logic Example (Python):

import requests
import time

def send_with_retry(url, payload, max_retries=3):
    headers = {
        "Authorization": "df_your_api_key_here",
        "Content-Type": "application/json"
    }
    
    for attempt in range(max_retries):
        try:
            response = requests.post(url, headers=headers, json=payload, timeout=30)
            
            if response.status_code == 200:
                return response.json()
            elif response.status_code == 429:
                # Rate limited, wait and retry
                wait_time = 2 ** attempt  # Exponential backoff: 1s, 2s, 4s
                print(f"Rate limited, waiting {wait_time}s...")
                time.sleep(wait_time)
            elif response.status_code >= 500:
                # Server error, retry
                wait_time = 2 ** attempt
                print(f"Server error, retrying in {wait_time}s...")
                time.sleep(wait_time)
            else:
                # Client error, don't retry
                print(f"Error {response.status_code}: {response.text}")
                return None
                
        except requests.exceptions.Timeout:
            print(f"Request timeout, attempt {attempt + 1}/{max_retries}")
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
        except requests.exceptions.RequestException as e:
            print(f"Request failed: {e}")
            return None
    
    print("Max retries exceeded")
    return None
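For example, wrapping the heartbeat call from above (payload values are illustrative):

result = send_with_retry(
    "https://api.seer.ansrstudio.com/heartbeat",
    {
        "job_name": "daily-etl",
        "current_time": "2024-01-15T10:00:00Z",
    },
)
if result is not None:
    print(result)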

Rate Limits

Free Plan

  • 100 requests per hour
  • Unlimited requests per month
  • 3 active pipelines

Pro Plan

  • 100 requests per hour
  • Unlimited monthly requests
  • Unlimited pipelines

Tip: Implement exponential backoff and request batching to stay within rate limits. Monitor your usage in the dashboard under Settings → Usage.
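A simple client-side guard for the hourly cap is to enforce a minimum interval between calls; a sketch (the 100-per-hour figure comes from the plan limits above):

import time

class Throttle:
    """Enforce a minimum interval between API calls (client-side only)."""

    def __init__(self, max_per_hour=100):
        self.interval = 3600.0 / max_per_hour
        self.last_call = 0.0

    def wait(self):
        # Sleep just long enough to keep the average rate under the cap.
        delay = self.last_call + self.interval - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        self.last_call = time.monotonic()

throttle = Throttle(max_per_hour=100)
# Call throttle.wait() before each request, e.g. before send_with_retry(...)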