
Performance Best Practices

Optimize your monitoring implementation for minimal overhead and maximum reliability

Minimize Monitoring Overhead

SEER monitoring should add minimal overhead to your scripts. Follow these guidelines:

  • API calls typically take 50-200ms, which is negligible for most workflows
  • Use async/background sending for real-time applications
  • Batch multiple updates when possible
  • Set appropriate timeouts to prevent hanging

Async API Calls

Non-Blocking Monitoring
Use threading to avoid blocking your main application
import threading
from seerpy import SEER

seer = SEER(api_key="your_api_key")

def send_to_seer(status, **kwargs):
    """Send monitoring data in background thread"""
    thread = threading.Thread(
        target=lambda: getattr(seer, status)(**kwargs)
    )
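    # Daemon threads let the script exit without waiting for the request,
    # but an in-flight call can be dropped at interpreter exit; keep a
    # reference and join() the thread before exit if final delivery matters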
    thread.daemon = True
    thread.start()

# Usage - doesn't block main thread
try:
    results = process_data()
    send_to_seer('success', metadata={"count": len(results)})
    
except Exception as e:
    send_to_seer('error', error_message=str(e))

# Script continues immediately without waiting for API call

Batching Heartbeats

Efficient Heartbeat Strategy
Send heartbeats at appropriate intervals

Recommended Intervals

  • Short jobs (<5 min): No heartbeat needed, just report success/failure
  • Medium jobs (5-60 min): Send heartbeat every 5-10 minutes
  • Long jobs (>1 hour): Send heartbeat every 15-30 minutes
  • Continuous processes: Send heartbeat every 1-5 minutes

Implementation Example

import time
from seerpy import SEER

seer = SEER(api_key="your_api_key")
HEARTBEAT_INTERVAL = 300  # 5 minutes

last_heartbeat = time.time()

for item in large_dataset:
    # Process item
    process(item)
    
    # Send heartbeat if 5 minutes have passed
    if time.time() - last_heartbeat > HEARTBEAT_INTERVAL:
        seer.heartbeat("pipeline-id")
        last_heartbeat = time.time()

seer.success()
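
For continuous processes with no natural loop to hook into, a background thread can send heartbeats on a fixed schedule instead. A minimal sketch reusing the heartbeat() call above; the stop-event wiring is illustrative:

import threading

def start_heartbeats(seer, pipeline_id, interval=300):
    """Send heartbeats from a daemon thread until the returned event is set"""
    stop_event = threading.Event()

    def loop():
        # wait() returns False on timeout, True once stop_event is set
        while not stop_event.wait(interval):
            seer.heartbeat(pipeline_id)

    threading.Thread(target=loop, daemon=True).start()
    return stop_event

# Usage
stop = start_heartbeats(seer, "pipeline-id")
run_continuous_process()  # placeholder for your long-running work
stop.set()  # stop heartbeats before reporting final status
seer.success()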

Timeout Configuration

Set Appropriate Timeouts
Prevent hanging on network issues
import requests

# Configure timeouts for API calls
CONNECT_TIMEOUT = 5  # seconds to establish connection
READ_TIMEOUT = 10    # seconds to receive response

try:
    response = requests.post(
        "https://api.seer.ansrstudio.com/monitoring",
        headers={"Authorization": api_key},
        json=payload,
        timeout=(CONNECT_TIMEOUT, READ_TIMEOUT)
    )
except requests.exceptions.Timeout:
    # Handle timeout gracefully
    print("SEER API timeout, saving locally")
    save_offline(payload)
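
save_offline() above is not provided by seerpy; a minimal sketch, assuming a local JSONL spool file (the filename is arbitrary) that a later run or a scheduled job can replay once the network recovers:

import json
from pathlib import Path

SPOOL_FILE = Path("seer_offline_queue.jsonl")  # hypothetical local spool

def save_offline(payload):
    """Append a failed payload to the spool file for later retry"""
    with SPOOL_FILE.open("a") as f:
        f.write(json.dumps(payload) + "\n")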

Log Size Optimization

Keep Logs Manageable
Balance detail with performance

Strategies

  • Send full logs only on errors (see the sketch after this list)
  • Send summary logs on success
  • Truncate to last 500-1000 lines for large outputs
  • Use appropriate log levels (DEBUG in dev, INFO/ERROR in prod)
  • Compress logs before sending if very large (sketched at the end of this section)
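
A sketch of the first two strategies. This assumes success() accepts a logs keyword the same way error() does; confirm against the seerpy API before relying on it:

try:
    results = run_job()  # placeholder for your workload
    # Success: a short summary is enough
    seer.success(logs=f"Processed {len(results)} records")
except Exception as e:
    # Failure: send the full detail for debugging
    seer.error(error_message=str(e), logs=read_full_logs())  # read_full_logs() is a placeholder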

Smart Log Truncation

def smart_truncate_logs(logs, max_size=50000):
    """
    Keep beginning and end of logs, truncate middle
    Useful for large log files
    """
    if len(logs) <= max_size:
        return logs
    
    keep_size = max_size // 2
    return (
        logs[:keep_size] + 
        "\n\n... [TRUNCATED] ...\n\n" + 
        logs[-keep_size:]
    )

# Usage
seer.error(
    error_message=str(e),
    logs=smart_truncate_logs(all_logs)
)
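
For the compression strategy, a minimal sketch that gzips the log text and base64-encodes it so the result still fits in a JSON string field. Whether the SEER API accepts compressed logs is not covered here; confirm support (and the expected encoding) before using this:

import base64
import gzip

def compress_logs(logs: str) -> str:
    """gzip then base64 so the payload remains valid JSON text"""
    return base64.b64encode(gzip.compress(logs.encode("utf-8"))).decode("ascii")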

Rate Limiting

Stay Within Limits

SEER has a rate limit of 100 requests per minute across all API endpoints. This is sufficient for most use cases.

Guidelines

  • Typical job monitoring uses 2-10 requests per run (start, heartbeats, end)
  • Don't send heartbeats more frequently than once per minute
  • Batch updates when monitoring multiple pipelines
  • Use exponential backoff if you receive 429 (rate limit) responses

Note: If you consistently hit rate limits, consider consolidating multiple small scripts into larger jobs, or contact support for higher limits.
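
A minimal backoff sketch using requests directly; the endpoint and payload follow the timeout example above, and the attempt count and delays are illustrative rather than SEER-mandated values:

import time
import requests

def post_with_backoff(url, headers, payload, max_attempts=5):
    """POST with exponential backoff on 429 responses"""
    for attempt in range(max_attempts):
        response = requests.post(url, headers=headers, json=payload, timeout=(5, 10))
        if response.status_code != 429:
            return response
        # Back off 1s, 2s, 4s, ...; honor Retry-After when it is given in seconds
        retry_after = response.headers.get("Retry-After", "")
        delay = float(retry_after) if retry_after.isdigit() else float(2 ** attempt)
        time.sleep(delay)
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts")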

Performance Checklist
  • Set timeouts on all API calls (5-10 seconds recommended)
  • Use background threads for non-blocking monitoring
  • Send heartbeats at appropriate intervals for job duration
  • Truncate logs to reasonable sizes (50KB-1MB)
  • Implement offline storage for network failures
  • Respect rate limits (100 requests/minute)
  • Keep metadata concise and relevant