Once you monitor many domains, "run everything every minute" stops being practical. It burns resources, creates noisy spikes, and still doesn't align with different monitoring cadences.
A run-due endpoint solves this with a better pattern: run only what's due, cap the batch size, and repeat frequently.
Why this pattern works at scale
It gives you predictable load while still keeping checks timely:
- each domain keeps its own interval
- due domains are selected just-in-time
- a batch-size cap prevents any single sweep from exploding

That means one customer can monitor 100 domains with mixed cadences and still get controlled execution behavior.
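The due-selection step above can be sketched in a few lines. This is an illustrative model, not the product's actual implementation; the `interval_s` / `last_run` field names are assumptions:

```python
import time

def select_due(monitors, limit, now=None):
    """Pick monitors whose own interval has elapsed, capped at `limit`.

    `monitors` is a list of dicts with `interval_s` and `last_run`
    (epoch seconds). Field names are illustrative, not a real schema.
    """
    now = time.time() if now is None else now
    due = [m for m in monitors if now - m["last_run"] >= m["interval_s"]]
    # Oldest-overdue first, so nothing starves when the cap is hit.
    due.sort(key=lambda m: m["last_run"])
    return due[:limit]
```

Because each monitor carries its own interval, a 1-minute check and a daily check coexist in the same sweep without special-casing either.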
How teams usually schedule it
The cleanest pattern is a frequent, lightweight scheduler that runs due checks in small batches:
- run every 1 to 5 minutes
- set the batch-size parameter to a number that keeps execution predictable for your environment
- rely on each monitor's own interval to determine which domains are actually due
This keeps the scheduler simple. All the cadence logic lives in monitor configuration, not in cron expressions scattered across environments.
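The scheduler itself stays trivial under this split. A minimal sketch, where `trigger` stands in for whatever actually calls the run-due endpoint (an HTTP POST, a CLI wrapper); the function name and signature are assumptions for illustration:

```python
import time

def run_scheduler(trigger, interval_s=60, iterations=None):
    """Call `trigger()` every `interval_s` seconds.

    `iterations=None` loops forever; a finite value is handy for tests.
    The endpoint itself bounds the batch size, so this loop carries no
    cadence logic at all.
    """
    count = 0
    while iterations is None or count < iterations:
        try:
            trigger()
        except Exception as exc:
            # A failed sweep is fine: due monitors are simply
            # picked up on the next tick.
            print(f"sweep failed: {exc}")
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval_s)
    return count
```

Swallowing trigger failures is deliberate: the next sweep retries everything still due, so the loop never needs its own retry bookkeeping.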
Endpoint
Optional query parameter:
cURL example
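As a stand-in for the curl command, this sketch builds the equivalent request with Python's `urllib`. The base URL, path, `limit` parameter name, and bearer auth are all hypothetical placeholders, not confirmed details of the API:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical values: substitute your real base URL, endpoint path,
# query parameter name, and API key.
BASE = "https://api.example.com"
PATH = "/v1/monitors/run-due"
params = {"limit": 25}  # cap how many due monitors one sweep may run

url = f"{BASE}{PATH}?{urlencode(params)}"
req = Request(url, method="POST",
              headers={"Authorization": "Bearer <API_KEY>"})
# urllib.request.urlopen(req) would send it; shown unsent here.
print(req.get_method(), req.full_url)
```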
What the response tells you
- how many runs executed
- what batch size was requested
- trigger source and execution timestamp
This is useful for cron logs, operator dashboards, and runbook verification.
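Parsing those fields into a one-line log entry might look like this. The response shape and field names here are assumptions for illustration, not the API's documented schema:

```python
import json

# Hypothetical response body; field names are illustrative only.
raw = ('{"executed": 12, "requested_limit": 25, '
       '"trigger": "cron", "executed_at": "2024-01-01T00:00:00Z"}')
resp = json.loads(raw)

# A compact line for cron logs and runbook verification:
print(f"{resp['executed_at']} trigger={resp['trigger']} "
      f"ran {resp['executed']}/{resp['requested_limit']}")

# Zero executions on a cadence that normally runs work can signal
# auth failures or a stuck scheduler (see the resilience tips below).
if resp["executed"] == 0:
    print("ALERT: run-due sweep executed nothing")
```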
Resilience tips (so scheduling doesn't become a failure source)
- Treat the scheduler as idempotent: it's okay if it runs twice; due selection should keep work bounded.
- Log the executed run count and alert if it drops to zero unexpectedly (it can signal auth issues or a stuck scheduler).
- After downtime, call this endpoint repeatedly until you're caught up instead of trying to run everything at once.
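The catch-up tip can be sketched as a loop that keeps triggering bounded sweeps until one reports zero executed runs. `trigger` is assumed to return the executed count from the response; the names and cap are illustrative:

```python
def catch_up(trigger, max_sweeps=50):
    """Drain a backlog with repeated bounded sweeps.

    Stops when a sweep executes nothing (no monitors are due) or when
    `max_sweeps` is hit, so a bug can't turn catch-up into a hot loop.
    """
    for sweep in range(1, max_sweeps + 1):
        executed = trigger()
        print(f"sweep {sweep}: executed {executed}")
        if executed == 0:
            return sweep
    return max_sweeps
```

Each sweep stays capped, so even a long outage is absorbed as a series of normal-sized batches rather than one oversized run.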
When teams usually call this endpoint
- On a scheduler/cron cadence.
- From admin UI "run due now" controls.
- During catch-up after downtime.
Keep it bounded
If domain monitoring reliability matters, batch-based due processing is the right architecture. This endpoint gives you that behavior directly.