Webhook Alert Notifications

The difference between "we had monitoring" and "we handled this well" is often routing.

Webhook notifications are a strong default for domain monitoring because they let you integrate with the tooling your team already trusts: incident management, alert aggregation, on-call, and internal dashboards.

If critical DNS failures go to the wrong channel, the signal gets lost. If low-priority alerts go everywhere, teams tune out. Routing is where monitoring either becomes reliable or becomes noise.

MailSlurp webhook events showing delivery status and response codes

This endpoint gives you structured routing with severity thresholds so alert flow matches operational reality.

What this solves (in plain terms)

Domain monitoring tends to catch failures that are easy to miss until they hurt deliverability:

  • SPF regressions (missing includes, record too long, accidental deletion)
  • DMARC changes (policy weakened, reporting disabled, alignment issues)
  • MX instability (records removed, misconfigured priorities, intermittent resolution)

You want different handling for each severity level:

  • Critical: someone wakes up
  • High or medium: someone investigates quickly during working hours
  • Low: someone sees it in a daily summary

Webhooks are the easiest way to express that. You can fan out, dedupe, and route to multiple tools without hard-coding integrations into your monitoring service.

Webhook vs chat vs email

Use the channel that matches how your team actually reacts:

  • Webhook: best for PagerDuty/Opsgenie, incident pipelines, custom dashboards, and deduping.
  • Slack/Teams: best for visibility and triage, worse for guaranteed response.
  • Email: best for audit trails and low-volume notifications, worst for urgent response.

A common pattern is webhook for critical, chat for high and medium, and optional email for summaries.
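Expressed as code, that split is just a small routing table. The severity names, channel labels, and URLs below are placeholders for your own tooling, not part of any monitoring API:

```javascript
// Map alert severity to a delivery channel. All names and URLs here are
// illustrative -- swap in your own pager, chat, and email integrations.
const ROUTES = {
  critical: { channel: "webhook", url: "https://pager.example.com/enqueue" },
  high:     { channel: "chat",    url: "https://chat.example.com/hooks/domain-alerts" },
  medium:   { channel: "chat",    url: "https://chat.example.com/hooks/domain-alerts" },
  low:      { channel: "email",   url: "mailto:ops-summaries@example.com" },
};

function routeFor(severity) {
  // Fail closed: an unknown severity should page a human, not vanish.
  return ROUTES[severity] ?? ROUTES.critical;
}
```

Failing closed on unknown severities is deliberate: a typo in a severity label should create noise, not silence.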

Endpoint and request example
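The exact endpoint, headers, and payload depend on your MailSlurp API version; as a hedged sketch only, registering a webhook sink with a severity threshold might look like the following. The path, header name, and field names here are illustrative assumptions, not the documented API.

```bash
# Hypothetical request shape -- path, header, and payload fields are
# placeholders. Consult the MailSlurp API reference for the real schema.
curl -X POST "https://api.example.com/domain-monitors/{monitorId}/webhooks" \
  -H "x-api-key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://alerts.internal.example.com/hooks/domain-monitoring",
    "severityThreshold": "high"
  }'
```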

Payload design tips for reliable alerting

Even if your webhook destination is a third-party tool, treat your alert pipeline like an API you maintain:

  • Include a stable dedupe key (for example, domain + finding type + time bucket) to avoid alert storms.
  • Include enough context to act without clicking: domain, severity, top finding, and suggested next step.
  • Make retries safe: webhook targets should accept the same event multiple times without creating duplicates.
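The dedupe-key and safe-retry ideas above can be sketched together. The payload field names here are assumptions about your own event schema, not a fixed format:

```javascript
// Build a stable dedupe key so retries and repeated findings collapse
// into one alert. Bucketing the timestamp to the hour means the same
// finding re-fires at most once per hour.
function dedupeKey(event) {
  const hourBucket = Math.floor(event.timestampMs / 3_600_000);
  return `${event.domain}:${event.findingType}:${hourBucket}`;
}

// A receiver can use the key for idempotency: the same event delivered
// twice maps to the same key, so the second delivery is a no-op.
const seen = new Set();
function accept(event) {
  const key = dedupeKey(event);
  if (seen.has(key)) return false; // duplicate -- already alerted
  seen.add(key);
  return true;
}
```

In production the `seen` set would live in shared storage (Redis, a database unique index) rather than process memory, but the key shape is the important part.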

If you're building your own receiver, store events and drive downstream actions from a queue. Don't make the monitoring request thread wait on a cascade of outgoing calls.

Webhook receiver quickstart (Node.js)

This is the simplest shape for a receiver: accept JSON, validate an HMAC signature, enqueue the event.

If you don't control signing headers yet, you can still start with allow-listing by IP, rate limiting, and using an unguessable URL path.

Testing routing (without waiting for a real incident)

Avoid "we'll know if it works when it breaks" alerting:

  1. Create a webhook sink pointed at a staging receiver.
  2. Trigger a manual run on a monitor and confirm the delivery path end-to-end.
  3. Promote the sink to a production receiver and raise the severity threshold after validation.

If your receiver powers multiple alerts, build a small "test event" endpoint so teams can verify on-call routing during runbook drills.
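A "test event" hook can be as small as a function that injects a clearly labeled synthetic payload into the same pipeline real alerts use. The field names below are illustrative assumptions about your own event schema:

```javascript
// Build a synthetic alert for runbook drills. The `test: true` flag lets
// downstream tools tag or auto-resolve it instead of paging for real.
function makeTestEvent(domain) {
  return {
    test: true,
    domain,
    severity: "critical",
    findingType: "TEST_EVENT",
    summary: `Synthetic routing check for ${domain}`,
    timestampMs: Date.now(),
  };
}

// Push the synthetic event through the same dispatch used for real
// events, so the drill exercises the actual delivery path.
function dispatch(event, deliver) {
  return deliver(event); // deliver = your real webhook/chat sender
}
```

The key design choice is reusing the real dispatch path: a test endpoint that short-circuits delivery only proves the test endpoint works.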

A routing model that works

  1. Critical -> pager/webhook/on-call system.
  2. High and medium -> team chat channels.
  3. Low -> optional summary channels.

If you only do one thing

Alert routing should be deliberate, not accidental. This endpoint gives you a clean way to codify that strategy and make webhook notifications part of your domain monitoring posture.