A Simple Cron Job Monitor & Heartbeat Alert System
We generate a unique ID. Your script pings it. We alert you if it doesn't.
A hype-free, direct dead man's switch. Built for developers who hate false positives and bloated observability tools.
How it works
1. PingPug generates a unique HTTP endpoint: https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID.
2. You append a lightweight curl or fetch request to the very end of your background script.
3. If our system doesn't receive that ping before your scheduled deadline, we trigger an immediate escalation alert.
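The pattern above is just one outbound GET at the end of your task. A minimal Python sketch using only the standard library (`YOUR_UNIQUE_ID` is the placeholder for the ID PingPug generates, and `do_backup` stands in for your real job logic):

```python
import urllib.request

PING_URL = "https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID"  # your generated ID

def do_backup() -> None:
    """Placeholder for your actual job logic."""
    pass

def main() -> None:
    do_backup()
    # The ping runs only after the job finished. If do_backup() raises,
    # execution never reaches this line, no ping arrives, and the
    # missed deadline triggers an alert.
    urllib.request.urlopen(PING_URL, timeout=10)
```

Because the ping sits after the work, a crash anywhere in the job silently withholds the heartbeat, which is exactly the signal the monitor is waiting for.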
Dead Man's Switch Setup: 3 Steps to Bulletproof Cron Jobs
1. Create a Ping URL
Generate a unique, secure endpoint. Define your expected interval (e.g., every 24 hours) and a grace period.
2. Append to your Code
Add a simple HTTP GET request to the end of your bash script, Node.js task, or Python pipeline.
3. Get Alerted
If our servers do not receive a heartbeat ping from your script in time, we instantly trigger an Email, Discord, and Telegram alert.
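Conceptually, the alert decision in step 3 is simple arithmetic: a monitor is overdue once the current time passes the last ping plus the expected interval plus the grace period. A hedged sketch of that logic (this illustrates the concept, not PingPug's actual implementation):

```python
from datetime import datetime, timedelta

def is_overdue(last_ping: datetime, interval: timedelta,
               grace: timedelta, now: datetime) -> bool:
    """A monitor is overdue once `now` passes last_ping + interval + grace."""
    return now > last_ping + interval + grace

# Example: a daily job (24 h interval) with a 30-minute grace period.
last = datetime(2024, 5, 1, 2, 0)
print(is_overdue(last, timedelta(hours=24), timedelta(minutes=30),
                 datetime(2024, 5, 2, 2, 15)))   # still within grace -> False
print(is_overdue(last, timedelta(hours=24), timedelta(minutes=30),
                 datetime(2024, 5, 2, 3, 0)))    # past the deadline -> True
```

The grace period absorbs normal jitter (a backup that takes ten extra minutes) so you only get paged for genuine misses.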
Works With Any Language: The Ultimate Server Monitoring Alternative
No heavy SDKs or dependencies required. Just standard HTTP requests.
cURL (Bash)
curl -m 10 --retry 3 https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID

Python
requests.get('https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID', timeout=10)

Node.js
await fetch('https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID', { signal: AbortSignal.timeout(10000) });

Go
client := http.Client{Timeout: 10 * time.Second}; client.Get("https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID")

Ruby
require 'net/http'; require 'uri'; Net::HTTP.get(URI('https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID'))

PHP
file_get_contents('https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID', false, stream_context_create(['http' => ['timeout' => 10]]));

Java
HttpClient.newBuilder().connectTimeout(Duration.ofSeconds(10)).build().send(HttpRequest.newBuilder().uri(URI.create("https://pingpug.xyz/api/ping/YOUR_UNIQUE_ID")).build(), HttpResponse.BodyHandlers.discarding());

The Anatomy of a Silent Failure: Why Traditional Server Monitoring Alternatives Miss the Mark
Your database backup script fires up at 11:00 PM on a Friday. Halfway through the pg_dump execution, your server’s /tmp directory runs out of disk space. The script aborts, leaving behind a corrupted, 0-byte SQL file. But the server itself hasn't crashed. You log off for the weekend, completely unaware. It isn't until Tuesday morning, when a bad deployment forces you to restore from a backup, that you realize your safety net has been gone for days.
This is the nightmare of the silent failure.
Traditional uptime monitors (like UptimeRobot or Pingdom) are designed to check if your front door is open. They send a synthetic HTTP GET request to your website every few minutes. As long as your Nginx or Node.js server replies with a 200 OK status, your dashboard glows green and reports 100% uptime. For basic server monitoring, this is fine. It guarantees that your load balancer is routing traffic and your main web process is alive. But modern applications are far more complex than just a web proxy serving HTTP responses.
Traditional pinging is entirely blind to the invisible, asynchronous background tasks that actually run your business. Your website can be perfectly healthy, serving user traffic without skipping a beat, while your backend infrastructure is secretly on fire due to:
- OOM (Out of Memory) Kills: Your server's RAM spikes while a cron job parses a massive CSV file. The Linux OOM killer silently terminates the background worker to protect the rest of the system. The web server stays up, so standard uptime checks still pass, but your daily data pipeline is effectively dead. Even complex observability tools often overlook this if the primary application container hasn't stopped responding to HTTP health checks.
- Silent API Rate Limits: Your nightly data scraper hits a strict rate limit from a third-party provider like Stripe or AWS. It receives a 429 Too Many Requests response. The script aborts early to respect the rate limit, but does so without throwing a system-wide, fatal exception. This means no critical error logs are pushed to your main alerting channels, like Sentry or Slack.
- Full Disks and Expired Tokens: Unrotated log files quietly consume your remaining disk space, or an OAuth refresh token used by your email queue silently expires. The scheduled cron job attempts to spin up, instantly hits a fatal permission or IO error, and exits before executing any of its actual business logic.
- Stale Cache and Orphaned Processes: A poorly configured worker node gets caught in an infinite retry loop, or a database locking issue causes a queue processor to hang indefinitely. It quietly eats CPU cycles without ever completing its job. Unless you have a dedicated cron heartbeat strategy, you will remain blissfully unaware of the bottleneck until angry customers complain about missing reports or unsent notifications.
When background scripts fail, they don't take the rest of your architecture down with them. They fail in the dark. They don't generate the massive 500 error spikes that immediately trigger PagerDuty alarms. Instead, they leave behind insidious problems: stale analytics, unsent onboarding emails, expired SSL certificates, and unbilled subscription renewals. You don't necessarily need heavier, wildly expensive observability tools featuring complex distributed tracing just to catch this. You simply need a reliable way to verify that a specific code path actually reached its logical conclusion.
This is where PingPug flips the traditional monitoring model upside down using the concept of a dead man's switch. Instead of constantly pinging your server from the outside and guessing if things are operating smoothly, PingPug waits patiently for your code to actively report in.
By appending a simple HTTP request to the end of your scripts, you create a verified cron heartbeat. If your software doesn't explicitly call home to say "I successfully finished the job" within your expected, pre-defined timeframe, we instantly trigger a high-priority Email, Discord, and Telegram alert to wake you up.
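One practical detail when wiring up a heartbeat: the ping itself should never be able to crash the job it is watching. A defensive helper might look like this sketch (standard library only; the URL argument is whatever ping endpoint you were issued):

```python
import urllib.request

def send_heartbeat(url: str, timeout: float = 10.0) -> bool:
    """Best-effort heartbeat ping.

    Swallows network errors and non-2xx responses so that monitoring
    code can never abort the job it exists to watch. Returns True only
    when the ping demonstrably succeeded.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        # Covers DNS failures, refused connections, timeouts, and
        # HTTP errors (urllib raises subclasses of OSError for all).
        return False
```

If a transient network blip drops the ping, the worst case is a false alert, which is far preferable to a monitoring exception aborting a real backup mid-run.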
Stop trusting 200 OK. Start monitoring what actually matters.
Frequently Asked Questions
How does PingPug's URL generation work?
PingPug generates a cryptographically secure unique ID for each of your monitors. You append this ID to our API endpoint (https://pingpug.xyz/api/ping/YOUR_ID) and make a standard HTTP GET request at the end of your script.
What happens if a cron job fails silently?
If PingPug doesn't receive a ping at your unique URL within the expected timeframe and grace period, it instantly triggers an alert to your Email, Discord, or Telegram.
Why I Built PingPug: Rethinking Cron Job Alerting for Developers
Hi, I'm Denis, the developer behind PingPug.
A few months ago, I was running a SaaS project and everything looked perfect. My server was up, the landing page was fast, and my standard uptime monitors were glowing green. Then, I checked my Stripe dashboard.
A background cron job responsible for syncing subscription renewals had crashed days earlier due to an unhandled third-party API timeout. Because the main web application was still serving 200 OK responses, none of my traditional monitoring tools caught it. My code was failing silently, and I didn't find out until a customer actually emailed me to ask why their account hadn't renewed.
"I realized that monitoring servers isn't enough; you have to monitor the invisible background tasks that actually run the business."
I needed a simple "dead man's switch" to alert me if my scripts didn't finish. But everything on the market was either a bloated enterprise observability platform, or required installing heavy, language-specific SDKs.
So, I built PingPug. It does one thing perfectly: it listens for a heartbeat from your code, and screams if it goes quiet. No bloat, no complex dashboards. Just absolute peace of mind for solo developers and indie hackers.