Table of Contents
- Introduction
- Understanding Heroku R14 and R15 Errors
- R14 – Memory Quota Exceeded
- R15 – Memory Quota Hard Limit Reached
- Do Heroku Webhooks Include R14 or R15 Events?
- How MetricFire Surfaces the Signals Behind R14 and R15 Errors
- Key Metrics to Monitor
- Recommended Approach
- Why This Works Better Than Waiting for R14 Errors
- How MetricFire Helps Teams Avoid R14/R15 Errors Entirely
- High-resolution memory monitoring
- Grafana dashboards with annotations
- Smart alerting
- Correlation with logs, CPU, throughput, latency
- Guidance on right-sizing dynos
- Conclusion: Stop Reacting to R14/R15 Errors, Just Prevent Them
How to Detect, Alert, and Resolve Memory Issues Before They Cause Downtime
When applications scale on Heroku, memory-related issues are among the most common, and most frustrating, sources of instability. Two of the most notorious culprits are the R14 (Memory Quota Exceeded) and R15 (Memory Quota Hard Limit) errors.
Introduction
Recently, a customer asked us:
“Can we surface R14 or R15 errors as annotations in Grafana? And are these errors included in Heroku’s webhook events?”
This sparked a valuable conversation about how Heroku surfaces memory information and how MetricFire provides much deeper visibility than the raw Heroku platform can.
In this post, we’ll break down:
- What R14 and R15 errors mean
- Why teams struggle to catch them early
- Whether Heroku exposes these errors through webhooks
- How to monitor and alert on memory usage with MetricFire
- How MetricFire can resolve R14/R15 issues using better observability
MetricFire's Hosted Graphite platform can analyze your system's performance and troubleshoot errors.
Book a demo with our team for more detailed information about MetricFire and how to integrate it with your system.
Sign up for a MetricFire free trial to get started with hosted Grafana dashboards.
Understanding Heroku R14 and R15 Errors
R14 – Memory Quota Exceeded
Heroku assigns each dyno a specific memory allowance. When your process exceeds that quota, Heroku logs an R14 warning; the dyno keeps running, but it begins paging to swap and slows down.
Symptoms:
- Gradual performance degradation
- Increased swap usage
- Occasional request timeouts
R15 – Memory Quota Hard Limit Reached
R15 is the severe version of R14 — the dyno exceeds the absolute maximum, and Heroku forcibly kills the process.
Symptoms:
- Immediate dyno crash
- Lost in-flight requests
- Application downtime
Because these errors often appear only in logs (not dashboards), many teams catch them after customers experience issues.
Do Heroku Webhooks Include R14 or R15 Events?
Short answer: No.
Heroku webhooks do not include detailed memory exception events like R14 or R15.
They trigger on:
- Dyno lifecycle events
- Deployment activity
- Domain changes
- Build events
But not memory violations.
This means teams relying solely on Heroku’s webhooks or application logs have a blind spot around early-warning memory signals.
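One stopgap, before adding a metrics pipeline, is to watch the platform log stream itself: R14 and R15 violations do appear there as heroku[...] log lines. Below is a minimal sketch of a drain-side filter; the drain wiring is omitted, and the line format follows Heroku's documented error-code output:

```python
import re

# Heroku memory errors appear in platform logs as lines like:
# "2024-05-01T12:00:00+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)"
R14_R15 = re.compile(r"heroku\[(?P<dyno>[\w.]+)\]: Error (?P<code>R1[45])")

def find_memory_errors(log_lines):
    """Return (dyno, error_code) tuples for any R14/R15 entries."""
    hits = []
    for line in log_lines:
        m = R14_R15.search(line)
        if m:
            hits.append((m.group("dyno"), m.group("code")))
    return hits

sample = [
    "2024-05-01T12:00:00+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)",
    "2024-05-01T12:00:05+00:00 app[web.1]: Completed 200 OK",
]
print(find_memory_errors(sample))  # [('web.1', 'R14')]
```

This only tells you an error has already happened, which is exactly the limitation the rest of this post addresses.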
And that’s exactly where MetricFire comes in.
How MetricFire Surfaces the Signals Behind R14 and R15 Errors
Use the web.*.memory.memory_rss and memory_total metrics in your Hosted Graphite (MetricFire) account to spot dynos whose memory usage climbs above expected thresholds.
MetricFire pulls these metrics directly from Heroku’s runtime at high resolution to give real visibility into memory behavior before an R14 or R15 occurs.
Key Metrics to Monitor
- web.*.memory.memory_rss: real memory usage per dyno; the earliest signal of an upcoming R14
- memory_total: combines RSS and swap; closely tracks threshold violations
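Heroku emits these values through its log-runtime-metrics log lines (sample#memory_rss, sample#memory_total, reported in MB). As a rough sketch of how one such line maps onto Graphite-style metric paths; the exact source and dyno fields in your logs may vary:

```python
import re

# Matches fields like "sample#memory_rss=470.25MB" in a
# log-runtime-metrics line; values are reported in megabytes.
SAMPLE_RE = re.compile(r"sample#(?P<name>memory_\w+)=(?P<value>[\d.]+)MB")
SOURCE_RE = re.compile(r"source=(?P<source>[\w.]+)")

def to_graphite_metrics(line):
    """Map one log-runtime-metrics line to (metric_path, value_mb) pairs."""
    source = SOURCE_RE.search(line).group("source")  # e.g. "web.1"
    return [(f"{source}.memory.{m.group('name')}", float(m.group("value")))
            for m in SAMPLE_RE.finditer(line)]

line = ("source=web.1 dyno=heroku.123.abc "
        "sample#memory_total=498.50MB sample#memory_rss=470.25MB")
print(to_graphite_metrics(line))
# [('web.1.memory.memory_total', 498.5), ('web.1.memory.memory_rss', 470.25)]
```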
Recommended Approach
- Create alerts when memory usage approaches 85–90% of your dyno’s quota
- Set Grafana annotations to mark when thresholds are crossed
- Visualize long-term memory growth to detect leaks or runaway processes
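The first step above can be sketched as a simple classifier. The 85% and 90% cutoffs mirror the guidance here and should be tuned per app:

```python
def memory_alert_level(memory_total_mb, quota_mb, warn=0.85, crit=0.90):
    """Classify current usage against the dyno quota.

    warn/crit mirror the 85-90% guidance above; adjust per app.
    """
    pct = memory_total_mb / quota_mb
    if pct >= crit:
        return "critical"   # an R14 is likely imminent
    if pct >= warn:
        return "warning"    # start investigating growth now
    return "ok"

# Standard-1X example with a 512 MB quota:
print(memory_alert_level(440, 512))  # 440/512 ≈ 0.86 → "warning"
```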
Because MetricFire integrates seamlessly with Grafana, you can annotate memory spikes, dyno restarts, and alert triggers directly in your dashboards.
If you share your dyno plan (Standard-1X, Standard-2X, Performance-M, etc.), MetricFire can help you set precise thresholds.
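As a rough illustration, the 85–90% guidance translates into concrete megabyte thresholds per plan. The quota numbers below reflect Heroku's published dyno specs at the time of writing; verify them against the current docs before relying on them:

```python
# Approximate RAM quotas (MB) per Heroku's published dyno specs;
# confirm current values in Heroku's documentation.
DYNO_QUOTA_MB = {
    "standard-1x": 512,
    "standard-2x": 1024,
    "performance-m": 2560,
    "performance-l": 14336,
}

def alert_thresholds(plan, warn=0.85, crit=0.90):
    """Return (warning_mb, critical_mb) alert levels for a dyno plan."""
    quota = DYNO_QUOTA_MB[plan]
    return round(quota * warn, 1), round(quota * crit, 1)

print(alert_thresholds("standard-1x"))    # (435.2, 460.8)
print(alert_thresholds("performance-m"))  # (2176.0, 2304.0)
```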
Why This Works Better Than Waiting for R14 Errors
Heroku only reports R14/R15 after the violation happens.
MetricFire lets you detect:
- Memory creep
- Leaks
- Unbounded forks
- Increasing RSS after deploys
- Background jobs that exceed limits
This transforms R14/R15 from mysterious failures into predictable, preventable events.
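Memory creep, the first signal in that list, can be caught with even a crude heuristic over recent samples. A sketch, with sampling and alert delivery left out:

```python
def looks_like_leak(samples_mb, min_growth_mb=1.0):
    """Flag steady memory creep: every consecutive sample grows,
    and total growth over the window exceeds min_growth_mb."""
    if len(samples_mb) < 2:
        return False
    rising = all(b > a for a, b in zip(samples_mb, samples_mb[1:]))
    return rising and (samples_mb[-1] - samples_mb[0]) >= min_growth_mb

rss = [410.2, 412.8, 415.1, 419.6, 424.0]  # MB, sampled each minute
print(looks_like_leak(rss))  # True
```

In practice a Grafana alert on the trend of memory_rss does the same job continuously, without custom code.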
How MetricFire Helps Teams Avoid R14/R15 Errors Entirely
MetricFire provides several advantages Heroku alone can’t:
High-resolution memory monitoring
Get second-level insight into RSS, swap, and total memory usage.
Grafana dashboards with annotations
Mark deploys, dyno restarts, memory spikes, and regressions visually.
Smart alerting
Alerts fire before you hit the Heroku limit — not after your app crashes.
Correlation with logs, CPU, throughput, latency
See how memory usage changes with:
- traffic
- code pushes
- dyno changes
- background jobs
Guidance on right-sizing dynos
MetricFire helps determine whether the issue is:
- a memory leak
- an insufficient dyno size
- a misbehaving worker
- a library or process consuming unexpected memory
Conclusion: Stop Reacting to R14/R15 Errors, Just Prevent Them
R14 and R15 errors don't need to be inevitable. With MetricFire, you can:
- Proactively monitor your dyno memory usage
- Alert before hitting dangerous thresholds
- Visualize issues with Grafana annotations
- Resolve the root cause instead of firefighting logs