Ops Briefing Surface

Production Reliability Dashboard

Generated 2026-04-11 13:31, covering 2026-04-04 07:00 to 2026-04-11 07:00, from Pingdom checks, Slack #_alerts_prod, and AWS SNS alerts.

Sources: Pingdom customer checks, Slack alert families, AWS alarm emails

Email-confirmed customer incidents: 0 (Pingdom down/slow events confirmed by inbox alerts; observed in Pingdom: 0)
Impacted services: 5 (mapped from Slack and Pingdom evidence)
AWS alarms in ALARM: 0 (still alarming at window end)
Latest observed signal: 2026-04-11 03:30 (most recent cross-source activity)
Executive summary

What needs attention

Bottom line: the only critical signal this window is at the application level. Pingdom and AWS channels are quiet; Slack shows one critical alert family (TraefikServiceHighErrorRate on core-grafana-80) plus four warning-level active items.

Pingdom customer impact

External signal
No critical · Active: 0 · Total seen: 0

No Pingdom checks were available in this window.

No active issue listed in this category.

Slack impacted services

Application signal
1 critical · Active: 5 · Total seen: 5

1 critical and 4 non-critical active items.

AWS alarms

Infrastructure signal
No critical · Active: 0 · Total seen: 0

No AWS alarm emails were captured in this window.

No active issue listed in this category.

What to do next

  1. Next: Treat the critical service core-grafana-80 as the primary application investigation set.

     Use TraefikServiceHighErrorRate as the leading alert category, reproduce the failing paths on core-grafana-80, compare against the latest deploy or config change, and do not close the item until both error rate and latency flatten.

     Executive recommendation
  2. Then: Reduce the operational drag on update-recurenta by separating repeated symptom alerts from the underlying workload failure.

     Either eliminate the recurrent fault or retune the alert once the failure mode is understood.

     Executive recommendation
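As a concrete illustration of the second step, here is a minimal Python sketch of separating repeated symptom alerts from one-off signals by counting events per (service, alert family). The event tuples are made up for illustration (their names mirror this report's Slack evidence, but they are not a real feed):

```python
from collections import Counter

# Hypothetical event tuples: (timestamp, alert_family, impacted_service).
# Illustrative data only, not pulled from a live alert stream.
events = [
    ("2026-04-11 03:30", "KubeJobFailed", "update-recurenta"),
    ("2026-04-10 03:31", "KubeJobFailed", "update-recurenta"),
    ("2026-04-09 03:20", "KubeJobNotCompleted", "update-recurenta"),
    ("2026-04-06 00:44", "TraefikServiceHighErrorRate", "core-grafana-80"),
]

def drag_by_service(events):
    """Count events per (service, alert family): pairs with high repeat
    counts are the candidates for retuning or root-causing first."""
    counts = Counter((svc, fam) for _, fam, svc in events)
    return sorted(counts.items(), key=lambda kv: -kv[1])

for (svc, fam), n in drag_by_service(events):
    print(f"{svc}: {fam} x{n}")
```

On the real data above, update-recurenta's 56 KubeJobFailed/KubeJobNotCompleted mentions would dominate such a ranking, which is exactly the "operational drag" the recommendation targets.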
Global Evidence Explorer

Report-wide charts and tables stay here, separate from the active investigation scope.

Application + Infrastructure Alerts by Day

Pingdom latency + downtime by source

Customer View

Pingdom Checks

Pingdom Check | Status | Events | Downtime | Last Seen | Likely Services | Correlated Evidence

Pingdom rows show externally visible signal first. The correlated evidence column helps tie the failing check back to services, Slack alert families, or AWS alarms when those links exist.

Application View

Slack Impacted Service / Resource View

This view attributes alerts to the workload or resource named in the alert text. Grafana, Loki, and Tempo are treated as observability components and are excluded when a more specific impacted target is also present.

Impacted Service / Resource | Highest Severity | Count | Last Seen | Status | Top Alert Types | Discussion Signal | Latest Thread Note
core-grafana-80 | Critical | 2 | 2026-04-06 00:44 | Seen this week | TraefikServiceHighErrorRate (2) | None | No thread note
update-recurenta (grouped: 7 variants, 56 variant mentions, 7 active) | Warning | 56 | 2026-04-11 03:30 | Seen today | KubeJobFailed (44), KubeJobNotCompleted (12) | None | No thread note
ai-api | Warning | 3 | 2026-04-10 17:25 | Seen today | TraefikServiceHighLatency (3) | None | No thread note
metrics-server | Warning | 15 | 2026-04-06 17:34 | Seen this week | KubeAggregatedAPIDown (15) | None | No thread note
grafana | Warning | 6 | 2026-04-08 02:53 | Seen this week | KubeHpaMaxedOut (6) | None | No thread note
Evidence

Slack Alert Families

Alert | Severity | Count | Last Seen | Status | Threads | Top Impacted Services | Discussion Signal | Latest Thread Note
TraefikServiceHighErrorRate | Critical | 2 | 2026-04-06 00:44 | Seen this week | 0 | core-grafana-80 (2) | None |
KubeJobFailed | Warning | 44 | 2026-04-11 03:30 | Seen today | 0 | update-recurenta (44) | None |
TraefikServiceHighLatency | Warning | 3 | 2026-04-10 17:25 | Seen today | 0 | ai-api (3) | None |
KubeJobNotCompleted | Warning | 12 | 2026-04-09 03:20 | Recent (72h) | 0 | update-recurenta (12) | None |
KubeAggregatedAPIDown | Warning | 15 | 2026-04-06 17:34 | Seen this week | 0 | metrics-server (15) | None |
KubeHpaMaxedOut | Warning | 6 | 2026-04-08 02:53 | Seen this week | 0 | grafana (6) | None |

Status is heuristic. Slack rarely posts explicit resolutions, so “Seen today” or “Recent” means the alert family still appeared in production recently, not that it is definitely unresolved.
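One plausible way such labels could be derived is by bucketing each family's last-seen timestamp against the report window end. This is a sketch under assumed thresholds (24h/72h), not the report's actual implementation:

```python
from datetime import datetime, timedelta

WINDOW_END = datetime(2026, 4, 11, 7, 0)  # report window end, from the header

def status_label(last_seen, now=WINDOW_END):
    """Map a last-seen timestamp to a heuristic status label.
    The 24h and 72h thresholds are assumptions for illustration."""
    age = now - last_seen
    if age <= timedelta(hours=24):
        return "Seen today"
    if age <= timedelta(hours=72):
        return "Recent (72h)"
    return "Seen this week"

print(status_label(datetime(2026, 4, 11, 3, 30)))  # KubeJobFailed
print(status_label(datetime(2026, 4, 9, 3, 20)))   # KubeJobNotCompleted
```

Applied to the Last Seen values in the table above, these thresholds reproduce every status shown, which is why they are a reasonable guess at the heuristic.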

AWS Email Alarm Families

AWS Alarm | Emails | ALARM | OK | State Flips | First Seen | Last Seen | Latest State | Status

“Flapping, latest OK” means the most recent email was an OK, but the alarm toggled repeatedly and is still a reliability concern.
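A hedged sketch of how that classification might be computed from an alarm's emailed state history (the flip threshold of 3 is an assumption; the report does not state its cutoff):

```python
def classify_alarm(states):
    """Summarise a sequence of AWS alarm state emails, oldest first.
    'Flapping, latest OK' means repeated ALARM/OK toggles ending in OK."""
    flips = sum(1 for a, b in zip(states, states[1:]) if a != b)
    if states[-1] == "ALARM":
        return "Still in ALARM"
    if flips >= 3:  # assumed threshold for "toggled repeatedly"
        return "Flapping, latest OK"
    return "Resolved, latest OK"

print(classify_alarm(["ALARM", "OK", "ALARM", "OK"]))  # → Flapping, latest OK
```

Counting state flips rather than just reading the latest email is what keeps a noisy alarm visible as a reliability concern even when its final state is OK.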

Global Discussion-Derived Signal

Thread Date | Alert | Severity | Services | Signal | Key Notes