We Built a Slackbot That Detects Delivery Bottlenecks

Earlier this year, we were supporting a mid-size B2B SaaS client with a distributed engineering team working across 3 time zones. They were growing fast, shipping features weekly, but had hit a ceiling with delivery predictability. Standups were filled with status reporting, PMs were pinging devs for updates, and no one had a clear picture of where work was stuck.

We had already helped them streamline some CI/CD pipelines and improve backlog grooming, but visibility remained a blind spot. And with 20+ active epics, that blind spot was costing time and confidence.

The Problem

Velocity wasn’t the issue; clarity was. Work was getting done, but no one could reliably answer:

  • Where are things slowing down?

  • Are we blocked on reviews, deployment, or business decisions?

  • Are some squads more stuck than others?

Jira dashboards existed but were noisy. Status updates in Slack were manual, inconsistent, and reactive. The team needed insight at the moment a problem occurred, not two days later during the sprint review.

The Task

We set out to build something simple: a Slack-native way to surface delivery friction in real time. Not a full analytics dashboard. Not a project management replacement. Just a lightweight, internal tool that told us when something was getting stuck, and why.

The goal wasn’t automation for automation’s sake. It was awareness.

The Actions We Took

1. Started with signals, not software

Before writing a line of code, we made a list of simple signals that often hinted at delivery friction:

  • PRs open for more than 48 hours

  • Stories in "In Progress" for more than 5 days

  • Tasks marked "Done" but not deployed

  • Tickets reopened or bounced between QA and Dev multiple times

We validated these signals against recent sprint history, and the pattern held: bottlenecks left trails.
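To make this concrete, here is a minimal sketch of how signals like these could be expressed as simple predicates over Jira and GitHub data. The types and field names below (PullRequest, Ticket, qaBounces, and so on) are illustrative assumptions, not the client's actual schema.

// Illustrative sketch: the shapes and field names here are assumptions,
// not the client's real Jira/GitHub schema.
interface PullRequest {
  id: string;
  openedAt: Date;
}

interface Ticket {
  key: string;
  status: string;
  statusSince: Date;   // when the ticket entered its current status
  qaBounces: number;   // how many times it moved QA -> Dev and back
}

const HOUR = 1000 * 60 * 60;

// PRs open for more than 48 hours
const prOpenTooLong = (pr: PullRequest, now = new Date()) =>
  (now.getTime() - pr.openedAt.getTime()) / HOUR > 48;

// Stories sitting "In Progress" for more than 5 days
const inProgressTooLong = (ticket: Ticket, now = new Date()) =>
  ticket.status === "In Progress" &&
  (now.getTime() - ticket.statusSince.getTime()) / (HOUR * 24) > 5;

// Tickets bouncing between QA and Dev
const bouncingBetweenQaAndDev = (ticket: Ticket) => ticket.qaBounces >= 3;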

2. Defined thresholds per squad

Not every team works the same way. Some practiced trunk-based development; others used long-lived feature branches. We met with each squad lead and defined their own “what’s too long?” threshold for each metric.

This small step avoided false positives later and increased team buy-in.
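In practice, this step amounted to a small per-squad configuration that the checks read from. A rough sketch, with squad names and numbers invented for illustration:

// Hypothetical per-squad thresholds; squad names and values are invented.
interface SquadThresholds {
  prOpenHours: number;     // how long a PR may stay open before flagging
  inProgressDays: number;  // how long a story may sit "In Progress"
  maxQaBounces: number;    // QA <-> Dev round-trips before flagging
}

const thresholds: Record<string, SquadThresholds> = {
  // Trunk-based squad: short-lived branches, so a tighter PR window
  payments: { prOpenHours: 24, inProgressDays: 3, maxQaBounces: 2 },
  // Feature-branch squad: longer-lived work, so looser limits
  platform: { prOpenHours: 72, inProgressDays: 7, maxQaBounces: 3 },
};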

3. Built a Slackbot with real, readable alerts

We created a lightweight Node.js service that ran every few hours, pulling data from the client’s Jira and GitHub via their APIs. The Slackbot posted structured alerts to each team’s delivery channel:

🚧 Heads up: 3 PRs have been open > 2 days
🔁 Ticket #ENG-4215 has bounced QA > 3 times
🕐 Ticket #ENG-4170 is in progress 6+ days

We kept alerts brief, human-readable, and non-judgmental. No graphs. No tracking people. Just delivery signals, in context.
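We won’t reproduce the client’s code here, but the core loop looked roughly like the sketch below: fetch open PRs, filter on the squad’s threshold, and post a short message. It assumes the @octokit/rest and @slack/web-api packages; the repo, channel, and environment variable names are placeholders.

import { Octokit } from "@octokit/rest";
import { WebClient } from "@slack/web-api";

// Placeholder credentials and names; nothing here is the client's real setup.
const github = new Octokit({ auth: process.env.GITHUB_TOKEN });
const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

const PR_OPEN_HOURS = 48;

async function checkStalePRs(owner: string, repo: string, channel: string) {
  // List open pull requests for the repo
  const { data: prs } = await github.pulls.list({ owner, repo, state: "open" });

  const now = Date.now();
  const stale = prs.filter(
    (pr) => now - new Date(pr.created_at).getTime() > PR_OPEN_HOURS * 60 * 60 * 1000
  );

  if (stale.length > 0) {
    // One short, human-readable alert per signal: no graphs, no names.
    await slack.chat.postMessage({
      channel,
      text: `🚧 Heads up: ${stale.length} PRs have been open > 2 days`,
    });
  }
}

// Invoked every few hours by a scheduler (cron, a scheduled Lambda, etc.).
checkStalePRs("acme", "billing-service", "#delivery-payments").catch(console.error);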

4. Added opt-in context and feedback

Anyone could emoji-react with ✅ to dismiss a flag, or reply to explain why something was “intentionally stuck” (e.g., waiting on compliance, exec input, etc.). This started helpful team conversations organically, without needing a PM to chase anyone.
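The dismiss flow hangs off Slack’s reaction_added event. A minimal sketch using @slack/bolt, assuming an in-memory set stands in for whatever store actually tracks dismissed flags:

import { App } from "@slack/bolt";

// Placeholder tokens; the real service read these from its environment.
const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Stand-in for persistent storage of dismissed alerts.
const dismissedAlerts = new Set<string>();

app.event("reaction_added", async ({ event }) => {
  // "white_check_mark" is Slack's name for the ✅ emoji.
  if (event.reaction === "white_check_mark" && event.item.type === "message") {
    // Key the dismissal on the alert's channel and message timestamp.
    dismissedAlerts.add(`${event.item.channel}:${event.item.ts}`);
  }
});

(async () => {
  await app.start(Number(process.env.PORT) || 3000);
})();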

5. Monitored and iterated

We rolled it out to one team first. They liked it enough to keep using it. Within 3 weeks, two more squads opted in. Every few days, we adjusted thresholds, cleaned up Jira field mappings, and improved wording based on team feedback. The goal was low-friction, ambient awareness, not alert fatigue.

The Results

We didn’t “fix” delivery, but we made it more observable.

  • Teams started flagging blockers proactively before standups.

  • PMs reduced status pings by ~40% (their estimate).

  • Two stalled PRs were reviewed faster after alerts flagged them in Slack.

  • A recurring QA loop was identified and resolved after a single alert surfaced it.

The bot became part of the team’s rhythm, not as something to “check,” but as something that checked in on them.

Also important: we never called it a productivity tool. It was a pulse check. That framing helped adoption.

Takeaway

You don’t always need dashboards to understand delivery health. Sometimes the best insights come from watching a handful of concrete friction signals and delivering them in the place people already live: Slack.

If you're struggling with visibility, don’t start with a new platform. Start with one alert. Then listen to how the team responds.
