Product · Engineering

Beyond the PR: Why We Built an Analytics Dashboard

Revvu Team · April 7, 2026 · 5 min read

It's Wednesday standup. Someone asks: "Are we writing better code than last month?" You've been using Revvu on every repo for weeks now. There are hundreds of review comments spread across dozens of pull requests — null checks caught, security gaps flagged, edge cases surfaced. You have more data about your codebase than anyone else in the room. And you have absolutely no idea how to answer the question.

That's the moment that started this feature. Each review told us something useful about one pull request — a missing null check here, a security gap there, an edge case that would have bitten someone in production. Individually, every comment was valuable. But collectively, they were just a pile. No single review could answer the question that actually matters: is the codebase getting healthier over time, or are we fixing the same kinds of problems on repeat?

[Image: developer staring at a laptop screen filled with scattered notifications, in moody blue lighting]
Four hundred data points and no answer. The patterns were there — we just couldn't see them.

The question nobody could answer

We talked to early users and kept hearing the same frustration in different words. One engineering lead put it perfectly: "I love the comments, but I can't tell my team whether quality is trending up or down. I just have a vague feeling based on the last few PRs I looked at." A vague feeling. That's what hundreds of detailed, line-level review comments reduced to when there was no way to zoom out. You'd scroll through GitHub notifications, mentally tally up what the bot had been saying, and try to remember whether last month felt worse or better. That's not how engineering decisions should work.

The questions were obvious once we started listing them. Which repos have the most critical findings? Are those findings going down over time, or piling up? Is the bot catching real issues, or just generating noise? Which files keep showing up with the same problems no matter how many times someone fixes them? No individual PR review can answer any of that. You need the view from above.

Six numbers, one answer

The dashboard opens with six metrics, and each one answers a question you'd actually ask out loud. How many PRs got reviewed — is the bot keeping up with the team's pace? What percentage completed without errors — is the system reliable enough to trust? How fast are reviews finishing — fast enough that developers see the feedback before they've moved on to the next task? How many issues are being caught, and how many of those are critical — is it finding things that matter, or nitpicking whitespace? How many comments land on a typical PR — enough to be useful, not so many that people start collapsing every thread? And how many repos are actively connected — is the coverage what you expected?

Every metric shows a trend against the previous period. Looking at the last 30 days? You'll see how that compares to the 30 days before. It's the simplest possible answer to the question from standup: yes, we're getting better — or no, something changed and we should look into it.
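The trend arithmetic is simple to sketch. Here's a minimal illustration in TypeScript, assuming a hypothetical metric snapshot shape (the `PeriodMetric` interface and field names are ours, not Revvu's actual API):

```typescript
// Hypothetical shape: one metric measured over the selected window
// and over the equal-length window immediately before it.
interface PeriodMetric {
  current: number;   // e.g. PRs reviewed in the last 30 days
  previous: number;  // the same metric for the 30 days before that
}

// Percent change shown next to each metric, or null when the
// previous period has no data to compare against.
function trendPercent(m: PeriodMetric): number | null {
  if (m.previous === 0) return null;
  return ((m.current - m.previous) / m.previous) * 100;
}
```

A positive number means "yes, we're getting better" on that metric; `null` just means the comparison window was empty, which is worth showing as "no baseline yet" rather than zero.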

The patterns hiding in your PRs

Below the numbers, four charts show what individual reviews never could. A volume chart plots daily review activity over your selected time range — and the stories it tells are immediately obvious. You can see the quiet stretch when half the team was at a conference, the spike after a big deploy when everyone was pushing fixes, and your team's natural shipping rhythm (turns out most teams push more on Tuesdays than Fridays — who knew). A severity breakdown shows the proportion of critical, warning, suggestion, and nitpick findings. If nitpicks dominate, that's a healthy codebase with mostly minor polish needed. If critical findings are climbing week over week, something upstream needs attention before it compounds.

[Image: abstract data streams converging into distinct warm amber patterns against deep blue darkness]
The difference between seeing individual trees and understanding the forest.

A categories chart surfaces what kinds of issues keep appearing — missing error handling, security gaps, null safety, performance concerns. And a file hotspots list ranks the files that accumulate the most findings, weighted by severity. These are your refactoring candidates. Not the files with the most lines of code, but the ones that generate the most problems no matter how many times someone patches them.
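"Weighted by severity" is the key phrase in that ranking. A sketch of the idea, with made-up weights and a made-up `Finding` shape (Revvu's real weighting may differ):

```typescript
type Severity = "critical" | "warning" | "suggestion" | "nitpick";

// Illustrative weights: a critical finding counts far more toward
// "hotspot" status than a pile of nitpicks.
const WEIGHTS: Record<Severity, number> = {
  critical: 5,
  warning: 3,
  suggestion: 1,
  nitpick: 0.5,
};

interface Finding {
  file: string;
  severity: Severity;
}

// Rank files by the weighted sum of their findings, highest first.
function hotspots(findings: Finding[]): [string, number][] {
  const scores = new Map<string, number>();
  for (const f of findings) {
    scores.set(f.file, (scores.get(f.file) ?? 0) + WEIGHTS[f.severity]);
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}
```

Under this scheme a file with two critical findings outranks a file with three nitpicks, which matches the intent: surface the files generating serious problems, not just noisy ones.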

Your dashboard, your view

Every chart and metric responds to three filters: date range, repository, and review status. A tech lead checking 90-day trends across all repos sees a completely different dashboard than a developer looking at today's activity on one repo — same page, different lens. The filters live in the URL, so "our critical findings for Q1" is a link you can bookmark or drop in Slack. Not a screenshot. Not a spreadsheet someone exported last Tuesday.
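Keeping filters in the URL is a small pattern worth showing. A sketch using the standard `URLSearchParams` API; the parameter names (`range`, `repo`, `status`) and defaults are illustrative assumptions, not Revvu's actual query string:

```typescript
interface Filters {
  range: string;  // e.g. "30d", "90d"
  repo: string;   // repository name, or "all"
  status: string; // review status, or "all"
}

// Serialize the active filters into a shareable query string.
function filtersToQuery(f: Filters): string {
  return new URLSearchParams({
    range: f.range,
    repo: f.repo,
    status: f.status,
  }).toString();
}

// Restore filter state from a URL, falling back to defaults
// for anything missing.
function queryToFilters(qs: string): Filters {
  const p = new URLSearchParams(qs);
  return {
    range: p.get("range") ?? "30d",
    repo: p.get("repo") ?? "all",
    status: p.get("status") ?? "all",
  };
}
```

Because the round trip is lossless, pasting the link into Slack reproduces the exact view, and a bookmarked URL stays live instead of going stale like a screenshot.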

The part that tells you where to look

We've all used dashboards that look impressive in a demo and don't change anyone's behavior in practice. Twelve beautiful charts, no clear takeaway, a weekly meeting to discuss what the charts might mean. We wanted the opposite — a dashboard that does the interpretation for you. At the bottom of the page sits an attention panel that watches for three specific signals: reviews that failed and might need manual follow-up before the PR merges, sudden spikes in critical findings compared to your normal baseline, and repositories where the issue rate is running noticeably higher than your other repos.
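The "sudden spike" signal boils down to comparing today against a recent baseline. A deliberately simple sketch of one such rule; the averaging window and the 2x threshold are assumptions for illustration, not Revvu's actual detection logic:

```typescript
// Flag today's critical-finding count as a spike when it exceeds
// the recent average by a given factor (default 2x).
function isSpike(history: number[], today: number, factor = 2): boolean {
  if (history.length === 0) return false; // no baseline yet
  const baseline = history.reduce((a, b) => a + b, 0) / history.length;
  return today > baseline * factor;
}
```

A real system would likely use something more robust than a plain mean (a rolling median, say, to resist outliers), but the shape of the check is the same: compare against your own normal, not an absolute number.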

[Image: a single bright spotlight cutting through darkness, illuminating one spot on an abstract surface]
Dashboards you have to interpret get ignored. Dashboards that point get used.

It's the difference between "here are some charts, good luck" and "three things need your attention right now." The panel does the interpretation so you can skip straight to action.

From fixing bugs to fixing patterns

Building this changed how we think about Revvu. It started as something that does one thing — review pull requests — and does it well. The dashboard makes it something broader: a system that helps you understand your codebase over time, not just one PR at a time. "This PR has a null safety issue" is useful in the moment. "Your authentication module has had twelve null safety issues in the last month" is a different kind of useful entirely — it tells you that fixing individual occurrences isn't working and the module needs a deeper look. That's the difference between fixing a bug and fixing the pattern that produces bugs. One saves you an hour. The other saves you a quarter.

What's next

We're building toward team-level views so engineering leads can see patterns across an entire organization, not just their own repos. We're also exploring custom alert thresholds — a spike in critical findings means something different for a fast-moving startup than for a mature codebase with strict quality gates. The dashboard is live today. Open it, filter to the repos you care about, bookmark the view, and see what the aggregate tells you that individual reviews couldn't.