Product / Engineering

Teaching the Reviewer: How Revvu Learns Your Team's Conventions

Revvu Team/April 7, 2026/5 min read

The review comment is technically correct. Your team uses dependency injection, and the bot is flagging an unused constructor parameter. You dismiss it. Next week, someone else touches that file. The same comment appears. You dismiss it again. By the third time, you've stopped reading the bot's comments on that file entirely — and now you're missing the ones that actually matter.

This was the most common frustration we heard from early teams. Not that the reviews were bad — they were often right in the general case. But every team has patterns that look wrong to an outsider and are completely intentional. Your custom ORM handles null checks at the framework level. Your error codes are explicit by design, not by accident. The abstraction layer that looks unnecessary exists because a vendor migration burned you two years ago. A generic reviewer doesn't know any of that. A teammate does.

[Image: Developer shaking their head at repeated notifications on a laptop screen, in warm amber lighting]
Technically correct, practically wrong. The most frustrating kind of feedback.

The gap between correct and right

We'd already tuned Revvu to focus on real bugs and skip style preferences. That handled most cases — the majority of teams see zero false positives on a typical PR. But "most" isn't "all," and the remaining cases were the worst kind: the bot confidently flagging something the team had already decided was fine. It wasn't a quality problem. It was a context problem. The bot had never worked on your codebase. It didn't know your conventions, your trade-offs, or the decisions you'd already made and moved on from.

And there was no way to tell it. You could dismiss the comment, but the knowledge died with that PR. Next time someone touched the same code, the same suggestion came back like a colleague who forgets every conversation overnight. We needed a way for teams to teach the bot what "right" looks like in their world — without filling out forms, writing config files, or opening an admin panel.

Just reply

The fix turned out to be the most natural thing we could build. If the bot flags something your team does intentionally, reply to the comment explaining why. "This is intentional — we use dependency injection here for testability." Or "Our ORM handles null checks at the query layer, this is expected." That's it. No special syntax. No admin panels. No YAML files. You reply to the bot the same way you'd reply to a junior engineer who doesn't know the codebase yet.

[Image: Two speech bubbles made of warm amber and cool blue light hovering in darkness]
Reply like you'd reply to a colleague. The bot learns from the conversation.

What happens behind the reply

When you reply, Revvu extracts the insight from your message and stores it as a per-repository learning. The next time it reviews a PR on that repo, it checks its memory for anything relevant to the files and patterns in the change. If it learned that your team intentionally uses a certain pattern, it adjusts accordingly — not by suppressing the entire category of findings, but by understanding the specific context where the pattern is valid.
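One way to picture the mechanics (a minimal sketch — the names and matching logic here are illustrative, not Revvu's actual implementation): each reply becomes a stored note scoped to a repo and a path pattern, and at review time the bot pulls back only the notes relevant to the files in the diff.

```python
from dataclasses import dataclass
import fnmatch

@dataclass
class Learning:
    repo: str        # which repository this convention belongs to
    path_glob: str   # where in the codebase the convention applies
    note: str        # the insight extracted from the team's reply

# Hypothetical in-memory store; a real system would persist this.
store: list[Learning] = []

def teach(repo: str, path_glob: str, note: str) -> None:
    store.append(Learning(repo, path_glob, note))

def relevant_learnings(repo: str, changed_files: list[str]) -> list[str]:
    """Return only the notes that apply to files touched by this PR."""
    return [
        l.note for l in store
        if l.repo == repo
        and any(fnmatch.fnmatch(f, l.path_glob) for f in changed_files)
    ]

teach("acme/api", "src/services/*.py",
      "Constructor params may look unused: DI framework injects them for tests.")

# A PR touching a service file retrieves the convention; one that
# doesn't touch those paths retrieves nothing.
print(relevant_learnings("acme/api", ["src/services/billing.py"]))
```

The point of the relevance check is that a learning adjusts behavior only in the context where the team said the pattern is valid, rather than muting an entire category of findings everywhere.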

Every repo is different

Your backend service might enforce strict null-checking conventions while your frontend allows more flexibility. A convention that's essential in one repo might be irrelevant in another. That's why learnings are scoped to individual repositories — they never leak across. Each repo builds up its own set of conventions over time, reflecting how that specific codebase is meant to work. Your payment service learns about your payment patterns. Your frontend learns about your component conventions. They stay separate because they are separate.
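That isolation can be sketched in a few lines (again, illustrative names, not the real storage layer): learnings live in a map keyed by repository, and a review only ever reads its own repo's bucket, so nothing can cross over.

```python
from collections import defaultdict

# Hypothetical per-repo store: each repository gets its own bucket.
learnings_by_repo: defaultdict[str, list[str]] = defaultdict(list)

def record(repo: str, note: str) -> None:
    learnings_by_repo[repo].append(note)

def notes_for(repo: str) -> list[str]:
    # Reads exactly one bucket -- conventions never leak across repos.
    return list(learnings_by_repo[repo])

record("acme/payments", "Retry wrappers around gateway calls are intentional.")
record("acme/frontend", "Components may exceed 200 lines; we split by route, not size.")

print(notes_for("acme/payments"))  # only the payments conventions
```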

See what it knows

We didn't want the bot's learned conventions to be invisible. Every repository has a Learnings tab in the dashboard that shows exactly what Revvu has picked up from your team's replies. You can see what it knows, when it learned it, and which conversation it came from. If something looks wrong — maybe someone taught it something too broad, or a convention has changed since — you know exactly what to correct. No guessing, no hidden behavior.
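One plausible shape for what that tab surfaces (the field names and values here are assumptions, not Revvu's schema): each learning carries the insight plus its provenance, which is what makes a stale or overly broad entry easy to trace back and correct.

```python
from dataclasses import dataclass

@dataclass
class LearningEntry:
    note: str           # what the bot knows
    learned_at: str     # when it learned it (ISO date)
    source_thread: str  # which conversation it came from

# Hypothetical entries, as a Learnings tab might list them.
entries = [
    LearningEntry(
        note="ORM applies null checks at the query layer.",
        learned_at="2026-03-12",
        source_thread="review thread on models/user.py",
    ),
]

for e in entries:
    print(f"{e.learned_at}  {e.note}  (from {e.source_thread})")
```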

[Image: Glass prism splitting white light into an ordered spectrum against a deep black background]
Everything the bot has learned is visible. No guessing what it knows.

Safety stays non-negotiable

Learning adjusts style and pattern preferences — it never lowers the safety bar. If someone replies "we don't care about SQL injection here," Revvu will still flag SQL injection. Genuine bugs, security vulnerabilities, and data loss risks are always reported regardless of what the bot has learned. The line between "convention" and "safety" is hardcoded, not learned. Your team can shape how the bot thinks about patterns. You can't teach it to ignore danger.
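The split might look something like this (category names are illustrative): findings in safety categories bypass the learned filter entirely, so no reply can suppress them.

```python
# Hypothetical finding filter: learnings can only mute style/pattern
# categories; safety categories are hardcoded and always reported.
SAFETY_CATEGORIES = {"sql_injection", "secret_leak", "data_loss"}

def should_report(category: str, muted_by_learnings: set[str]) -> bool:
    if category in SAFETY_CATEGORIES:
        return True  # non-negotiable, regardless of what was taught
    return category not in muted_by_learnings

# Someone tried to teach the bot to ignore SQL injection:
muted = {"unused_di_param", "sql_injection"}

print(should_report("unused_di_param", muted))  # learned convention: muted
print(should_report("sql_injection", muted))    # still flagged
```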

The reviews that remain

The most interesting thing about this feature is that it compounds. Early on, the bot might flag a handful of things your team does intentionally. After a few reviews and a few replies, those false positives disappear. The comments that remain are increasingly the ones that actually matter — real bugs, real security gaps, real problems that someone should look at before the code ships. The signal-to-noise ratio doesn't just improve once. It keeps improving with every reply, every review, every week your team uses it.

That's the direction we've always wanted to go: not just a tool that reviews your code, but one that learns how your team writes code. We're building a reviewer that gets better the longer it works with you — and all you have to do is talk to it.