engineering · architecture

Inside the Engine: How Revvu Reviews Your Diffs

Revvu Team · April 7, 2026 · 5 min read

When someone asks how Revvu works, the honest answer fits in one sentence: "GitHub tells us about a PR, we analyze the code, and we post comments." That's true. It also skips the interesting parts — the ten discrete steps between "PR opened" and "comments on your screen," the trade-offs behind each one, and the outage that taught us which steps actually matter.

This post is a look inside the engine. Some of it gets technical, but we think it's worth sharing — partly because the design decisions might be useful if you're building something similar, and partly because when a tool reviews your code, you should know how it thinks.

From webhook to review comment in under 90 seconds. Here's every step in between.

The handoff

When a pull request is opened or updated on a repo where Revvu is installed, GitHub sends us a webhook notification. The first thing we do — before anything else — is verify the request is actually from GitHub and not someone spoofing a payload. Once verified, we put the job in a queue and respond immediately. The whole handoff takes under 50 milliseconds.

We don't do any analysis during this step. No API calls, no database writes, no thinking. Just "got it, we'll handle it." This is intentional. Keeping the handoff fast and empty means we stay reliable even during traffic spikes — a Monday morning surge of PRs doesn't slow down the acknowledgment for any of them.
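The handoff can be sketched in a few lines. This is a simplified illustration, not Revvu's actual handler: the queue here is a plain list standing in for a durable job queue, and the handler returns bare HTTP status codes instead of real responses. GitHub does sign webhook deliveries with an HMAC-SHA256 digest in the `X-Hub-Signature-256` header, which is what the verification step checks.

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against our computed HMAC."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing attacks on the signature.
    return hmac.compare_digest(expected, signature_header)

def handle_webhook(secret: bytes, payload: bytes, signature_header: str, queue: list) -> int:
    """Verify, enqueue, acknowledge. No analysis, no API calls, no DB writes."""
    if not verify_github_signature(secret, payload, signature_header):
        return 401  # reject spoofed payloads before doing any work
    queue.append(payload)  # hand off to the pipeline; a real queue would be durable
    return 202  # "got it, we'll handle it"
```

Because the handler does nothing but verify and enqueue, its latency is dominated by one HMAC computation and one queue write, which is how the acknowledgment stays fast under load.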

Why every step runs alone

The review pipeline touches several external services — GitHub's API, our AI model, and our database. Any of them can be slow, rate-limited, or temporarily down. If we tried to do everything in a single request, one failure anywhere would mean re-running the entire analysis from scratch. Instead, each step in the pipeline runs independently and can be retried on its own. If posting comments fails because of a rate limit, we retry just that step — we don't re-analyze the code. If the AI model times out on a large change, we retry the analysis without re-fetching the diff from GitHub. Every step's output is the next step's input, and nothing is wasted when something goes wrong.
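The retry behavior described above can be sketched as a small runner, where each step is a function whose output feeds the next step. This is an illustrative simplification (no backoff, no persistence between steps), but it shows the key property: a retry re-runs only the failing step, never the ones before it.

```python
def run_step_with_retry(step, input_value, max_attempts=3):
    """Run one pipeline step, retrying only this step on failure."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return step(input_value)
        except Exception as err:
            # Retry just this step; earlier steps' outputs are untouched.
            last_error = err
    raise last_error

def run_pipeline(steps, initial_input):
    """Every step's output is the next step's input."""
    value = initial_input
    for step in steps:
        value = run_step_with_retry(step, value)
    return value
```

If the analysis step times out twice and succeeds on the third attempt, the diff-fetching step before it still runs exactly once — nothing upstream is wasted.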

Each step runs independently. A failure in one doesn't waste the work of others.

The full pipeline

Here's what happens, step by step, from the moment we receive the webhook to when you see comments on your pull request:

Step 1:   Fetch the code changes from GitHub
Step 2:   Create a status check on the PR (shows "review in progress")
Step 3:   Gather context — who wrote this, who usually works on these files
Step 4:   Check what the bot has learned from past feedback on this repo
Step 5:   Fetch any previous review comments we've already posted
Step 6:   Send everything to the AI model for analysis
Step 7:   Compare new findings with existing ones (what's new, what's fixed?)
Step 8:   Post new comments and mark fixed issues as resolved
Step 9:   Update the PR description with a review summary
Step 10:  Mark the status check as complete and save to the database
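In code, the ten steps above amount to an ordered list the runner walks through. The step names here are illustrative, not Revvu's internal identifiers:

```python
# Hypothetical names mirroring the ten steps above; in a real runner each
# name maps to a function taking the accumulated review state as input.
PIPELINE = [
    "fetch_diff",            # 1. pull the code changes from GitHub
    "create_status_check",   # 2. "review in progress" badge on the PR
    "gather_context",        # 3. authorship and file ownership
    "load_learned_rules",    # 4. feedback the bot has learned for this repo
    "fetch_prior_comments",  # 5. comments we've already posted
    "run_analysis",          # 6. send everything to the AI model
    "diff_findings",         # 7. what's new vs. what's fixed
    "post_comments",         # 8. post new comments, resolve fixed ones
    "update_summary",        # 9. review summary in the PR description
    "finalize",              # 10. complete the check, save to the database
]
```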

Context makes it useful

A raw list of code changes tells you what changed but not why it matters. We enrich every review with additional context before the analysis step: who wrote this change, who usually maintains these files, what the full file looks like beyond just the changed lines, and what the bot has previously learned about this repository's conventions. This is the difference between a comment that says "this might be null" and one that says "this might be null, and since this function handles payment processing, that could cause a silent billing error." Same finding, completely different usefulness. Context turns generic observations into something a developer can act on immediately.
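The enrichment step can be pictured as assembling everything the model needs into one bundle before analysis. All field names below are illustrative, not Revvu's actual schema:

```python
def enrich(diff: dict, repo_facts: dict) -> dict:
    """Bundle a raw diff with the context that makes findings actionable.

    `diff` and `repo_facts` are hypothetical shapes: the diff carries the
    author and changed path; repo_facts carries per-file maintainers, full
    file contents, and conventions the bot has learned for this repo.
    """
    path = diff.get("path")
    return {
        "diff": diff,
        "author": diff.get("author"),
        "usual_maintainers": repo_facts.get("maintainers", {}).get(path, []),
        "full_file": repo_facts.get("files", {}).get(path, ""),
        "learned_conventions": repo_facts.get("conventions", []),
    }
```

With the full file and the repo's conventions in the bundle, the model can say *why* a nullable value matters in a payment path, not just *that* it's nullable.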

Not everything is critical

Not every step in the pipeline is equally important. The status check — the green or yellow badge on your PR — is nice to have, but if GitHub's Checks API has an issue, we don't want the entire review to fail because of a badge. So we treat non-essential steps as optional: if they fail, the review continues without them. Your comments still get posted. Your review still gets saved. The dashboard still gets updated.

We learned this the hard way. An early incident caused every review in the queue to fail because a non-essential API was down. The actual analysis was fine. The comments were ready to post. But the pipeline was configured to treat every step as mandatory, so everything stopped. Now the critical path is protected, and optional features degrade gracefully instead of taking down the whole system.
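One way to express this — a sketch, not Revvu's actual runner — is to tag each step as critical or optional and let only critical failures abort the review:

```python
def run_review(steps):
    """Run (name, fn, critical) steps in order.

    Optional steps may fail without killing the review; critical
    failures still abort, since the review can't proceed without them.
    """
    completed, skipped = [], []
    for name, fn, critical in steps:
        try:
            fn()
            completed.append(name)
        except Exception:
            if critical:
                raise  # the review itself cannot proceed
            skipped.append(name)  # degrade gracefully: no badge, comments still post
    return completed, skipped
```

Under this scheme, a Checks API outage skips the badge but the comments still go out — the failure mode the early incident taught us to design for.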

Thirty seconds to done

Most reviews complete in 30 to 90 seconds. The AI analysis step takes the bulk of that time — that's where the actual understanding happens. Everything else is optimized to stay out of the way: the webhook handoff is instant, the context enrichment runs in parallel where possible, and comment posting is batched. For most teams, the review is posted before anyone has opened the PR in their browser. You push, you context-switch to the next thing, and when you come back there's already feedback waiting. That's the goal — reviews that arrive fast enough to be useful, not fast enough to be impressive.
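The "in parallel where possible" part applies to context enrichment: the lookups are independent of each other, so they don't need to run one after another. A minimal sketch with `asyncio` — the three fetchers here are stand-ins for real API calls, not Revvu's actual functions:

```python
import asyncio

# Hypothetical context lookups; in practice each would be an API or DB call.
async def fetch_authorship():
    return {"author": "dev"}

async def fetch_ownership():
    return {"maintainers": ["alice"]}

async def fetch_conventions():
    return {"conventions": ["use snake_case"]}

async def gather_context():
    """The lookups are independent, so run them concurrently and merge."""
    results = await asyncio.gather(
        fetch_authorship(), fetch_ownership(), fetch_conventions()
    )
    merged = {}
    for part in results:
        merged.update(part)
    return merged
```

With real network calls, the wall-clock cost of enrichment becomes roughly the slowest single lookup rather than the sum of all of them — which is how everything around the AI step stays out of the way.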