While I am a proponent of code reviews, what this article actually describes is a senior engineer mentoring a junior engineer and requiring effective testing to be implemented alongside the feature.
It also shows a broken culture where the other reviewers were neither engaged nor committed to the success of the feature. When a bug escapes, both the author _and_ the reviewer are at fault.
Disagree. It's only the author's fault. We can't expect engineers to write code and also find every last bug in other people's code, especially when PRs can run to thousands of lines.
A broken culture would be expecting engineers to find every last bug in other people's code and blaming them for bugs.
Our code review process involves a reviewer explicitly taking ownership of the PR, and I think that's a reasonable expectation with the right practices around it. A good reviewer will request that a PR containing thousands of lines be broken up, without doing much more than skimming it to make sure it isn't bulked up by generated code, test data, or something similarly benign.
When issues happen, it is almost never just one person's fault.
Just trace the Whys:
1) Why did the bug happen? These lines of code.
2) Why wasn't that caught by tests? The test was broken / didn't exist.
3) How was the code submitted without a test? The reviewer didn't call it out.
Arguing about percentages doesn't matter. What matters is fixing the team process so this doesn't happen again. That means formally documented standards for test coverage, plus education and team-culture building so people understand why it's important.
It's not "just the author's fault". That kind of blame game is toxic to teams.
He's obviously praising himself to land a promotion or another job. I don't think this should be taken seriously on the matter of the effectiveness of code reviews.
> I also pushed for breaking large changes into smaller commits, because it is impossible to do a thorough review otherwise.
I've found this to be key for both giving and receiving feedback. Even breaking a change into small commits is better in a lot of cases, because small commits invite feedback in a way that 1000+ lines don't.
I miss Phabricator from my time at Meta so much that I made this to achieve a Phabricator-like stacked-commit experience via the git CLI: https://pypi.org/project/stacksmith/
This looks really interesting. I've been using a similar tool called spr (https://github.com/spacedentist/spr) for the last six or so months at work. I really like the stacked diff/PR workflow, but spr has a lot of rough edges and I'm on the lookout for a better alternative.
Do you happen to know how your tool compares to spr? How production-ready is it?
Breaking down the size of the change is truly important; otherwise it's easy to miss things, or to dismiss them as little details because you want to avoid blocking the whole change on a "small" thing (which may only seem small because the PR is now huge).
The author describes how the code reviews he gave others were successful, from his own point of view.
But he does back it up with actual facts (as far as we can trust the author to tell the truth): the feature the author gave feedback on shipped without any issues. (The article actually doesn't say whether A was fixed and bug-free before B shipped, but it certainly sounds like B was less stressful to ship.)
It’s also a biased view. The author admits that the feature he was involved in took longer to ship initially. Depending on the environment, this can be an anti-pattern; don’t we say “release early, release often”?
In the same vein, the author says that the other feature took several releases to become stable. Were the other releases purely bug fixes, or did they give the engineer a chance to get early feedback and incorporate it into the feature?
It’s clear that the author prefers a slow and careful approach, and he judges “success” and “failure” by that metric. It sometimes is the right approach. It sometimes isn’t.
Don't worry, he also asked his own reviewee, who said the reviews were helpful and in no way obnoxious.
My statement still stands true regardless. Not worried.
Only somewhat related, but I'd pay decent money to have access to the whole Piper/CitC/Critique/Code Search stack. As much as I've tried to like it, I just don't really like Github's code review tool.
Github's code review tool is uniquely bad. Notably, it presents every comment as blocking and requiring sign-off: even a "Glad someone cleaned this up! <thumbs up emoji>" needs clearing before merge.
It also has some UX snafus that cause reviewers to write a number of comments and then forget to publish any of them, leading to a lot of interactions along the lines of "I thought you were going to review my PR?" "I already did?"
Requiring every comment to be resolved is not a standard part of GitHub’s code review system; that's the "Require conversation resolution before merging" branch protection setting, which your organization has gone out of its way to enable.
Former Googler. I also miss Critique/Gerrit. I've tried a bunch of alternatives, and I like CodeApprove (https://codeapprove.com):
It's great if you have a team that does code reviews. It works less well for reviewing contributions from external contributors on an open-source project, as the contributor likely just wants to get their PR merged and doesn't want to learn a special reviewing tool.
No affiliation, just a happy customer.
"lookup table of similar technology and services to help ex-googlers survive the real world"
Do you know if this works with Azure DevOps? I hate their UI. At this point I'd love to use Github. But for some reason the higher ups want us to be on Azure DevOps.
Shameless plug, but we built http://CodePeer.com to bring Critique-like features to everyone else. Take it for a spin if you like!
Shameless plug but since you asked ... CodeApprove (https://codeapprove.com) is probably the closest thing you can get to Critique on GitHub. It doesn't help with the Piper/CitC/Code Search parts though, and I agree those were excellent.
They are not antagonistic in nature! Where did they get this idea?
I would like to offer a review of this article: naming people and projects with single characters makes the content difficult to follow.
I stopped reading after that opening paragraph. I don't know of anyone I take seriously who thinks that code reviews are bad practice or pure red tape.
I’ve never worked somewhere where mandatory PR reviews didn’t turn into mostly red tape.
The pressure to get work done faster always wins out over other concerns in the long term, and you end up with largely surface-level speed reviews that don’t catch much of anything. At best they tend to enforce taste.
In 20 years across many companies and thousands of PRs, I’ve never had a reviewer catch a single major bug (a bug that would have required declaring an incident) that would have otherwise gone out. I've pushed a few major bugs to production, but they always made it through review.
I’ve been doing this since well before everyone was using GitHub and requiring PR reviews for every merge. I haven’t noticed a significant uptick in software quality since the switch.
The cost is high enough that I’d like to see some hard evidence justifying it. It just seems to be something people who have never done any different take on faith.
> In 20 years across many companies and thousands of PRs, I’ve never had a reviewer catch a single major bug
Good thing reviews aren't just about catching bugs.
Ask 5 people about the purpose of mandatory PR reviews and you’ll get 6 answers.
However, catching bugs is always going to be at or near the top of the list, so clearly it’s at least partially about catching bugs.
I’d argue that catching bugs, along with ticking a compliance checkbox (which is only there because someone thinks reviews catch bugs and malicious code), are the two primary reasons the business side of the company cares about or requires code reviews in the first place.
I know an idi*t who claimed, in a code review, that there was a memory leak just by looking at the code (turned out there wasn't). Clearly it was a bullying attempt to stop someone else's progress. Unfortunately it was successful because of people like the ones downvoting you.
Then you are very lucky. I have definitely met those sorts. I’m even aware of teams that collectively push to the main branch under the promise that they’ll probably look at each other’s code later, maybe.
I saw no proof that those later reviews ever happened.
Mandatory code review definitely creates red tape. Every place I've been with mandatory code review, I always see people "looking for a stamp".
At my current job, code review requirements are set on a per-folder basis, with many folders not requiring a review. People ask for a review because they want a review (or, sometimes, they don't; for example, I don't ask someone to review every quick one-liner in the systems I am an expert in).
You would be surprised! I have encountered the attitude that code reviews are a waste of time. It's not common, and I have never seen this attitude "win" across a team/company but it definitely exists. Some engineers are just overconfident, they believe they could fix everything if everyone would just let them code.
Same.
> Code reviews have a bad rap: they are antagonistic in nature and, sometimes, pure red tape.
I wonder if folks know that this is a job? What are you gonna do, not do it? Cry at night because you forgot for the hundredth time to put token strings in translation files instead of hard-coding them? Come on.
A few days ago there was an article on HN about how engineers abuse code reviews. It's just a tool; the outcome differs based on who's reviewing. If you think code review is intrinsically good, then I'm glad I'm not working with you either.
Thankfully, I'm not working with you.
AI is pretty good at code reviews. For reference, I use ChatGPT and Gemini. It's very helpful.
An AI tool that could convert large scale changes into a set of small commits would be amazing.
Are you using AI for public contributions? Or a private repo? How does it deal with things like project conventions? e.g. "Currency should be represented using a Money class", "This method should use our utility class", etc. Do you upload the entire project's source code?
The review you get depends on what questions you ask. For example: I wrote a class that wrapped a std::vector<T> as a private data member, and it pointed out that it would be nice if I implemented support for accessing the iterators and the array subscripts. It made these remarks based on how I was using the object. I have also uploaded an entire repo to Gemini (as a single file) and asked for broad and fine-grained reviews. It's really quite good.
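To make that concrete, here's a minimal sketch of the kind of wrapper described above (my own reconstruction; the class name and members are hypothetical, not the actual code that was reviewed), with the iterator and subscript access the review suggested added:

    #include <cstddef>
    #include <vector>

    // Hypothetical reconstruction: a std::vector<T> held as a private
    // member, with the subscript and iterator access the review
    // suggested forwarded through to the underlying vector.
    template <typename T>
    class Sequence {
    public:
        void push_back(const T& v) { data_.push_back(v); }
        std::size_t size() const { return data_.size(); }

        // Array-subscript access.
        T& operator[](std::size_t i) { return data_[i]; }
        const T& operator[](std::size_t i) const { return data_[i]; }

        // Iterator access, so range-for and <algorithm> work on the wrapper.
        auto begin() { return data_.begin(); }
        auto end() { return data_.end(); }
        auto begin() const { return data_.begin(); }
        auto end() const { return data_.end(); }

    private:
        std::vector<T> data_;
    };

Forwarding begin()/end() and operator[] is what lets callers use range-for loops and standard algorithms directly on the wrapper, which is presumably the usage pattern the model picked up on.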
In my experience, mandatory code reviews seldom work very well. Usually it's either stamping PRs, or you run the constant risk of someone just blocking stuff in an unreasonable manner.
For the most part, code reviews should be optional: if you want a review from someone, tag them on your PR and ask. If someone you didn't tag spots something after your PR has landed, you can always figure it out and still ship a fix.
I'll grant an exception for maybe the super-fragile parts of the code, but ideally you can refactor, build tests, or do something else that doesn't require a blocking code review to land changes.