> In your first code review passes, pick small and simple nits. Variable names are a bit unclear; comments have typos in.
> Wait for the developer to fix those, and then drop your bombshell: there’s a much more fundamental problem with the patch, that needs a chunk of it to be completely rewritten – which means throwing away a lot of the nitpick fixes you’ve already made the developer do in that part of the patch.
I've done this more than once, and felt pretty bad about it each time. The problem is that I'm pretty good at spotting the little issues while I'm reading through some code, but I'm not as good at identifying the larger design issues until I'm thinking about it later.
This is why I like GitHub's feature to add comments before posting the full review. You can do a round of nitpick comments, take some time away to let the rest settle, then come back and re-review with the big picture in mind. You can then add those comments as well, remove some of the nitpicks if they aren't relevant/important enough anymore, etc.
To be fair, it's often difficult to identify the important problems when the code is very messy. I don't know if that's just my hole-ridden brain, but sometimes after I clean up a piece of code it unlocks a new level of understanding and lets me spot issues that seemed completely invisible beforehand. It's why when programming I put a lot of effort into code formatting and structure (e.g. function extraction and inlining, and other transformations that don't alter behavior).
I had a coworker who would leave nitpicks and the bombshell in the same review round. Like 30 comments, 29 of which are irrelevant if you read the last one.
Couldn't explain the problem to him... I just learned to read through his full review first before starting to address anything. Otherwise he was one of the best people I worked with, and the feedback was valuable.
What really is the alternative? Just post the big bombshell thing and not express your opinion about stuff? Often it's the expressing opinion part that's more important than the actual nits.
If you take the time to spell out generic complaints, it feels bad to throw that away. I think that a good compromise is to have a system that allows a summary of the review to be posted with the comments, so you can mention the important thing up front.
It isn't even about feeling bad; it's that if they rewrite it they're going to make the same mistakes again. If someone habitually misspells a word and you think it should be spelled correctly (or whatever the fix is), then that needs to get pointed out so it can be taken into account when they do the big rewrite.
> This whole thing won't work because of X and you need to pretty much rewrite all of this.
>
> Rename foo to bar.
> Typo.
> Missing whitespace.
So yeah... just don't post the rest? I often go back to earlier comment drafts and edit or delete them when I realize, deeper into the review, that they are no longer relevant. The review comments should not be a stream of consciousness but properly written feedback.
> The review comments should not be a stream of consciousness but properly written feedback.
My advice would be to not take it so personally and assume good intentions from your reviewers.
If someone gave me feedback that caused a big course correction then I’d slap my forehead for missing it, thank them, then move on with my life. The extra feedback of other issues is a bonus. Asking them to spend more time reviewing their review is wasted effort.
In my mind it's important to have both, as long as the main issue is made clear and nitpicks are truly nitpicks (and thus acceptable in limited quantity so as not to block a change).
Otherwise you'd have a "bombshell" requiring a refactor, then on the second review pass a handful of nitpicks that could have been addressed during the initial refactor.
I find nitpicks to be not only useless to the author, but to actively add noise that either masks any real issues you want noted or just gives the author meaningless work. They seem more like a tonic for the reviewer, a way to say something to indicate they did look at the code, and just result in wasted time.
It depends on the nitpick. If it's something like a spelling error, then that's worth fixing.
If it's formatting or something that can be linted, then that should be automated.
If it's code style, I generally let that slide if it's just a difference in opinion, or I'll try to have a discussion with my coworker about our preferences (though I would still approve the PR). If it's genuinely bad code or a junior dev then it makes sense to provide feedback.
> If it's something like a spelling error, then that's worth fixing.
My take on this is that if it's worth fixing it's not a nitpick. Small problems are still problems.
Automated stuff, if not automated, is also an issue that should be fixed.
Teaching opportunities are also not nitpicks, as teaching is important.
For me the acid test for a nitpick is that if I wouldn't add a commit just to fix it, it is not worth a comment. I'd add a commit to fix a spelling error, but not to tweak some variable name. This is assuming a real commit and not something you're planning on rebasing away.
If you define "nitpick" as "something that's not worth fixing" then by definition nitpicks are worthless.
I've heard the term nitpick usually used to mean "small comment", "my preference", or "very minor".
> Teaching opportunities are also not nitpicks, as teaching is important.
Preference usually isn't a teaching opportunity, it's just a minor disagreement. It's trying to make your code look like I (the reviewer) wrote it. For example maybe you like more terse names than I do. We can argue about every point of personal preference, but that's not a great use of our time.
> If you define "nitpick" as "something that's not worth fixing" then by definition nitpicks are worthless.
> I've heard the term nitpick usually used to mean "small comment", "my preference", or "very minor".
Those are all things that are not worth fixing/changing, so that is just a different way of restating it. I.e. "small comments", "my preference", and "very minor" things are all worthless in a PR. They might be interesting when hanging out and chatting, but for a productive workflow they are just noise.
Same with preferences... if you are just relaying a preference, it's not worth it. If you are teaching a better idiom or similar, then it might be. This is not hard and fast; just that in general, most "nitpick" types of PR comments are a net loss.
Nitpicks are explicitly declared as such, and since everyone participating in the PR knows they aren't meant to be blocking at all, people are encouraged to ignore them if they take more time than they are worth.
No one's perfect; it's OK to make mistakes, so long as you identify that it's a mistake to yourself, and apologize for your mistake to others.
It's only when a reviewer does this (or the various other sins listed here, or a few more besides) without acknowledging it/displaying empathy/working to help unblock that I start giving my resume a refresh.
Focus all your effort on catching errors that a linter and auto-formatter could catch. Shrug when questioned about architectural decisions; what really matters is that this field was PascalCase when it should be camelCase.
When questioned why this isn't caught automatically by the build process, you have a few stock responses (Bonus: These may well be true!):
- Linters are too slow, and a fast development experience is crucial
- Linting the entire existing code base would effectively destroy "git blame" as a useful tool.
- Different teams can't agree on whether to column-align wrapped function arguments, so they've agreed to disagree on this.
So instead of automating away the mental overhead of checking for spaces vs tabs, or LF vs CRLF, make sure you go over the PR with your own mental linter and nitpick every violation, even the suggestion-level ones, before you even consider trying to work out the validity of the submission.
> Linters are too slow, and a fast development experience is crucial
It's been, literally, 15 years since I've used an editor without linting built in. If someone tells me this, then it just means they don't gaf, and there will be bigger problems with their code.
> Linting the entire existing code base would effectively destroy "git blame" as a useful tool.
Put a '.git-blame-ignore-revs' file at the root of your project with commit hashes (and # comments). GitHub and other popular git tools will consider it.
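For anyone setting this up for the first time, a minimal sketch (the hash below is a placeholder for one of your real reformatting commits):

```
# .git-blame-ignore-revs
# Reformatted entire repo with the autoformatter
a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
```

GitHub picks the file up automatically by that name; for command-line git you point blame at it once with `git config blame.ignoreRevsFile .git-blame-ignore-revs` (the option exists in git 2.23 and later).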
We have this, but we have a handful of devs who can't be bothered to install the hooks locally, so they deal with their PRs by fixing one linter error at a time and pushing a new commit to see what fails in CI.
I'm glad I'm not the only one who has to put up with fools like this. It's not only the insane git log; the compute for those actions isn't free either. Just run the damn pre-commit hooks, it literally fixes the issues for you!
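For reference, assuming the team uses the pre-commit framework (an assumption on my part; any hook manager works similarly), the setup the hold-outs are skipping is roughly one config file plus one command per clone. A minimal, hypothetical example:

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace   # rewrites the files, doesn't just flag them
      - id: end-of-file-fixer
```

Then `pre-commit install` once, and the fixes happen locally on every commit instead of one CI round trip at a time.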
The most productive team I worked on just ignores this shit in code reviews.
Like, we will suggest stuff if the formatting is obviously wrong in a way the author probably didn't intend, but we're not going to hold up a code review on a nit. If there's one variable that's PascalCase instead of camelCase or something, that's not going to stop the code from going in if it's good code from an algorithmic and documentation perspective. We just vaguely try to be consistent in each file and act like adults.
Where we really agreed was on value. We would insist that code that could be tested had tests. It was a C++ code base, so we insisted that things be as fast as reasonable, suggesting proper algorithms and data access patterns. And in general we actually cared about bugs and making them shallow.
Another point of agreement was that an automatic code formatter almost always made it look worse than having a few nits here and there. Anyone who mandates the things that really don't matter is just getting in the way.
I think it's kind of important to be consistent about case, because if not, people will trip when typing; it's annoying and might even cause bugs.
One thing I agree with, though: it's a waste of time to argue about it. That problem is best solved with automatic tooling/linting, in the editor, git hooks, and CI.
But if you and your colleagues don't have a problem with it, then I guess it's fine.
> This particular patch seems especially important to the developer who submitted it. (Maybe they said that outright, as part of their argument trying to persuade you to take the patch. Or maybe you just read between the lines.)
> But this patch isn’t especially vital to you – so you’re in a position of power! Now you can hold the change they need hostage until they do lots of extra tangentially related work, which doesn’t really need to be in the same commit, but which is important to you.
The reviewer outright denies this occurs, and it occurs every time.
For a commit changing the documentation, he will then want the entire documentation redone, including pages that aren't being changed.
And will refuse to approve it until every issue he's brought up has been addressed.
I've just stopped asking them for PR reviews, and ask other teams to review my PRs.
I've been guilty of this too, and generally I'll write something along the lines of "this would be nice to do, but it's out of scope unless you want to take it on".
My issue is: if you want other things done, make issues for them. Don't try to address every single thing you don't like that is remotely near the code I am trying to get this fix in for.
Like, he has great feedback. It's not the quality of the feedback or content, it's that it has nothing to do with what I currently need urgently implemented to fix a problem.
I've had a platform lead constantly try ransoming small refactorings into feature PRs, because he found out that's the only way of achieving continuous architectural improvements. (Medical product, so the process was rigid.)
No, but I don't go to management to handle personal issues. I just handle them. When you go to management, you have to accept management's decision. When you just make decisions, you get autonomy.
The death of a thousand round trips has an inverse, where the reviewer points out everything they can find, and then the author fixes just one thing. You might think the author is motivated to minimize round trips, but maybe they'll pull the reverse ransom note next, where they threaten to find a different reviewer or subvert the code review somehow, because "it's an emergency and the patch needs to go in now".
This totally happened to me all the time in prior jobs with cultures of low quality and lack of ownership.
When I review code, I'm always careful to prefix low-importance things with "[suggestion]" or "[nitpick]". These are small things like typos in comments that would not block me from approving the review.
So people who know me know that I don't flag things frivolously. However, I would often review someone's code (someone I know and respect, mind you) with 3-5 changes that MUST be made before I would approve it. Because not fixing it would either be a bug, or would make life unnecessarily difficult later on. They would fix ONE of the things and I would have to point out that they didn't address any of the other things. And then they would go back and fix one more, and the cycle continues until they FINALLY get all the things addressed.
Only once have I gotten worn down and approved substandard code because I was tired of all the back and forth. In that case, it was a student whose paid internship was almost up and he had to have it deployed before he left. (We never ended up using that project...)
I 100% would have closed the MR with a note akin to "If you don't want to work you can just say so". That stuff doesn't fly at all in my teams; it's a disservice to everyone else and it's insulting.
I see a lot of these comments boiling code review like this down to "picky reviewers" or "lazy submitters", but I have first-hand experience with something that doesn't fit either of these explanations, and it's when code review like this becomes used as a political weapon.
In my case, another engineer that was gunning for my position as tech lead started to use several of the tactics listed in OP's post to delay and bog down my pull requests (where previously we had kept a good relationship) while they were getting their PRs rubber-stamped by a junior on our team while I was asleep (America vs. European hours).
Management noticed and permanently soured on me, and I was never able to recover from the damage done. I didn't fully realize what was happening until it was too late. I was laid off less than a month later and the other engineer was promoted. It was an extremely valuable lesson, though, and I got a better job within a month anyway, but it's something I keep a close eye on now.
Another one I see is the insistent nitpicker who never accepts comments on their own PRs.
FWIW:
1. Use a formatting tool and a linter in the build chain = zero formatting bullshit nitpicks (see the sketch after this list).
2. Ask questions wherever possible rather than criticising. It's kinder and also less embarrassing when you thought you saw a bug and it wasn't.
3. This is the reviewer's chance to stay up to date with the codebase.
4. This is the reviewee's chance to share blame for mistakes. Nobody who reviewed can crap on them for a bug discovered later. People who couldn't be bothered to review can't complain either.
5. Make positive comments whenever you can honestly do so - it reduces stress and fosters goodwill.
6. People who behave like arseholes in PRs are usually the kind you don't want in your team. i.e. it's a way of detecting such people - see how they use a bit of power.
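On point 1, a sketch of what "in the build chain" can look like, here as a hypothetical GitHub Actions job for a Python codebase (the tool choices are my assumption; substitute your stack's formatter and linter):

```yaml
# .github/workflows/lint.yml (hypothetical)
name: lint
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install black ruff
      - run: black --check .   # fail the build on formatting drift
      - run: ruff check .      # fail the build on lint violations
```

Once a gate like this exists, a formatting comment in review is by definition a tooling gap, not a conversation.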
One thing I also like to do is send a Slack message or leave an extra comment on a PR with my overall feelings/praising anything I can.
Also, the note about the formatter/linter is gold. I'm amazed at how many teams just live with constant nitpick comments on their PRs rather than automating these checks.
Doing a good code review is hard. You cannot do it in one pass, but of course projects are always late, so there is pressure to approve. If you want code reviews to be good you need a policy that the reviewers need to approve it 3 days in a row without changes, but of course management wants to ship it and get on with the next feature.
It is easy to find nits. Most people will notice a spelling error out of the corner of their eye and zoom in on it (I'm a terrible speller and I do this all the time). It is much harder to notice that the code is wrong and has bugs. I've passed many code reviews with a few nits to fix and push, only to realize the next day that there were serious problems with the code.
This is the root of all evil. Artificially created deadlines that turn something that could be resolved next week into "urgent today", just to raise your adrenaline levels so you work 30 minutes more, are what cause a lot of software to end up being a piece of shit.
Software is about deep thinking and comprehensive understanding. Yes, it's all about trade-offs, but there is a point where things can't be any faster.
I think you are missing something: wanted features always exceed available time. You need some form of deadline to force you to think about which features to cut. Nobody wants to cut any feature that could make it in until forced to, and so you are always late.
The deadlines are artificial, but once they exist they force a lot of real scheduling on many others and so they become real. In theory you can change them, but in practice people are depending on you.
Nothing is wrong with what you said, but keep it in context of the above.
GP here. I don't disagree with you; engineering is all about trade-offs, and you can't engineer things to perfection by taking months to solve easy problems. You need progress and efficiency, so sometimes less important problems need to be fixed with less-than-ideal hacks, etc. I agree with this. But having a culture of "everything is due yesterday" and "if it can be done in 6 months, tell the customer it'll be ready in 2 months", just to create stress and increase productivity, lowers engineering quality exponentially more than it increases productivity and expediency.
I agree on one level, but I also know that I'm a person who needs to have some kind of external incentive or else I'll just mire myself in endless rewriting and refactoring; unclear whether that has to be a boss/customer wielding a deadline or if it's enough to have a colleague who leans slightly more to the other end of the practical-idealistic continuum, but I know my best work happens with at least some kind of pressure being applied.
I was recently diagnosed with ADD, and learning that about myself explained a behavior of mine which seems diametrically opposed to yours - the pressure of a deadline makes me panic, and somehow my outlet is bikeshedding.
Give me daily and weekly commitments, and I'll do my most focused and highest quality work.
If ever there is a time to consciously avoid bikeshedding, it is during code review. Easy nits are of proportionally small value. It's not misspelled comments or "not quite the variable name I would have picked" that brings down systems, in general.
One thing I depend on in my reviews is that the programmer has indeed verified that it does what it says it does, preferably via some unit testing. That frees me to consider architecture things and whether there's a lurking bug without having to review whether it works. This has been particularly helpful in our Infrastructure-as-Code reviews. Many such reviews are post-facto; they've already been run, we know it brings up the right infrastructure already, and the review is to make sure it's architecturally heading in the right direction and maybe that tagging standards weren't accidentally violated and other high level, important things, not whether we like the particular name of a subsystem.
Getting this balance is probably the hardest thing about code reviews.
Also, one of the main things in code review is having extra eyes validate that nobody's putting obvious back doors in the code. If you've done that, you've brought some value even if you do nothing else. If a bug gets past the person authoring the code, it's not a huge shock that it'll get by someone who is not in that headspace, doing something else, and has a completely different context loaded. The idea that it wouldn't borders on magical thinking. But at least you don't see any "eval(base64_decode('big long string'))" today. The goal of reviews is just to up your team's game a bit, not to guarantee on their own that no bugs will make it to QA or prod or customers. They're part of the portfolio and shouldn't be treated as anything more.
> If you want code reviews to be good you need a policy that the reviewers need to approve it 3 days in a row without changes
I may be misunderstanding, but it sounds like you’re dealing with very complex PRs/commits.
I think the first problem to solve is get your commits to a reasonable size, so that you don’t need 3 days worth of approvals and multiple passes for review. (That sounds insane to me!)
My typical commits are maybe 100-200 lines (many of them much smaller), so needing 3 days of approvals would be way overkill. And a single pass is plenty.
Wait, your commits are hundreds of lines, or your MRs/PRs?
I like to keep commits as atomic as I can, not always succeeding, but often cutting commits to leave the repo in a working state after each one. Only rarely does a commit reach 100 lines of changes. Perhaps those are renamings of variables that are used in many, many places? Or perhaps some WIP commit, just making sure work is committed at the end of the working hours?
A single pass often cannot work if there is even a single improvement to be made, since the implementation of that improvement needs to be reviewed again, unless it is literally a code suggestion from the reviewer.
For me, changes are as small as possible; however, it depends on what I'm working on. Currently I'm migrating a 1.5 MLOC monolith codebase onto a new runtime, which involves significant refactoring. You don't want to change many things at once, so one refactoring, for example, might be a commit which removes a pattern that will no longer exist in the new runtime. That can easily be 2,000 files changed for a total of 10,000 lines. The commit is atomic, does one thing, and would not make sense to split up.
With that said, even though it's the same atomic change in each file, the reviewer would still need to check them all.
My policy on code reviews is: my brain is a bad interpreter. Either I run the code and see that it works, I trust the tests (see #1 for why this doesn't work), or I rubber-stamp it and move on. Which I do mostly boils down to the risk/impact of a bug in that part of the code, and I've been around long enough to have developed the intuition to usually be right.
What kind of load is code review bearing in your process that you want 3 days of active work to approve? Some mixture of automated validations, acceptance testing, and manual regression testing should cover whether the code is correct beyond review.
That seems like a level of rigor that only systems with very significant failure cases could sustain and still be built.
Failure is expensive if customers go elsewhere. I'm not in the web world where we can push to production multiple times per day. I work with embedded systems which are not even in range of a cell phone signal in some cases (we are looking at starlink), so updates mean a human needs to physically go out and update the device, and in turn updates are expensive.
> I'm a terrible speller and I do this all the time
Why are you not using a spell-checker in your IDE? I don’t get why people don’t do this, it’s so easy, and it makes you look a lot more professional when you aren’t submitting constant spelling and grammar errors.
FWIW I would never bring it up in code review though, waste of time.
Most of these, maybe all, have a dual pull request antipattern.
> The Ransom Note
A developer submits a patch that's especially important to them. When you point out that additional changes are needed elsewhere to make the component stay logically consistent, you get told that those changes are unrelated to the problem being solved (their problem being solved).
> The Guessing Game
A developer submits a substantial patch. The contents of the patch indicate that the submitter has not understood the design of the code they're modifying. When the reviewer communicates this back to the submitter, the submitter expects the reviewer to drop what they're working on and instead work on designing the feature the submitter needs.
> The Priority Inversion
A large patch has been submitted with a poor explanation of the underlying design. Properly reviewing whether the patch makes sense takes time. The submitter (publicly) complains that they (verifiably) aren't getting responses to their code reviews and that this is holding back feature X.
> The Catch-22
There's a tradeoff to be made, and the decision made by the submitter is far off the mark.
---
The problem usually isn't picky reviewers or sloppy submitters, it's the lack of clear contribution guidelines.
Here are some solutions I've seen in various teams:
For one-off reviews, set the expectation: The developer is asking for feedback, and not permission. He doesn't need to justify to the reviewer why he didn't incorporate all the changes. Don't give people pointless authority over others. This will solve most of the problems in the submission.
For more structured/important reviews, a third party moderator decides on disputed code changes. If you can't have this, then insist on > 1 reviewers and state you'll only make changes if the reviewers have consensus. This often solves style issues.
Changes I would like to see, but have not seen:
Reviewers should not ask questions - they should state concerns. So no "Why didn't you do this via X?". Instead, say "I think doing it via X instead of Y is better because ... ".
And definitely banned: "Why did you solve it this way?"
Asking questions is often the best part of review IME, at least when the overall review culture is healthy and productive. If nothing else, it’s often a good opportunity to add clarifying comments where something isn’t obvious after it was written. And authors receptive to good review, again IME, often agree that added clarity will help those cases.
“Why did you solve it this way?” is also an excellent question in that context. Not for the big “overall approach”, where the common refrain is that design should come before implementation. But sometimes it helps surface smaller areas of complexity that only become obvious in the trenches, and it can be incredibly helpful to socialize those once discovered. (Also often prompting addition of clarifying comments.)
Asking questions is great if you're genuinely confused or need clarification. But if you have a concern, express it with the question so there is context.
> “Why did you solve it this way?”
"Because it solved the problem."
Why are you asking me this question if you have no concerns? If you have some, why aren't you sharing them with me? If you're not going to, then I will give the minimally correct answer to your question.
Conversations are a 2-way street. If you want me to spend a lot of time explaining it, then reciprocate! Anyone can ask open ended questions. It takes a lot more work to answer them than to ask. Make it worth my time to answer them!
Forcing reviewers to put effort into the discussions also solves a lot of the problems in this submission. People aren't going to waste a lot of their own time playing these kinds of games. Make the barrier high enough to prevent 90% of bad actors from acting.
This all seems very situational to me, starting with level of trust/team cohesion/etc. Like, if I’m on a team that’s functioning poorly together, I almost certainly agree with you without any further nuance.
If I’m on a team that’s functioning well, however—taking the example of “why did you solve it this way?”—I might not always know why I’m being asked, but merely being asked is often good enough for me to come up with some hypotheses. If it’s not, there’s a good chance it’s a learning opportunity: maybe something is missing from my toolbelt, or perhaps some unintended consequence escaped my notice. Being prompted to think about those possibilities without bias may help me learn (or reinforce what I’ll learn) by putting me in a position to retrace my steps and look for other gaps in my understanding.
I say all this also acknowledging that I tend to put a lot more effort into review, and communication in a review context, than many of my peers have throughout my career. And sometimes my tendency to over communicate has had the opposite effect from what I intended.
In any case, I am definitely more inclined toward your position in context of other communication challenges!
> I might not always know why I’m being asked, but merely being asked is often good enough for me to come up with some hypotheses. If it’s not, there’s a good chance it’s a learning opportunity: maybe something is missing from my toolbelt, or perhaps some unintended consequence escaped my notice. Being prompted to think about those possibilities without bias may help me learn (or reinforce what I’ll learn) by putting me in a position to retrace my steps and look for other gaps in my understanding.
I'm not at all disagreeing that these things may happen, with emphasis on may. It can all be achieved by the code reviewer saying "Interesting approach. You could have solved it via X instead, which has these extra benefits. Was there a reason you picked Y over X?"
This is a 2-way conversation. In your paragraph above, you're not giving serious enough consideration that it may really be the reviewer who needs to learn something new, and therefore the conversation should reflect that symmetry.
I've often seen (yes, in low functioning teams), the following pattern:
Reviewer: Why did you solve the problem this way (X)?
Me: Because of (outline a simple reason).
Reviewer: But why not Y?
Me: Why Y?
Reviewer: (50% of the time does not give a good answer and merely insists Y is better because of some random article on the Internet).
Note that:
1. The reviewer can't articulate why his method is superior.
2. The developer had to put in a lot more effort in this exchange than the reviewer.
In a better functioning team, it goes like:
Reviewer: Why did you solve the problem this way (X)?
Me: It worked. Why do you ask?
Reviewer: Oh, I thought Y would be better. (If I'm lucky he'll explain why.)
Me: OK. Moving on...?
Reviewer: (Moves on or sticks to the topic but gives more context).
Both of these are "poor" interactions, compared to the reviewer simply stating up front what his concerns are.
> He doesn't need to justify to the reviewer why he didn't incorporate all the changes. Don't give people pointless authority over others. This will solve most of the problems in the submission.
IMO this is a sign of an unhealthy/mistrusting team.
Teams often require code to be reviewed, so, yes, you _are_ asking for permission. If a reviewer offers feedback that you feel is unimportant, you can always ask someone else for approval.
Going behind someone's back like that is obviously unhealthy, so you should instead have a discussion with the reviewer about why something is/isn't important. If you feel like you can't have these conversations or the reviewer isn't taking your thoughts into account, then that's a sign that you aren't aligned.
> Teams often require code to be reviewed, so, yes, you _are_ asking for permission.
You conveniently left out:
> For one-off reviews, set the expectation: The developer is asking for feedback, and not permission.
I'm not speaking about what code reviews are, but what they should be. I've personally seen this policy work. It's not appropriate for all kinds of projects, though.
Furthermore, a nitpick: Requiring a code review does not equate to permission. It just means people are required to solicit feedback.
> If a reviewer offers feedback that you feel is unimportant, you can always ask someone else for approval.
If the team allows for this, then I find it to be totally acceptable. To me, it's the same as:
"a third party moderator decides on disputed code changes. If you can't have this, then insist on > 1 reviewers and state you'll only make changes if the reviewers have consensus."
> Going behind someone's back like that is obviously unhealthy, so you should instead have a discussion with the reviewer about why something is/isn't important.
Agreed, and a solution is:
1. Make code review discussions transparent - at least to management (e.g. visible via a code review tool). This limits the potential for abusive teammates.
2. Don't make them go behind the reviewer's back. Simply invite additional reviewers (or escalate to the moderator). Do it openly so the reviewer is aware.
I never said "Don't have conversations". Conversations are necessary. I'm saying "Don't give the reviewer undue influence without proper checks and balances". If he is engaging in the tactics in the submission, ensure your team culture is such that this is open and can be handled. You can see in the other comments how common these problems are. Expecting most teams to be high trust and well functioning is unrealistic. Simple rules, however, can shift the team closer to that goal.
> If you feel like you can't have these conversations or the reviewer isn't taking your thoughts into account, then that's a sign that you aren't aligned.
Or (and more common), it's a sign that the reviewer is a poor reviewer and/or abusing his power.
Trust is a continuum. After all, the existence of code reviews is merely a formalization of distrust :-)
I think the policies and culture around code review will vary based on the situation (new hire vs seasoned developer, large vs small team size, minor vs major code change, etc). However, I don't think I advocated anything that would be a bad idea even for high trust teams.
The possible exception would be that a code review "not be a request for permission". If you have structured it as a gatekeeping mechanism, simply ensure there is a "referee" or "tie breaker" to smooth/hasten the process. Even with high-trust, well-functioning teams you'll occasionally get pointless ratholing in the middle of a review, and both the reviewer and the developer should have a way to cross that divide. And no, "be mature and talk it through" will not work every time.
As for stating concerns: when is that ever a bad idea? I didn't come up with it, BTW. I learned it from reading books on effective communication. It's a good rule to live by, even for everyday conversation.
I get annoyed when someone ignores my feedback and requests a review from someone else who is known to approve anything. I think doing this is fairly disrespectful and circumvents the point of code review.
Your comment is a good example of what I'm saying:
The problem is not the developer, but the guy who approves everything.
A team that requires code reviews, and has a guy who approves everything, is an unhealthy team.
The developer is asking for a code review not because he wants feedback, but because it is a ritual he has to go through because of team policy. If the team policy changed to "code reviews are for feedback, not permission", then he wouldn't waste your time. He's not the problem - the team's culture/policies is. If the focus was feedback, then you would only get requests from people who want your feedback.
> I get annoyed when someone ignores my feedback
Are you annoyed that he circumvented the purpose of the code review, or that he did not act on your feedback? If the latter, I strongly urge you to change. Acting on feedback should always be optional.
Of course, making code reviews completely optional is a luxury most teams do not have. But one of the reasons they don't have that luxury is that you'll rarely get a high trust team.
Despite this, one can put mitigating protocols with mandatory code reviews to solve some of the problems you're highlighting. If all the code reviews are via Github, and someone else approves the merge when you've left strong feedback suggesting otherwise, the team really needs to have a protocol for handling such scenarios.
I see, I think I'm better understanding what you're getting at, and I think I agree.
> Are you annoyed that he circumvented the purpose of the code review, or that he did not act on your feedback? If the latter, I strongly urge you to change. Acting on feedback should always be optional.
This has only happened to me with a couple of individuals. Usually in these cases I'll leave a comment and it will just never be addressed; the author will request another reviewer and my comment is ignored.
I don't hold PRs hostage and if things go more than one or two rounds of back-and-forth then I'll almost always yield to the author.
I still haven't encountered an environment where PRs aren't either essentially a rubber-stamping because an external audit said they needed to do code reviews, or the death-by-a-thousand-cuts as described in the article.
I'm sure they exist in the working world, but 5 years in across 4 different companies I've yet to encounter it (beyond what I've tried to provide to others in my code reviews on their PRs).
There are two ways I've found success where I've come into a team with these issues:
1. Differentiate critical comments vs opinions/nice to haves. Not every comment needs to be resolved as part of that specific PR, they can be addressed in a follow up after it is merged in a disciplined team.
2. Be okay with reviewers contributing to the PR themselves. Some developers just get too caught up on 'ownership' of code that makes it in, when the team is responsible for it in the end(when you have a blameless/improvement focused culture at least). I don't need the original branch owner to fix a typo or nitpicks when it will take me less time to fix it. I haven't found any code suggestion features that are amazing for this, so sometimes you'll end up with back/forth disagreement and have to rebase a commit out, but most of the time it's fine and overall improves velocity. Basically turning the PR into async pair programming.
Fair. I guess I've found that in my encounters, personally doing everything proper for other's code reviews hasn't inspired change in the other developers to reciprocate.
i.e. I haven't walked into an environment with a lacking PR culture and managed to effect a cultural change (yet). Maybe seniority is a prerequisite?
This is big, particularly the second point. I fix typos and obvious mistakes while reading the code, and I leave "feel free to punt" on any comment which shouldn't block release.
Still, bikeshedding occurs but to a lesser degree and over more substantial issues, like overall module design.
First, part of the benefit of peer review is that two people have seen the code and have familiarity. Even if it's rubber stamping, if someone has actually looked at it, LGTM might be enough.
Second, even if PR only catches something 10% of the time, it catches the most obvious bugs.
When I was an intern 30 years ago, the senior developer working with me had to decide if my code was worth a "formal review" or if just putting it through ordinary code review was enough. We went with ordinary code review, but as part of this I heard about a process, apparently used by the company for important/difficult code, where code reviews were genuinely useful.
In a formal review you print off the code, then put 5-10 other developers in a meeting room to look at it together (no computers in this room, though sometimes you would go back and print off some more code for context). You could do about 10 lines of code per hour this way, and the developers burned out after at most 2 hours per day, so this takes a very long time. However, reports were that this resulted in the most useful reviews/fixes to the code.
I've never seen the above done in practice, just heard about it. Someday I'd like to see it, but I doubt I ever will.
That sounds like a Fagan review. I haven't been involved with one either, but I think some of the forces that drove people to do them have been weakened.
When you are shipping code that you won't be able to update easily (i.e. cheaply), you try to remove as many defects as early in the process as possible. CI/CD and web delivery seem like they would blunt the motivation to go through a Fagan review for most software developed these days.
I would imagine that some form of this still happens in safety-critical systems (or more likely the state of the art has advanced from this).
Every now and then I can provide good feedback when I pull down the branch in question, test it locally, and try to think of a way to break it. Rarely do I have that kind of time.
A code review is not about testing the functionality or finding bugs. It is about sanity-checking the design (although in a well-performing team the design should never come as a surprise), complexity, readability, and test coverage.
You can think of code reviews as an opportunity for knowledge transfer.
... and it has the added benefit that the person it's directed to will actually read your feedback since it would be directly related to work they're doing.
If you find yourself ping-ponging back and forth on the review, waiting 24h for a review that you fix in five minutes and then wait another 24h for re-review, you should treat it like any other asynchronous process that you find yourself wishing was synchronous - stop, get both you and the reviewer in a room, get to approval within the room, or at least to a point where it's clear you have more than five minutes worth of work to get to approval.
Just because your code forge has a code review feature doesn't mean that all your code review behavior must be tracked in it.
Sometimes the issues can come down to miscommunication, which I've found is well addressed by adopting a format for your PR comments to eliminate the ambiguity on what is blocking, what is an optional suggestion, what is a genuine question, etc., such as:
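(One widely used convention is the "Conventional Comments" style, where each comment leads with a label plus an optional blocking/non-blocking decoration. The specific comments below are invented for illustration.)

```
nitpick (non-blocking): we usually spell this `userId`; feel free to punt.
suggestion (non-blocking): extracting this into a helper would simplify the test.
question: does this handle an empty list, or can that never happen here?
issue (blocking): this lock isn't released on the error path.
```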
Let's think about writing for a moment... Many forms of writing heavily benefit from having a clear goal to start, followed by an outline, followed by many iterations of the content.
One analogous challenge with code reviews occurs when the reviewer can't quickly see something like a (a) goal and (b) outline. Without these two, it can make it hard to "navigate" what kind of feedback would be helpful.
a. In many cases, the reviewer and reviewee might think they have the same goal, but this is often subtly or glaringly wrong. Even people in the same organization can serve different masters.
b. Different software cultures have different ideas of planning, outlines, design documents, whatever. Some cultures wing it. Some people keep this kind of thing in their head, but it is hard to quickly share your head with someone else. Without some notion of the contours of the problem, how can you properly assess the code?
I think the list is fine, but if those things happen in reviews a lot, I would argue you either have a cultural problem or simply have programmers who don't pay attention or don't set up guardrails for each other against submitting things that would get caught up in a review like this.
Every decent IDE has an option for setting up some kind of configuration for how code should be named and structured.
I think code reviews should be somewhat pedantic; they should be done with quality in mind and with awareness of future changes.
The patterns listed here are valid, but if they tend to happen more often than feels comfortable for the team, look at the culture of how you work.
A style-guide, linter, and formatter eliminates so many potential nits, but in my experience the teams that need them the most are often the most difficult/impossible to get any buy-in on.
Code reviews can be sped up significantly by pair programming a lot!
When you wrote the code together, a lot of these don't even come up, because you discussed everything already. Sometimes, after pair programming, no code review is needed at all!
Often I find reviews are useful precisely because of the distance the reviewer has to the code; a fresh set of eyes is good to catch issues. (Frequently I'd go over my own changes before submitting for review for the same reason).
Pair/group programming also often ends up lacking in notes on what the thing was actually meant to be doing (because this was communicated verbally and never written down in reviews), which makes code archeology difficult. Or maybe that's just derived from the (external) code base I had to deal with that was done that way and doesn't generalize…
I think you're joking but there was a company in my area that famously stuck by their "mob programming" approach where 5+ developers would sit in a room, 1 would drive, and the rest would fill out an adjoining document.
Trying to apply heuristics like patterns to this problem is attacking the wrong thing. It boils down to a lack of belonging, trust, and motivation of reviewers to do the right thing. Absentee leadership that doesn't hold engineers accountable for their inaction or slow-walking of reviews is the enabler.
This kind of dysfunction happens more often on teams with incompetent people in control but sometimes temporarily happens on otherwise high functioning teams when multiple people are fighting over philosophical control of the project.
I'm surprised there aren't more studies in how code going into the kernel is reviewed. There are various subsystems that have different levels of review. And it is largely done "in public" for the final set of changes. Heck, many of the tools we take for granted now were sharpened on the kernel.
I’d like to see a system where you don’t review patches, you fix them, and then you send it back and have them review the improved patch. The game ends when someone accepts the patch as-is.
But maybe it would only work for peers on the same project.
The author and the reviewer universally want "good code" but how often is that defined for any project? Clear definitions of what is important to your team and your code base would alleviate a lot of these antipatterns.
behind a lot of these are power games, made particularly perverse because the participants tend to believe that they are above power games via dedication-to-the-truth or something
the more knotted and unaccepted the actual hierarchy vs the perceived hierarchy, the more of this kind of behavior. the more people feel respected the less they have to force behavior from others to make up for the feeling of lack of respect.
particularly toxic are places where a local tyrant believes that they deserve a lot more respect than they actually do (e.g. expert beginnerism)
Wow, that's a nostalgic domain name. I could have sworn I saw pieces by other authors on chiark.greenend.org.uk decades ago, perhaps even before the Internet became widely commercialized.
now I’m convinced that a certain senior engineer at a very well known tech company had this as an instruction manual. I’ve experienced each of these multiple times from the same person!
On my own though, I’m certainly guilty of the 100 round trips. I tend to mentally group nits and not spell out every one that I find
The Nitpicker's Gambit: The reviewer fixates on trivial style issues like whitespace, bracket placement, variable naming conventions etc. They make the developer conform perfectly to their preferred style, even if it's not specified in the team's coding guidelines.
The Silent Treatment: The reviewer provides no feedback at all for a long time after the review is requested, but does respond just often enough to keep the review "active". The author has to ping several times to get any response.
The Tunnel Vision: The reviewer only looks at the specific lines changed, without considering the broader context of the code. They suggest changes that are locally valid but deliberately inconsistent with the overall design or architecture.
The Ad Hominem: The reviewer makes snarky comments about the code in a way that implies the author is inexperienced and/or incompetent without directly saying so (and opening themselves to accusations of meanness).
The Philosophical Debate: The reviewer gets into a long back-and-forth debate in the review comments about a matter of opinion like whether inheritance or composition is better. The actual issue at hand gets lost in the abstract discussion.
> The Nitpicker's Gambit: The reviewer fixates on trivial style issues like whitespace, bracket placement, variable naming conventions etc. They make the developer conform perfectly to their preferred style, even if it's not specified in the team's coding guidelines.
Mostly solved by autoformatters. If your team refuses to use one, start collecting metrics on time wasted in those discussions, and present to management.
> The Silent Treatment: The reviewer provides no feedback at all for a long time after the review is requested, but does respond just often enough to keep the review "active". The author has to ping several times to get any response.
Set a policy of auto-approve in N days if no engagement.
For me there’s only one thing that matters - you’re all working together to put good code into the system.
If you work in an environment where people lord code review over you in any way then you’re on a hiding to nowhere. That applies equally if you expect reviewers to not, you know, review your code.
I see this discussion of “nitpicks” constantly. If it’s a tiny thing that brings the code in line with the existing codebase then just do it rather than fighting it. If there’s some ambiguity then find a rule as a team and apply it.
I have no idea how people work in environments where you’re not all working together.
The commentary about “too big, make smaller!” vs “missing context, make bigger!” rings so true. It’s obviously a complex change either way, just pick one way and stick to it for that change.
Not to be confused with the PR being "too big" because it is not atomic - that's a different topic, something to teach the more junior teammates.