How to Make Your Code Reviewer Fall in Love with You (mtlynch.io)
472 points by kiyanwang on Dec 7, 2020 | 169 comments



> The golden rule: value your reviewer’s time

This goes the other way around too. I've had PRs into which I put copious amounts of time to make them as complete and polished as I could for the reviewer, only to get them turned down for style reasons.

If style is that important, document the rules or automate them (3. Automate the easy stuff) so contributors have a chance of figuring this out for themselves, without having to go back and forth. If the style comes down to personal preference, you can apply rule 12 (Award all ties to your reviewer) in reverse too.

Nothing kills my motivation for contributing like a reviewer who just must find something in the code to complain about.


I honestly love that Go has a formatter included, and the style is specified. Similarly with black[0] for Python. I used to care terribly about style, until I realized I just care about consistency and readability. Otherwise I don't give a shit, and having a tool available to everyone makes such a difference. This goes for other tools too, like static checkers, linters, even CI. I've been on projects where anyone could run the whole CI pipeline (which included the aforementioned static checkers, unit tests, even a doxygen run to check for undocumented things) on their dev machine before submitting, and not waste anyone's time with things better done by a computer. Wish every project could be like that.

[0] - https://black.readthedocs.io/en/stable/


> I used to care terribly about style, until I realized I just care about consistency and readability. Otherwise I don't give a shit, and having a tool available to everyone makes such a difference.

I had the same experience. Consistency and automation are what's really important, as they eliminate any need for style discussions: as long as the formatting is consistent, you can trivially convert from the project style to personal styles and vice versa using things like Git attributes[0].

[0] https://git-scm.com/docs/gitattributes
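
For illustration, here's a minimal sketch of the clean/smudge filter setup the gitattributes docs describe. The two formatter commands are placeholders for whatever style converters you'd actually use:

    # .gitattributes: run the "style" filter on all JS files
    *.js filter=style

    # one-time local setup; "clean" converts to the project style when staging,
    # "smudge" converts to your personal style on checkout
    git config filter.style.clean to-project-style
    git config filter.style.smudge to-personal-style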


Yeah, same with me. That's why I love Prettier: I care more about just having a pattern that's applied automatically than I care about personal preferences.


For JS/TS/etc, Prettier: https://prettier.io/

I don't agree with all the formatting, but I also just don't care enough. It's sooooo nice running `yarn format` on a huge legacy codebase after adding prettier.
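
In case it's useful, a "format" script like that is typically just a one-liner in package.json (this assumes Prettier 2.x, where `--write .` formats the whole tree):

    {
      "scripts": {
        "format": "prettier --write ."
      }
    }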


Ugh... I've never seen HTML code uglier or harder to read than what comes out of Prettier on the codebases where we use it.


Consistency for consistency's sake is not a virtue.

The best solution to all the style discussion nonsense is an automated, mandated style. An even better solution is the team being mature enough to only care about real problems and not gate development with personal preferences. There's something to be said for feeling comfortable in your space. The company's codebase is not your space.


Consistency for readability is a virtue; did you miss that part?

And it's not about "gating" development with "personal preferences", what a strawman. If someone can't be bothered to run the formatting tool included in the repo on their code before submitting, then I can't be bothered to review their submission.

> The best solution to all the style discussion nonsense is an automated, mandated style.

Yes, exactly what I said. Again, did you miss that?


Consistency doesn't always serve readability.

In written English, you can convey the same basic facts with a hundred different sentences. A good author will choose the one that best conveys emphasis and subtle gradations of meaning to the attentive reader.

Code is also written for humans to read, so the best written code will use all the available facilities of the language to convey meaning to the reader.


Exactly. Like English or any written language: books are generally typeset against a bunch of general conventions. You don't choose whether you get one or two spaces after a semicolon. Your book is not great because you wrote it in leetcase; you focus on the meaning, not the details.


You really do get to choose whether you want 1 or 2 spaces, or whatever formatting you want. Certain publishing houses might have style guides that they follow strictly, and others less so, but all this just makes the formatting of a book a perfect metaphor for formatting code.


You don't. The consistency is within the same book (or maybe publisher). A book would look ridiculous if every paragraph was using a different number of spaces after punctuation.

But between two books, it's just personal preference whether you agree with the choice the author/publisher made.


I'm curious to see an example of how this plays out in code (not comments).


Using python, the choice between a list comprehension or a for loop is often a style choice.

I can't say this is common practice, but a list comprehension is great when the iteration is simple, and it quickly becomes hard for a human to parse if you're doing anything slightly complex. If you need any complex logic, you have to use a for loop. So -- when written by a developer with these perspectives -- a for loop signals complicated iteration logic and a list comprehension signals simple iteration logic. But you could express either with both constructs if you chose. So it's a style choice that can be used to convey meaning, as in the sketch below.
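
To make the signaling concrete, here's a sketch of that convention (the names are hypothetical, not from any particular codebase):

    # simple iteration: a comprehension signals "nothing tricky here"
    active_names = [user.name for user in users if user.active]

    # complex iteration: a for loop signals "read this carefully"
    active_names = []
    for user in users:
        if user.active and not user.is_banned():
            name = user.name.strip().lower()
            if name not in active_names:
                active_names.append(name)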


I failed a google interview for using a nested generator comprehension instead of for loops. It solved the problem, but the interviewer was utterly disgusted by it. He had some fair points about it being hard to inspect and modify and add exceptions.


I agree with automating it, but I also have seen automated styles going too far, more often than not.

Automated styling should be kept minimal, like checking names and indentation. Absolute rules don't work, because then you cannot deviate from them even when it is appropriate in the right situation.

The standard JavaScript checkers nowadays are ridiculously strict and force me to spend time writing less readable code. For example, yesterday my code was invalid because I did not write the properties of an object alphabetically. So I had to convert this:

    {
        host,
        port,

        key,

        username,
        password,
    }
Into this mess:

    {
        host,
        key,
        password,
        port,
        username,
    }
With TSLint I cannot even execute my code if the formatting is invalid. So if I write a console.log, or forget to add a space between the slashes and the text I commented out, the compilation fails.
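
For what it's worth, if the rule in question is TSLint's `object-literal-sort-keys` (an assumption on my part; your config may differ), it can be turned off per project in tslint.json:

    {
      "rules": {
        "object-literal-sort-keys": false
      }
    }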


It is consistent, though. Do you have a better generic system for grouping object keys than alphabetization? It's pretty handy to approach a codebase knowing that it has enforced this convention.


The point is that it's not generic: there is meaning conveyed by the grouping that can't be expressed as an easy script (which is why there's a human writing the code in the first place).


My point is that having rules on this kind of detail is more harmful than beneficial, so there should not be one in the first place.


Maybe that should not be an object then, but a URL or just several objects.


This is a very common pattern for a lot of third party packages.

And it is just an example to show that there are perfectly legitimate cases that are more readable when not alphabetically sorted.


> The company's codebase is not your space

That’s one possible way to look at “the company’s code.” Another one is that you’re in “that” code six (or more) hours a day and it’s usually written by someone else. So if everyone codes to _their_ comfort, there’s a likelihood that no one is comfortable in the code ever. I find code style standards that a team as a whole has compromised upon (aka consistency) a reasonable approach to “average comfort.”


The fact that gofmt removes whitespace between arithmetic operators drives me insane. Fortunately, rustfmt and black don't do that, and I've been largely able to avoid writing Go for the last year now.


How I managed this on a team:

If there is a style disagreement, documented style rules prevail. If there is a documented style rule, follow it. If there is not a documented style rule, you cannot block this PR with the style objection.

This avoids the really annoying issue of "I don't like how this is written [for style reasons]" in PR feedback. I'm not a mind reader; I don't know whether you prefer for..in over Array.forEach or whatever, and my PR shouldn't be blocked because of that. If you want a new rule, go propose one and we can discuss & vote on it later, but it doesn't apply to this PR.

This also avoids the issue of "style rules are determined by whoever has the greatest capacity for argument & is willing to block PRs over it." It goes to a discussion & vote, and everyone can agree once and then stop talking about it. :)


I have a rule that if stylistic decisions matter enough to reject a PR over, they also have to be enforced by a linter or code formatter. If they are that big a deal, we should enforce them via automation, not review. And if they're not that big a deal, do not put them in a code review.
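
As a sketch of what "enforce via automation" can look like: most formatters have a check-only mode that fails CI without modifying files. The commands below assume Prettier and Black; substitute whatever your project uses:

    # each exits non-zero if any file would be reformatted
    prettier --check .
    black --check .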


It baffles me that this isn't the consensus.

I've waged wars over documenting styling rules (some too complicated to be covered by a formatter - which by itself is a red flag) because I know that people often forget to enforce rules that they themselves have come up with.


> Nothing kills my motivation for contributing like a reviewer who just must find something in the code to complain about.

Or the reviewer who always asks for more tests. (Once I was asked to write a test for something that was unnecessary and almost impossible to test. Think timing issues. My options were to push back or spend a significant amount of time writing a test that was not needed. As a junior who already pushed back more than a typical junior, I made the best choice I thought I could at the time.)


What really helps in the case of testing is leading by example. If the project itself provides a good test framework and examples of properly implemented tests, it is easy for contributors to add tests as well. But asking people to figure this stuff out themselves is often a really big hurdle. Parameterized tests help too: I find no shame in asking people to add a case to an existing parameterized test suite when submitting an issue.


Yeah... I have a lot of that discussion with my team, too.

It seems people (reviewers) need to find something, or they think they are not doing a good job.

Another irritating practice is treating opinions about style as commands -- basically commanding the author to change the style of the code to conform to the reviewer.

It should be understood that every problem has many potentially correct solutions. The job of the reviewer isn't to ensure the best correct solution, or the solution they like; it is to verify that the one selected by the author is in the set of correct solutions. If that makes sense...


Exactly. Nothing I love more than working hard to make sure my PR meets the personal style of the colleague whom I expect will review it, only to have it reviewed by another colleague with a mutually exclusive style, meaning I have to do it all over again. And then that reviewer goes on vacation and I get grilled by the expected reviewer on why I did it "that way". Sigh.


This list is awesome; I love everything except:

12. Award all ties to your reviewer

A true "tie" is really rare IMO. If it really is a tie, we should all should agree that person putting work into it should be able to decide their own code. Not only does it cut down on discussion, but it cuts down on work and communication needed to get the PR past the finish line.

Allowing ties to go to the reviewer can lead to all kinds of discussions about meaningless stuff, like:

> We should use forEach(...) instead of for loops, because that is how most of the other code is written.

In that example (from a real code review), the code is practically the same, just a different style. So now the coder has to go back, re-write, re-test, signal for a re-review, re-discuss, etc... all for something that is a tie and doesn't actually matter either way.

Alternatively, letting non-issues through without a duel every time saves a bunch of time and energy on everyone's part.


This is one spot where I strongly prefer the standard from the Google guidelines[1]. "Reviewers should not require the author to polish every tiny piece of a CL before granting approval. Rather, the reviewer should balance out the need to make forward progress compared to the importance of the changes they are suggesting." Getting caught up in all the little details rarely serves this purpose; more likely it's being done in the service of something anti-productive like a personality conflict.

[1]: https://google.github.io/eng-practices/review/reviewer/stand...


Moreover, Google’s style guides contain a very specific clause at the beginning. Here’s the example for Java.

> This document serves as the complete definition of Google's coding standards for source code in the Java™ Programming Language. A Java source file is described as being in Google Style if and only if it adheres to the rules herein.

https://google.github.io/styleguide/javaguide.html

Google requires all code to go through code review, so to avoid time being wasted by style disputes or reviews being sent back for style-only changes, style disputes are essentially banned. Anything that can be enforced by an auto-formatter or linter is so enforced. For anything else reviewers are expected to be able to cite the style guide to back up their requested change.

This practice seems to have spread into the community due to the popularity of Gofmt, and now other languages like Rust have a community-standardized formatter (rustfmt) and linter (Clippy). Many thousands of hours have probably been saved by this point by treating formatting as akin to language syntax.
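
Concretely, "treating formatting akin to language syntax" usually amounts to a CI gate along these lines (standard Cargo subcommands; the exact flags are a matter of taste):

    cargo fmt -- --check        # fail if anything differs from rustfmt's output
    cargo clippy -- -D warnings # fail the build on any lint warning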


That's the ideal, but there's a kind of magical "nits of the gaps" effect where some folks will find gaps in what can be automatically handled, and zero in on them.

I once had a colleague who loved to nitpick variable names. Should that flag be named "continue" or "shouldContinue"? Or maybe the i variable should be renamed to index. Or maybe "row" and "column" should be renamed to "i" and "j". Sometimes "buffer" should be renamed to "data" and sometimes "data" should be renamed to "buffer".

It does not make sense to say that variable names don't matter just because there's no way to automate a style guide on the semantic content of variable names. Sometimes variable names matter quite a lot.

So interpret that particular part of their code review guide as being less about the value of automation, and more about how to guard against some of these social headaches. A culture of breaking ties toward the requester discourages nitpicking. A culture of breaking ties toward the reviewer encourages it. Even in the presence of linters.


The two most difficult things in programming.. caching and naming


The two most difficult things in programming: caching, naming and off-by-one errors.


The three most difficult things are naming, cache invalidation, off-by-one errors.

And latency.


Latency is not difficult.

It just takes knowledge and focus: understanding the total execution path and removing unnecessary elements from it.

I have worked in algorithmic trading, where I did research and designed a framework to respond to stock exchange messages within 5 microseconds, consistently.

5 microseconds is difficult and requires some special techniques, but 100 microseconds is quite easy and requires just understanding what the execution path is and removing all unnecessary operations from it.

If you look at contemporary developers and web frameworks, it doesn't seem anybody puts particular care into understanding those things; they only twiddle a couple of knobs and see how that transfers to performance.


Latency is (where I am used to it) a hard physical floor imposed by the speed of light. When you have a system that literally spans a planet, there's only so much clever algorithms (and hardware) can do to shrink the time between "things happen on one end" and "things happen on the other".


Yeah, it is a hard problem when you are hitting physics.

But then it really isn't a problem -- it is a limitation. There is not much you can do about it.

There are of course types of applications where latency is a really tough topic. For one, low-latency trading is a domain where companies are in an arms race against other low-latency market players, and there are always gains to be had from improving your product's latency.

But in a typical situation, like websites and corporate services, the latencies required are so generous that it is quite easy to meet them -- just don't do anything stupid.


My last boss had a problem with this. The guy literally told me to just accept any suggested change to ease the review process, because it wasn't worth debating any comments. We had very different opinions.

He always claimed he was "not nit-picky", but then he let reviews turn into exactly what you said, while loops versus for loops and what-not, and allowed the rest of the team to comment on low-value things like style rather than answering the important questions.

Why push back on using a built-in function that does the same thing as some little helper your team wrote 8 years ago, when your helper function doesn't even handle special cases? There's literally no point other than someone having NIH syndrome. Why are the source files all 5,000 lines long and not broken into a logical project structure? Why is the back-end data-gathering and transforming code intermingled with the front-end templates (Django), so far as to be written as template filters and template functions?

It did not take me long to realize the development culture there was not aligned with my values. The lead developers came up in a different world and refused to see the positives and benefits of any modern development workflow involving a separate back-end API and front-end application.


I was going to say something similar.

When I'm the reviewer, I try to decide for myself whether something is truly important or just a style choice. If it's the latter, I will comment on it, but then add "Your call." to make it clear that I expect ties to go to the author.


This should be included in a "How to Make Your Code Reviewee Fall in Love with You" companion article, if one doesn't exist yet.


I wrote a list of code review suggestions a few years ago that focus on things the reviewer should do:

https://mtlynch.io/human-code-reviews-1/


I wrote a "what makes for a good change" doc a while ago, that touches on this.

https://github.com/vatine/sressays/blob/master/change-reques...


My preferred approach is: the ties are awarded to the individual whose name will show up in `git blame` after this change is merged.


> the person putting the work in gets to decide about their own code.

I take the opposite approach. If my reviewer felt it was worth their time to put in the work to provide feedback, then I accept their opinion. 99.99% of PR feedback is not worth escalating into a ping-pong or a face-to-face meeting.


I mean, consistent code style is just good practice, so maybe that's not a great example.


I don't think I've ever seen style so divergent that it actually hampered my ability to change the code. Add a tool to automatically apply some stylistic choices if you want, but as soon as it becomes extra work to adhere to a specific code style, it quickly deteriorates into causing more work than it saves.


Yeah, where I work we have tools that deal with 90% of style errors, and reviewers call out the other 10%. After a few weeks it's all second nature, so you don't have to do much to adhere to the guidelines; you just do it while writing the code in the first place.

To me ignoring style is kind of like slipping a PR/diff/cl through without unit tests. Sure, that individual CL won't be the downfall of the codebase, but repeated offenses will destroy the codebase without an easy way to fix it all. Of course, without unit tests you end up with an unmaintainable fragile codebase, and without style you end up with an ugly, hard to read codebase.


It's exactly the same: if I think a style change is going to lead to "destroying the codebase" through repeated offences (rare -- and it should've been a transparent autofix if so), I'll call it out, and the same holds for a unit test. If you have a good reason to skip a unit test (e.g. isolated, stateless code where the remaining logic is almost trivially correct but hard to test), then that's fine (if it's noted as such), but otherwise I'll call it out.

It's just that skipping unit tests without good reason is far more likely to "destroy the codebase" (or at least to cause more work down the road than it saves now, without there being a good reason to skip that work now) than not adhering to some style guide will, IME.


Agree; rule #1 of reviewing in my opinion is "trust your colleague whose code you are reviewing".


This is an amazing list. I just want to point out that there's an implicit requirement in this list, which is that this is optimized for reviewers who are willing to give you quick feedback.

At a previous job, I worked on a team where we iterated toward the list that OP provides. It worked really well, and it worked because we had a culture of providing rapid reviews. The senior people on our team thought reviews were important enough to drop their own work for. A lot about the team was suboptimal, but I was spoiled by getting such rapid feedback on my work.

We had some partner teams that really challenged this. One was halfway around the world, so you were guaranteed to only get one review round per day. If you wanted to make progress while working with them, you needed to either send them huge changes or send them a bunch of changes at once. But structural feedback in an early change would cause you to need to rewrite the later changes. We broke this cycle by having them eventually give local reviewers stronger conditional approval in the integration path, which fixed our particular issue but showed that there were cracks in how we optimized our code review flow.

The real trouble came from a local team, which was an overloaded platform team that we were built on. No matter what you did, they would only review your changes about once per week. Their feedback was super insightful, too, and they often caught very subtle misuses of their platform that would cause subtle user-visible bugs. You couldn't just skip their feedback. To try to manage their flow, they wanted teams to review pull requests locally before sending changes to them. So people on my team would write a change for this platform, which we'd review on our team. Our team would apply our rules and break the change up. We'd send the first one out. Then it would take a full month to get the entire thing in, to account for the back-and-forth. If you needed to get something substantial in, it could take months. My manager fought for months for them to acknowledge that this was a problem and fix it, and they did eventually dig out of it by setting SLAs and increasing capacity.

I don't want to take away from this list, but I want to point out that some of it is a function of your reviewer's incentives. If your reviewers have different incentives, then you need to find your own list that works within those bounds.


I'm curious if a separate "design review" phase could have helped with your challenges with the team halfway around the world?

I worked at one company where, due to the nature of the business, you really couldn't reasonably expect other teams to do code review in a timely manner. But most inter-team code review challenges had to do with high-level design, interface protocols, stuff like that. And they had to do with tasks that we could usually see coming down the pipe weeks ahead of time. So we developed a culture of just finding a time that worked for everyone to call a meeting with one or two stakeholders from each affected team, in order to hash out those design decisions before starting on the actual implementation work. It would stay pretty high-level - if any concrete work product came out of it, it would be an ADR or two.

We did code review, too, of course. But agreeing on a plan ahead of time greatly reduced the risk of code review turning up problems that led to costly rework.


This is a good point. We did have a heavy up-front design phase for any projects between these teams, and it mitigated lots of problems before any code was written. It helped shift the conversations to a back-and-forth about specific implementation details instead of "you need to completely throw this out and do something else".


Those are fascinating, occasionally horrifying cross-team examples you give. I suppose I'm spoiled to work at a small company where reviews are always local.


Great post. This is the part I liked best:

>When your reviewer makes a suggestion, and you each have roughly equal evidence to support your position, defer to your reviewer. Between the two of you, they have a better perspective on what it’s like to read this code fresh.


Also:

Add a “How to Test” heading with steps to your PR templates.

This is not a replacement for automated tests or intended to shortcut a thorough review — it just saves a lot of time to give reviewers a list of setup and testing steps for anything they need to verify manually.


Before/after screenshots help too, if there's a visual change.


Nice post. I agree with almost all of the author's points. However, I would add that "reviewing one's own code" is more difficult than it may appear. It's like reviewing your own prose: one develops blind spots. That being said, combing through your own CR for trivial mistakes is just due diligence.

Also, 100% on the change list description. I mentioned the same thing in a similar post[1] of mine.

[1] https://blog.mattchung.me/2020/11/09/3-tips-on-getting-eye-b...


While true, sitting down and reading your code in the review tool as opposed to your editor or git client does help; shifting your mindset from author to reviewer also helps.

Back when I was still in a project that did code reviews, it really helped for me.

Similarly, a tip: use `git add -p` to view and review your changes one hunk at a time. You can also split changes into separate commits using that technique.
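
For anyone who hasn't tried it, a session looks roughly like this (git shows each hunk in turn and waits for a keystroke):

    $ git add -p
    ...diff of one hunk...
    Stage this hunk [y,n,q,a,d,s,e,?]?
    # y = stage it, n = skip it, s = split it into smaller hunks,
    # e = edit the hunk by hand, ? = show help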


Hard agree.

I spot things in `git add -p` that I had missed when writing the code in my IDE.

Then I spot more things in the github "Create PR" page that I missed in the previous step.

Then sometimes I spot more things in the created PR that I missed in the steps above.

Always skim back over your PRs, especially in the final context that they'll be reviewed in. Something about it puts you into "reviewer" mode and you'll get a better PR out of it.


What you've described indicates that while you may not agree, reviewing your own code helps identify issues.


I think you misread what they wrote:

> Hard agree.


But then they described agreement?

Edit: Damn, I misunderstood both the GP and you. My apologies.


This is what I do. Even that small change in perspective is often enough for me to catch quite a few stylistic or functional problems to which I was completely oblivious when looking at the code in the IDE. Even in the IDE's own diff viewer.


Naturally; if self-review were as effective as peer review, we would not need to do the latter!

The implication here seems to be that one is only likely to be able to see 'trivial' mistakes in one's own work; that, unfortunately, tends to be a self-fulfilling prophecy, as, if you are focusing on issues like syntax, you are probably not giving much attention to more analytic issues, such as what implicit assumptions need to hold for the code to work. One thing that is not uncommon in my experience is for the early part of your work to be predicated on assumptions about what the as-yet unwritten parts will need or do, and for some of these assumptions to turn out to be wrong. There is some hope that, on revisiting the early work with an analytic eye after completing the remainder, you will see this to be so.

There are a couple of things that helped me improve how well I review my own work, both mentioned in the article: take a break between writing and reviewing, and don't make excuses (to yourself or publicly) or otherwise minimize problems found by the reviewer (the same goes for anything you found in testing.)


I will often read through my changes in the browser UI in a similar manner as my reviewers. More often than not, that change of perspective will highlight something silly I missed.

I realize now that I could help my reviewers by first submitting a PR to myself into the branch to be merged.


Fisheye/Crucible lets you view the code review before hitting "start review" to publish it to everyone else. I also give my reviews a look over in that context before publishing, for the same reason: it makes anything odd stick out more because it's not the UI I've been looking at.


If you're on Github (or Github Enterprise) look for the "Create Draft PR" feature (it's in the button dropdown for the final "Create PR" step)


I've developed the habit of trying to look at my code (or UI) as if I had never seen it before. You can think of it as an empathy exercise.

One way is imagining you want to explain it to a coworker or new hire.

I find it enormously useful, and it has become second nature for me.


... just like you explain it to a... rubber duck? https://en.wikipedia.org/wiki/Rubber_duck_debugging


A bit like that, yes!

When I'm stuck I now sometimes write down an imaginary question to someone, and usually the act of explaining my problem makes the solution clearer.


How do you guys handle managing multiple merge/pull requests that depend on the previous one?

For example:

Request 1: install dependencies

Request 2: add scaffolding to application like adding a menu item, page layout, etc

Request 3: add feature A

Request 4: add feature B

My questions:

1. Do you add all 4 Request reviews at once, or do you wait until the previous one has been approved before adding the next one?

2. How do you communicate Request 2 is dependent on Request 1?

3. How do you enable the reviewer to review Request 4 without reviewing duplicate code (since Request 4 has merged Request 1, 2, and 3)?


Four CRs stacked on each other in Gerrit. It's the one killer feature of Gerrit that you can't live without in other systems once you realize how powerful it is. Phabricator's Differential also does this, but it's comparatively extremely janky.

You literally just push four stacked commits to refs/for/master, and they end up as 4 separate Change Requests (equivalent to what others call Pull Requests, each with a separate review process), each one explicitly marked as being dependent on another, and showing a diff only against that particular base CR, as if it were merged onto the target branch already.

Practically, this just means you push your current HEAD to refs/for/master; Gerrit will take care of automatically creating all CRs for you and print out their URLs. Done. Ready for review.

Then, to update any of them, just amend them locally (address code reviews, rebase the stack on top of the current master/main, ...), make sure they're still stacked on each other in Git (as four commits), and push them all again to the same endpoint. Gerrit will automatically update all affected CRs. It will deduce the intended order and state of the CRs from the local Git history pushed, mapping all commits to existing CRs based on the Change-Id present in each commit (automatically generated by Gerrit's local git hook).

None of the hacks that other siblings comments mention are required. It just works.
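
For anyone who hasn't used Gerrit, the whole flow for the four-request example upthread is roughly this (assuming the remote is named origin and the target branch is master):

    # with four stacked commits on your local branch:
    git push origin HEAD:refs/for/master   # creates/updates all four CRs

    # to revise one of them, amend the relevant commit, e.g. via
    git rebase -i origin/master
    git push origin HEAD:refs/for/master   # Gerrit matches commits by Change-Id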


I imagine Pijul could do this quite naturally, as it describes everything as causal chains/DAGs (as I understand it). E.g. you could publish this[1] patch alongside all of its dependencies, and the UI could gate it behind its dependencies.

[1]: https://nest.pijul.com/pijul/pijul/changes/VO5OQW4W2656DIYYR...


In the entire history of Pijul, you probably picked the best example of how to get your reviewer to stop being in love with you. That change fixed several hundred errors caused by the removal of a dependency.


That's hilarious, I have been reading about Pijul for a few days so I don't have any real experience with it yet.


If you want a better example, see how I manage two different versions of my SSH library for Rust, Thrussh: https://nest.pijul.com/pijul/thrussh

The problem this solves is that the Rust ecosystem is in the middle of transitioning between two versions of its async base library, Tokio: most libraries have transitioned, but the most-used HTTP library (Hyper) hasn't. This forces any user who wants both HTTP and another network protocol (SSH, for instance) to stay on the older version.

However, we can't all keep waiting on Hyper. Pijul's solution is to push the same bugfixes to two different channels, one of which includes an extra patch for transitioning to Tokio-0.3 (the newer version). Since the identity of patches is preserved, no rebase is needed, and conflicts solved once are solved forever (no rerere).


Seconded. This notion of "stacked diffs" is extremely powerful and I miss review systems that lack it. Currently using it for years now in Phabricator by following https://medium.com/@kurtisnusbaum/stacked-diffs-keeping-phab...


This is a hard problem and I usually try to avoid it, but when I have to do it I usually have them branch off each other so that I can merge the PRs in reverse order once they're approved (4 -> 3 -> 2 -> 1 -> main). The diffs are all based off the previous branch in the "stack" so they aren't duplicated.

It's really easy to mess up and merge the wrong PR first, and it's a pain in the ass to rebase all the way back up if you make a change in Request 1. I'd love to know how other people are handling this.


While on branch two, I'm using `git rebase --onto one HEAD~<number of new commits in branch two>`, then I repeat for branches three and four.
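
Spelled out for the four-branch example above (the commit counts are whatever each branch added on top of its parent; shown as 1 here for simplicity):

    git checkout two
    git rebase --onto one HEAD~1    # replay two's own commits onto the updated one
    git checkout three
    git rebase --onto two HEAD~1
    git checkout four
    git rebase --onto three HEAD~1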


Use branches. Branches are cheap in Git, and you can do PRs between branches so that the reviewers are only seeing the new changes in that branch.

In your example, I'd create a branch for R1, and publish a PR for R1 -> mainline when it's ready. Then I create a branch for R2 from the R1 branch, using R1 as the upstream.

When R2 is ready, I publish a PR for R2 -> R1, and create a branch for R3 to start working on it.

When there is feedback for R1, I switch to that branch and make those changes. I'll squash those changes back into the original commit(s) of R1, and update the PR.

Then you rebase R2 with the changes you made from R1, rebase R3 with the changes from R2, etc.

As long as the changes between each branch are not on the same lines of code, the rebases are painless and quick.

I name the branches with a key to support ordering, something like:

    1_add_dependencies
    2_add_scaffolding
    3_feature_A
    4_feature_B

This way they show up sorted when you list branches, and it's obvious what their dependency ordering is.
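
A sketch of that flow in commands (branch names follow the scheme above; the force-push assumes you're the only one working on these branches):

    git checkout -b 1_add_dependencies main
    # ...commit R1, open PR: 1_add_dependencies -> main
    git checkout -b 2_add_scaffolding 1_add_dependencies
    # ...commit R2, open PR: 2_add_scaffolding -> 1_add_dependencies

    # after squashing review fixes into R1's commits:
    git checkout 2_add_scaffolding
    git rebase 1_add_dependencies
    git push --force-with-lease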


To add to the sibling comments, you should avoid this scenario in the first place by hiding things behind feature flags that only apply in dev.

Request 1 can go in fine on its own and be reviewed separately.

Request 2 can be added behind a feature flag.

Requests 3 and 4 would remove the respective flags for the scaffolding.


What about structural stuff that just can't be hidden behind a feature flag?

What do people do in those scenarios?


Ideally you want tooling that allows you to toggle between the diff from 2 to 3 and the diff from HEAD to 3, and to specify that 2 is the parent of 3.

Some folks use "stacked diffs" or "stacked commits" to do this. Others use feature branches, or a patch series.


Create request-1 targeting the default branch, request-2 targeting request-1, etc. Once request-1 is merged, I'd change the target of request-2 to the default branch and merge the default branch back in. This is under the assumption that you can, e.g., feature flag the feature away, so that the scaffolded app doesn't show in production yet. An alternative would be to have a "temporary" feature branch that you target until you feel it's ready to be merged into the default branch.

By (temporarily) targeting the previous branch, problem (3) is solved in most UIs.

The communication problem (2) is simply handled by: "This adds X and is scoped to X; look at request-Y to see how feature Y builds on it."

> Do you add all 4 Request reviews at once, or do you wait until the previous one has been approved before adding the next one?

This is the tricky bit; in your example I wouldn't go so far as to create all the MRs. I'd create one MR ahead at maximum, because if you need to change something based on the review that influences a later feature, you might end up doing work twice.

So maybe a timeline would look like this:

- (Skipping a dependency MR, because I feel one should introduce a dependency when it is used, and not beforehand)

- request-2: Scaffold application / feature, create an MR targeting default branch (request-2)

- request-3: Start working on feature A; once feature A is 80% done, create an MR targeting request-2

- Ask for review on request-2, sharing a link to request-3

- Continue working on request-3 while you wait on request-2 review

- Incorporate request-2 review feedback, get it merged

- Rebase request-3 on merged request-2 (or merge default branch back), resolve conflicts and put it into review

- Start working on request-4 based on the current request-2 branch.

- Rinse and repeat.

There are obvious intricacies, e.g. legal needing to review dependencies, or the review turnaround of your reviewers. I generally try to work with the same set of reviewers over a sensible time frame, rather than getting new ones for every MR relating to the same feature set, so check beforehand whether they are available for the foreseeable future.


Good post. For a longer, more prose-like read that raises the same general points and expands upon them, see Google's 'How to do a code review' [1].

[1] - https://google.github.io/eng-practices/review/reviewer/


Author here. Happy to answer any questions or take any feedback about this post.


Hey, meta comment since I found this post first through your meta post / tweets about trying to get this to the front page of HN.

It's great content, and exactly the kind of things I want to read here, but wondered if you had thoughts about how to solve the Goodhart's law[0] effect. e.g. in an ideal world for me (and I guess for you too), the best posts would more naturally rise to the top of places like HN that ostensibly act as quality filters.

I really like your posting advice in general, as most of it is about 'how to improve quality', but some of it (choosing what time to post, etc.) falls into 'how to improve my chances at this gambling-like system'.

I have mixed feelings about this. Having a higher barrier to entry does lead to higher quality posts (and in one possible future, only people who have read the 'how to get to front page of HN' guide can get to the front page of HN). On the other hand, I've seen the problem of 'should we invest this dollar into producing a better quality Widget or into telling people more effectively that our Widgets are the best?' kill entire organizations when the see-saw tilts too much to the latter.

Maybe I'm relating some things too broadly and it's actually nothing more than 'hey I post high quality content and do things to help it succeed', but wondering if you've thought about this tradeoff at all / if you're worried about getting deeper into the metrics rabbit hole that you've discussed in some of your other posts.

[0] https://en.wikipedia.org/wiki/Goodhart%27s_law


Hey Gareth, good question!

It's unusual for me to try to "time" my submissions to Hacker News. When I published this post, I thought it would be interesting to live tweet my publication day process,[0] so I published sooner than I originally planned because it was a slow news day on HN. But it ended up not gaining traction on HN that day anyway, so even that optimization seemed to have no effect.

But overall, I think the HN filter works pretty well. I can't think of anything I do to get an edge aside from investing a lot more time into editing and illustrations than I think most bloggers do, but that would be the filter working as intended.

I do worry about taking scores and readership metrics too personally. When I published my first code review article back in 2017,[1] it flopped on Hacker News and on reddit, the two places I was hoping it would find readers. I felt awful because I spent more time on it than I'd ever spent on a blog post before.

I took a break from watching the metrics and went for a run, and that cleared my head. I decided that I felt good about the post regardless of what response it got, and I need to have that attitude on future posts. When I got back from my run, it had taken off on reddit, which made me excited but also kind of interfered with the lesson I was hoping to learn. But I do try to view my articles mainly in terms of how good I feel about them and maybe 20% what kind of response they get.

[0] https://twitter.com/deliberatecoder/status/13341560121321185...

[1] https://mtlynch.io/human-code-reviews-1/


Thanks for the response! I made that 2017 post compulsory reading for a bunch of code reviewers I helped train if that counts for anything :)


Wanted to add one thing I noticed after I've been on the reviewing side for a while. Add a section in the description of the MR describing how the changes were tested. If this is detailed enough and covers all the edge cases I worry about, sometimes I don't even need to look at the code to be comfortable with approving the change. Conversely, it's frustrating when I get a MR with a blank or generic description without any notes on purpose, approach, how it was tested, etc.


Oh, that's interesting. I'd really only want to do that for things that are hard to test automatically. I have 100x more confidence in a test where I can see the source for it and watch it run in CI. But I agree that for exceptional cases where it's hard to write test automation, it's a good idea to explain how the author verified the changes.


This is an excellent article, making several points that I would not have thought of if I were to write on the same topic (point 11, for instance.)

I would, however, like to make a point about commenting, as there are at least three related cases where renaming identifiers and restructuring logic is not sufficient. The first is where one is doing something non-obvious on account of an unintuitive (and possibly undocumented) fact about either the problem domain or some remote part of the system that is infeasible to change; the second is some non-obvious corner case (a subtle issue of thread- or exception-safety, for example); and the third is where the algorithm itself is non-obvious (Jon Bentley wrote of a colleague who found that one of his algorithms had been 'fixed' and 'improved' to the point where it was no longer efficient; a suitable comment might have avoided that (and there's also the famous /* You are not expected to understand this */, for which no mere restructuring or renaming seems to offer an effective alternative.))

I am sure you are aware of these exceptions, but not everyone on the comment-deprecating bandwagon seems to be.

https://en.wikipedia.org/wiki/Lions%27_Commentary_on_UNIX_6t...


Thanks for reading!

I absolutely agree with all of the exceptions you listed. I'm totally in support of comments when they're used judiciously.

To add a bit of context, a tendency I've noticed when I encourage people to answer questions with the code is that they think that means "add a comment in case anyone else has this question," when really what I want them to think is, "rewrite the logic so nobody else has this question. If that's not possible, add a comment." In other words, a comment should be the last resort for explaining code rather than the first line of defense.


First I just wanted to say I loved the doggo and the general tone of the post.

Also wanted to say that there are a lot of parallels between code reviews and "work review" in Investment Banking. Tooling aside, the underlying concept behind your tips is likely true for a variety of sectors, so I do wonder if one could dig a level deeper and find the essential truth in how to properly manage write-review workflows. Just some food for thought in case you want to jump into that rabbit hole.


Thanks for reading!

That's interesting about "work review" in Investment Banking. I've never heard that term and Google isn't helping me out much. Do you have a recommendation on where I could learn more?

I have noticed that things like this aren't too far removed from other fields. When I wrote my article about reviewing code from the reviewer's side[0], I got feedback that most of the techniques apply to giving feedback in other domains. And while I was writing my guide to hiring writers,[1] I realized that the process is 80% identical to my process for hiring developers and graphic designers.

The thing that's tough about going more abstract is that it becomes hard to convince any particular group that the content is worth reading. Focusing on code review appeals to HN because a large percentage of the audience is developers, but if it was a more general guide to improving review workflows, I'd have a harder time convincing HN readers that it was relevant to them.

[0] https://mtlynch.io/human-code-reviews-1/

[1] https://mtlynch.io/hiring-content-writers/


Love the concept of reviewing your own code first. I've been doing that for the longest time now and usually find something I missed when reviewing my own code. I think I had taken it for granted and assumed everyone does this.


Thanks for reading!

Yeah, I had the same assumption for years. And then I started to realize that people would submit PRs to me with a ton of debug logging code left in, or git merge markers they accidentally never fixed. I thought, "Wait, are some authors not even reading this before they send it out?"


"fall in love with you" is catchy but not the right interpersonal dynamic. Something like "love reviewing your code" is better.


Sure, but that wouldn't get on the front page, right?


The one weird trick that will make your coworkers fall in love with your Pull Requests.


This is good guidance to junior devs. I see these issues frequently from the reviewer side.

The title is super creepy though, so is the claim that "your reviewer will literally fall in love with you". I review code from my team every day, including code written by young engineers, male and female. I can't point them to this.


Ugh. Stopped reading after this bit:

  They’re going to fall in love with you. Deal with it.
  Nobody ever complained on their deathbed that too many
  people fell in love with them.
That's just so outright dumb + wrong. Assumed the rest of the article would be of similar low quality, so left.


That was in jest. I assumed readers would understand that nobody's going to fall in love over proper code review technique.


Can you write a counter article?

Unfortunately, I'm incredibly handsome, charismatic, and just generally awesome, and as a result this keeps happening to me.

There's always lots of drama when I just want to remain co-workers, and I've had to move on often.

Help appreciated.


This is great! Many of these things are slowly learned over time, but explicitly listing them out will be helpful when refreshing or teaching others.

The three that have been most helpful on our team are: #1 (review your own code first), #2 (write a changelist description), and #5/#6 (narrowly scope changes/separate functional/non-functional changes).


Well, from as much as I have seen of code reviews in my life (ignoring a few people that have OCD), I have noticed that the trend is to:

- add whatever comment comes to your mind that you know won't be argued about (like using spaces instead of tabs), to make your persona look better in management statistics

- not deal with actually understanding the code, but rather argue about syntax (don't malloc, use vector.reserve(), ...)

- not press accept until a few lines have been exchanged... again, boosting the statistics instead of contributing something meaningful

- ... whatever boosts the reviewer's visibility, even going ballistic on irrelevant parts of the code (while no one reviews the 3rd-party dependencies where most of the evil is)

My proposal for code quality is different. Put together a team that communicates (a rare skill nowadays) and LET THEM communicate; offer them free beer after work, whatever interests them. Don't use enforced meetings like scrum etc.; they just show that you have failed at putting together a functional team. Make developers friends instead of rivals. Then avoid JIRA overhead, avoid management visibility... just listen to what people are saying (and preferably become their friend too), suppress any outside-world influences on your team, and fire anyone trying to ladder over anyone else. Then you won't need code reviews. And you will always have a long list of improvements at any moment. Just send an email and ask.

Be capable of coding. Be really good at coding. Everyone will follow you after "shit hit the fan" and the team had to work over the weekend and you stayed with them and helped them fix the bugs and analyze the code. DON'T be a manager; be a helpful part of the team (if this were the industry standard, we could just fire 99.999% of managers).


This is possibly the worst advice I've ever encountered about how to run a functional software team. It sounds like you have worked with a succession of terrible people if this is your impression of code review or team communication, so I can certainly understand that would have an effect on your ideas.

I've encountered this sort of team before – the high-functioning jealously-protected team that's isolated from outside interaction and visibility. They usually produce absolutely wonderful high-quality software that is useless and meets none of the requirements that other teams have of them.

In fact, I'd say that well-functioning teams will usually themselves develop a system of code reviews or similar, along with using task-tracking software, regularly seeking influences from the outside stakeholders, and making sure work is visible to other teams. A good manager supports those interactions and helps to make sure everyone has the resources they need to work effectively.


Worst advice? I am just commenting on what I am seeing. I've worked as a contractor in a bunch of teams at large companies, quite a few blue chips.

Yep, I was quite amazed when I came out of pocket-size companies; it is quite amazing how messed up everything becomes when you have a few layers of management, methodologies, enforced ways of working, and measurements of "effectiveness" that affect your promotions; when scrum meetings suddenly become just a simple way of filling in burn-down charts for the manager; when the number of solved JIRA tickets becomes more important than what was solved ("yes, you have solved a bug that could prevent a nuclear core meltdown, BUT your coworker X solved 20 bugs in that time"); when documents are worth more than code; when people are forced to work together even if they mix like oil and water, "as they have the right skills"; etc.

Can anyone still remember the times when developers were paid by the line of code? It all comes from the same type of thinking.

And this all came about when there was a huge push to streamline development around 15 years ago. And it is certainly not getting any better: whenever you have a person who cares about code quality, you can be 100% sure there is someone trying to make their promotion case easier.

Sure, maybe I have had the bad luck of never finding "the right company", as beyond some scale (let's say a few hundred developers) they all seem very similar to me. Same BS wherever I look.

And I agree with you, when the company is small it works the way you have described.


This is a big part of why I've started pushing for aggressive autoformatting and linting. The automated tools and checks are sometimes annoying, but automating away checks for the low-value stuff like this helps encourage the human reviewers to review for actual bugs and design problems rather than just taking potshots at the easy stuff.


- Anyone focusing on style issues in a code review is doing it wrong and really just wasting each other's time.

- Self-documenting code isn't always possible, because the code cannot possibly document what isn't there. In other words, sometimes the important details are in what the code isn't doing (such as not setting some state, or not making that function call under some circumstances, etc.).


A couple from my pet peeve list -

1) ATOMIC COMMITS

Nothing worse than a single commit that addresses 10 separate comments. I don't want to have to go back and cross-reference my comments with your wall of changes.

2) DO NOT SELF RESOLVE A COMMENT

Wait for the reviewer to check out your solution. I want to learn from your fix, as well as make sure it's what I expected.


Agree on (2), but I respectfully disagree with (1).

It adds a lot of friction for the author to have to make 10 separate commits to address 10 comments. I also don't want to review 10 separate commits.

The majority of my comments are on separate chunks of code, so it's not that hard for me to see the resolutions to all of them in one big diff.

Edit: I'm assuming the commits will be squashed at the end of the review. If you're preserving the full commit history, then I can understand why you'd want the atomic commits for each note addressed.


No problem, I understand; it took me a while before I ended up sticking with (1).

I still think it's much easier for the author to make those changes and attach a commit ID to the relevant comment than it is the other way around.

As the article states, respect the code reviewer's time. If I've put in some effort to point out issues, and you don't even spend the effort to address each comment individually, then you are not respecting the reviewer's time.

Maybe there are some cases where you are right, but as a general rule I'll just always follow (1) as an author, as it honestly doesn't cost me much time and makes my reviewer's life easier.


Do you make an additional commit for each review comment? As opposed to just amending the old commit. Why?


You can squash into a single commit at the end

Creating a fresh commit for each comment provides an easily clickable diff link.


TBH it sounds to me as if you review changes that are way too large if there are 10 comments requesting changes that aren't clearly attached to a small code location (and thus can be reviewed through a line- or section-specific comment, making the individual change easy to see).


I do agree, but some features are complex, and inevitably PRs will sometimes be large. Using this tactic helps keep things in check even then.

I just want to be able to open up a review, click a commit ID to view a diff, and be able to move on.

Any additional steps are unnecessary friction, I feel.


This is some great advice, although I'm not completely sold on number 8 and number 9.

For number eight: If your reviewer is being a douche, I don't think there's harm in pointing it out, respectfully. Nothing says that both sides can't learn from the review.

As for number nine: I'm not sure adding comments is going to completely help; I'd rather refactor to make the code as explicit as possible, including the tests[1]. But more importantly, I don't think that sacrificing possibly more idiomatic code for the sake of beginners is a great trade-off. I think it's better to encourage people to get more familiar with the language and technology being used.

[1]: Which is usually my first stop for when I might not understand something in the code itself.


Number 13 inspires a question: I wonder if there is any code review tool that will tell you how many other (open) reviews a potential reviewer is already on when you're about to add them to yours. I haven't seen it in any tools, and it would be a nice addition...


Great list, but I'd argue that point 6 is more often indicative of tooling failure. Why is it hard to review something that changes both formatting and the code? In the example it is because the tooling is visualizing both kinds of changes in the same way. But there is no fundamental reason for why that needs to be the case.

The same applies to refactoring. It seems perfectly doable for a code review tool to detect code motion and to treat code that moved unchanged differently from the parts that were modified.


I agree tooling should be improved, but the change should be legible in more than one tool. Chances are good that someone will end up looking at the diff in the future in their favorite IDE, on the command line, or in who knows what other tool.

The simplest and clearest solution is to split out the changes in separate commits.


It depends on how aggressively your reformatter changes your code. For example, Prettier[0] will take this JavaScript line:

  document.addEventListener('keydown', e => { console.log(e) });
and convert it to this, which changes both whitespace and syntax:

  document.addEventListener("keydown", (e) => {
    console.log(e);
  });
I don't think there are any diff tools that can hide the differences there, even though they're superficial.

[0] https://prettier.io/


You could try https://github.com/josephfrazier/prettier-diff , but it's inconvenient to ask members of your team to download an entirely separate diff tool


Oh, that's interesting. I didn't realize that existed. Thanks for the pointer!

Personally, I wouldn't want a niche diff tool globally wrapping my diff views. I'd worry too much about missing something important.

And I agree that it's also a tough sell to get your teammates to all use this one particular diff tool.


Better tooling is definitely feasible, but conflating formatting and semantic changes also messes up blame for the affected lines and often makes a line's history misleading.


Blame is also a tool. If a reasonable change messes up blame, that's a tooling problem :)

For example, git blame can ignore whitespace, detect code movement inside a file, and even detect code movement across files. There's no reason that has to be the limit of how smart blame can be.
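
Concretely, a few of those knobs already exist (the file path is just a placeholder):

  # ignore whitespace-only changes when assigning blame
  git blame -w src/app.js

  # additionally detect lines moved within the file (-M)
  # and lines moved or copied from other files (-C)
  git blame -w -M -C src/app.js

  # skip formatting-only commits entirely (one SHA per line in the file)
  git blame --ignore-revs-file .git-blame-ignore-revs src/app.js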


The problem I've found with this stance is that no one ever changes formatting for fear of "messing up blame for the affected lines," and so no reformatting ever gets done. At some point, take the leap and do it (assuming you're not completely changing the formatting of the entire file/project/what-have-you).


Unnecessary refactoring and whitespace changes are hard fails for my code reviews. There are exceptions where refactoring is essentially intertwined with the change being performed, but they are rare.

I need to be able to see the intent and minimal definition of a change. Anything that obscures that is obnoxious not just to the reviewer but to everyone after who might have to merge or understand the change. Perhaps it's because I manage code that is "high risk", where "I can't understand your change" equates to "it can't go into production", that it makes a difference.

Apart from that, it also ruins the chance to cherry pick / separately merge parts of the change.

To me this is pure laziness, selfishness, and lack of discipline on the developer's part.


> Why is it hard to review something that changes both formatting and the code?

The opposite problem is also a tooling problem: why is it hard to save your changes to the code, but not the formatting?


This applies beyond whitespace changes to larger non-functional changes as well, like moving a function up a layer to make it available elsewhere before actually using it there. At that point, the "automatic detection tool" is whether the current tests still pass without having to change them.
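
A cheap way to apply that check (the branch name and test command are hypothetical):

  # check out the branch containing only the non-functional move
  git checkout move-helper-up-a-layer

  # run the unmodified test suite; if it's still green, the change is
  # behavior-preserving as far as your tests can tell
  npm test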


Just complain to management that your reviewer is being mean and that you need to get this feature deployed right away for growth. Promise to monitor closely in production and go home.


A bit of a controversial one: give your reviewer one obvious (but minor) mistake to catch. For some reason, it feels really unsatisfying to a lot of people to conduct an extensive code review and come up with exactly zero items to change.

This is likely more relevant when you present something to a business leader who feels they have to contribute something, but it can also apply to inexperienced code reviewers.


I've toyed with this idea too, but for a slightly different purpose.

When junior developers join my team, I find that they're often shy about critiquing anyone else's code because they assume they're the least skilled person, so they couldn't possibly find anything to be improved in anyone else's code.

I've considered sending developers like this a code review and including the note, "And just so you feel comfortable suggesting improvements, I've intentionally left in one mistake (not necessarily a functional bug)."

I've never done it because I worry it comes across as condescending. Like, "Obviously I'm perfect, but to give you something to do, I added a mistake, and now it's your job to go digging around for it."

Instead, I try to reassure junior developers that they should feel comfortable making suggestions and asking questions in the code review, even if they're not finding things that are officially "bugs."


I'm not sure that's particularly great for confidence. It will feel like a test, and they will feel dreadful if they don't find it. The whole process will be very stressful.


This is a very interesting idea to me.

Over the years I've noticed that reviewing the same code knowing a bug is there increases my chances of finding it, compared to not knowing a bug exists.

It would be interesting to measure whether introducing a "catch" in every PR increases the number of times the reviewer finds additional issues.


I think a change of framing might be helpful here. Put in multiple bugs, and call it "training" instead. If you have multiple junior developers, have them all do it and compare results, reinforcing the idea of multiple reviews.


Just put the mistake in quietly, and if they miss it, notice it yourself later or ask someone else to comment on it


> For some reason, it feels really unsatisfying to a lot of people to conduct an extensive code review, and come up with exactly zero items to change.

You should fix that instead of trying to hack around this. Code review shouldn't be a nitpicking party for people that need to drum up their egos.


I'm not sure this is controversial so much as it is just terrible advice!

Are you really suggesting that developers purposely introduce bugs into a code base just to keep reviewers on their toes?

It may seem like a relatively harmless idea in the small, but in the large, imagine hundreds or thousands of developers taking this advice, and purposely committing bugs. Yikes.


No one said to commit bugs.


If it's in a code review, it's been committed to some branch in source control. If it makes it past the code review, it's now a part of your product.

If people want to use this approach as a training exercise (as mtlynch suggested in a sibling post), that's one thing, but that's not what OP suggested, and what OP suggested seems like pretty risky advice.


If the deliberate mistake makes it past code review then you just remove it before merging anyway. A workflow where you are not allowed to fix anything in your own PRs before merging them unless they are explicitly requested by a reviewer would be very strange.


Feels condescendingly manipulative ("I know your ego/self-esteem are low, let me throw you a bone"), and it would just be more work for everyone: they have to articulate the problem they find, and regardless of whether they catch it, you now have to remember to go back and fix it before merging.


I thought this was a dupe, but it turns out this received lively discussion on reddit 5 days ago: https://www.reddit.com/r/programming/comments/k5cbln/how_to_...


I find myself nodding in agreement with points 6 and 7 (about mixed and large changelists), but unsure of how to implement the suggested fix.

Given that I don't control the tools used, how do I ensure that functional changes aren't lost amidst formatting changes? I would love for BitBucket to ignore formatting changes in my PRs, but alas, no.


I can only speak from using GitHub (which I'm not overall very happy with anyway), but PRs on my team can recommend review either "by file" or "by commit", with a very strong preference for the latter.

"By file" means that the reviewer should look at the diff between the branch base and tip, ignoring the intermediate changes along the way. If the individual commits really don't matter, "by file" suggests that we ought to squash this branch into a single commit; and if they do matter, something prevented us from providing a proper, clean narrative. "By file" is a smell.

"By commit" is what the article assumes. Rather than look at the sum total of changes, you look at what each commit individually changed.

In GitHub, "by file" corresponds to reviewing by the "Files changed" tab, while "by commit" corresponds to reviewing by the "Commits" tab (and going through each commit in order).

At the command-line, "by file" corresponds to `git diff develop..feature`, and "by commit" corresponds to `git show --reverse develop..feature` (which shows all selected commits in order from least to most recent).

> Given that I don't control the tools used, how do I ensure that functional changes aren't lost amidst formatting changes?

Given the above, you can and should isolate those kinds of changes into separate commits, so that the functional changes are the only visible changes in their respective commits.
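
If the formatting and functional edits are already tangled in your working tree, `git add -p` can split them after the fact (the file name and messages are made up):

  # stage only the functional hunks, answering y/n hunk by hunk
  git add -p src/form.js
  git commit -m "Validate email field before submit"

  # everything left behind is formatting-only
  git add src/form.js
  git commit -m "Reformat src/form.js"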


Sadly, it doesn't appear BitBucket has similar options, but good notes anyway, thank you!


BitBucket's documentation shows similar tabs -- is this not what you see?

https://confluence.atlassian.com/bitbucketserver/review-and-...

The section on "View a single commit within a pull request" shows something very similar to GitHub's "Commits" tab.


Oh! I see now. I misunderstood what you were describing.

Yes, okay, I do sometimes click through and look at each commit; I just never thought about defaulting to that, or recommending it.

Extra thanks!


Enjoyed the illustrations


Thanks! I work with an illustrator named Loraine Yow, who's fantastic.[0] I wrote a blog post a few years ago explaining how I found her and the process we use to create the illustrations together.[1]

[0] https://www.lolo-ology.com/

[1] https://mtlynch.io/how-to-hire-a-cartoonist/


I didn't recognise your name - it just so happens the only other post of yours I've read is [1] https://mtlynch.io/how-to-hire-a-cartoonist/!


The title of the article makes it seem like having someone fall in love with you is a desirable outcome.

> your reviewer will literally fall in love with you

No, author, no they will not. They might appreciate my professionalism and focus on correctness if I follow this advice though.


Stop giving me 5000-line PRs. I get that you are doing the jobs of five people, and maybe you haven't mellowed on the Adderall yet, but I hate these reviews and I'm not doing as good a job as I otherwise could.



This only works when the company actually cares and allocates time to improve code quality. It's not workable in companies where you are expected to add 2-3 features daily.


If you are expected to add 2-3 features daily, first, look for a new job.

If they stick to it, though, you're basically forced to reduce each feature's scope to the bare minimum. So instead of "implement this form", you'd have tasks like "implement this form field".

I don't know what situation you're in, but to me it sounds like they're steering you towards committing to as little as possible and gaming the system to meet some arbitrary, counter-productive productivity goal.


At a more agile org, getting 2-3 stories done in one day isn't too extraordinary. A PR that follows some of the rules from the OP (especially 5/6/7) might only take 5-10 minutes to review completely.

If you can't slow down enough to do that, or your stories are so large they require more attention, that might be a red flag situation? Writing code faster than you can review it will eventually catch up to the team/org.


Maybe at a tiny startup org that doesn't have many users or sensitive data or money.


Also only works when a team has a network of trust.

Every company I have worked for in my 20-plus years has had a lead/architect/CTO/boss with everyone reporting to that single person. Reviews, when they happen, take many weeks.


How to make code reviewee fall in love with you:

1. Make a list of your points

2. Throw away half of them

3. Publish the rest


This is marvelous


This is good, and I would like to send it to a few junior engineers that I mentor. However, some of them are girls, and the title, unfortunately, makes it a bit unprofessional to send out. I am really nitpicking here, but please don't title your blog posts like this. "How to Impress Your Code Reviewer" might have been a better, safer title.


assumes your reviewer is competent


Coming to Netflix, Christmas 2021


How can you tell someone is wrong in programming?

Wrong outputs are caught by tests, and you could do time studies.

However, that's after the fact. With abstraction, there isn't a correct answer from the code alone (outside possibly nested loops).



