Ask HN: How do you review code?
357 points by skewart on April 3, 2016 | 140 comments
I'm hoping to find ways to improve the code review process at the company where I work.

My team has a fairly standard github PR-based process. When you have some code you want to merge into the master branch you open a PR, ask another developer or two to review it, address any comments they have, and then wait for one of the reviewers to give it an LGTM (looks good to me).

The problem is that there can be a lot of lag between asking someone to review the PR and them actually doing it, or between addressing comments and them taking another look. Worst of all, you never really know how long things will take, so it's hard to know whether you should switch gears for the rest of the day or not.

Over time we've gotten used to communicating a lot, and being shameless about pestering people who are less communicative. But it's hard for new team members to get used to this, and even the informal solution of just communicating a ton isn't perfect and probably won't scale well.

So, has anyone else run into similar problems? Do you have a different or better process for doing code reviews? As much as this seems like a culture issue, are there any tools that might be helpful?




We had a similar problem and have made a lot of progress by reducing the size of changes submitted for review. We used to submit code only when we thought a feature was complete, and the median review was probably 1000 lines. Now we're closer to 100-200.

This has a ton of positive effects:

  - Reviewing code is much less daunting
  - Reviewers give more detailed feedback
  - Developers get feedback on lower layers (such as model definition) before writing a ton of code that depends on it
  - Reviews come quickly enough that developers don't need to switch to another project before continuing
It was a bit difficult to make this transition. We started out by explaining what we wanted to do (submit shorter diffs) and why, and people generally were on board. But most people didn't know how to do it. It's not as simple as submitting a diff whenever you write 100 lines of code, since the diffs need to be coherent. Here are some techniques we used:

  - Use feature flags to commit unfinished features to master/trunk without exposing them
  - Submit model changes as their own diffs, then API changes, then UI changes
  - As you work on a feature, pick out parts that seem ready for review, stage and commit them, send for review, then branch again from that point and continue working.  This takes a decent amount of git-fu, so train folks accordingly.
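For example, a rough sketch of that last flow (branch and file names are hypothetical):

    # carve a reviewable slice out of work in progress
    git checkout -b user-model master     # branch for the first reviewable piece
    git add app/models/user.rb            # stage only the files belonging to this slice
    git commit -m "Add User model"
    git push origin user-model            # open a PR from this branch and ask for review
    git checkout -b user-api user-model   # keep building on top of the not-yet-reviewed branch
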
Another thing to note is that our reviews are not at all evenly distributed around the team. We have a team of 13 engineers, and it's probably an 80-20 split: 80% of reviews are done by 20% of reviewers.


I agree it's important to reduce code review sizes, but I try to avoid reviews that end up leaving incomplete code in production, such as individual domain changes sitting behind feature flags. If a feature is cancelled before it's completed there will be unused code checked in - which to me is technical debt.

I prefer to split domain changes into different commits on the pull request so they can be inspected individually - eg #1 Create API, #2 Consume API with model, #3 View Logic. And for feature flags, try to figure out a fully implemented subset of the feature - eg show new model properties to the user that will become editable later.


Feature flags are a huge win for building reliable systems. You want them. Incomplete implementations should be modular, like complete implementations. Don't fool yourself, all code is tech debt. If you aren't writing modular PRs, you are in for a world of pain down the road anyway.


Also don't allow commits which break "git bisect". There are two ways to deal with this, either force all commits to be complete, or document that some patches will need to be squashed together before they are committed to the upstream repository.
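For illustration, both options map onto fairly standard git commands (the test command is a placeholder for whatever your build runs):

    # option 1: verify every commit builds and passes tests before pushing
    git rebase -i --exec "make test" origin/master

    # option 2: squash work-in-progress commits together before they reach upstream
    git rebase -i origin/master   # mark the incomplete commits as "squash" or "fixup"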


This could be avoided by not merging into your master branch, and having a feature branch from which subsequent branches are made. Then the dev can open pull requests to merge the tertiary branch into the secondary one without affecting master.
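In git terms, that layering might look something like this (branch names are hypothetical):

    git checkout -b feature/search master                    # long-lived feature branch
    git checkout -b feature/search-indexing feature/search   # branch for one piece of the work
    # ...work, then open a PR from feature/search-indexing into feature/search,
    # and only merge feature/search into master once the whole feature is done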


If your team uses feature flags (and it should), the only healthy thing to do is to set a deadline for each. When the deadline comes around, the code should be deleted, or a very valid reason should be brought up to extend the deadline.


Your last 3 points, especially the last one, seem to require a lot of discipline to pull off.

There's an article on IBM's developerWorks that says code reviews are most effective when devs are reviewing 200-400 lines of code at a time. However, when you have team members spread around the globe, it's difficult to get quick feedback, and obviously people shouldn't sit around blocked waiting for it. Some people on my team will also open up a PR really early. What I'm saying is, there are a lot of roadblocks to reviewing the code of team members many time zones away and still giving them the feedback they need in a timely manner. How would you adjust the approach described above to fit team members that are 7-8 time zones apart?

http://www.ibm.com/developerworks/rational/library/11-proven...


I'm based in LA and work with a team in Germany. I think we're most productive when we devote the morning to reviewing code and the afternoon to writing code that can be put into a PR by EOD. This way nobody is blocked. It does require a ton of discipline though because if you don't do the morning review you will fall behind and basically get stuck perpetually reviewing PRs without any time for writing code.


Is that review lopsidedness on purpose? IE, do devs go in and choose to do an arbitrary set of reviews when they feel like it? or are they assigned?

The one code-review-heavy team I worked on did a modified round-robin style. The team wiki had a list of all devs and a count of how many reviews they'd done during the current (3-month) phase. You go in, choose a reviewer with the lowest count, increment their count, then assign them your review. This way, they aren't available again until everyone else who was at the same count has also done a review.

You could still assign big reviews to senior devs and small reviews to juniors, because there would usually be multiple people with the "lowest" count -- ideally, at some point every dev would have 10 and would then gradually all increment to 11.


Wowz why hasn't anyone automated the process of counting reviews?


We've achieved a similar thing, but through the use of feature branches. Tickets are branched off a feature branch and merge requests are made back into it, which are reviewed one by one.

There is still a final review when merging the feature branch into the upstream branches, but it's merely a scan and not an in-depth review.


Have you had any issues with parallel features overlapping in their changes? I.e., messy merge conflicts in models after two teams took base functionality in different directions?


  Please manually add newlines to your blockquotes,
  otherwise your comments are near-unreadable on
  touch-screen devices. And they'll be much better
  on mouse-driven ones, too.


^ Skimming through the page I thought this was an example of a bad PR comment ;)


What you describe is:

* There's a group of people, and at least one is needed to review.

* The reviewer has to decide to either make an effort or freeride by letting someone else complete the review.

* If no review is done, then the group cannot benefit from progress.

That in game theory can be represented through a model game: the volunteer's dilemma.

https://en.wikipedia.org/wiki/Volunteer%27s_dilemma

The fact that nobody is reviewing the pull requests reflects that your team members are competing against each other, to the detriment of the group.

What you need to do is to redefine the PR process so there's a main assignee (use the github assignee feature), and that is the person accountable for completing the code review. If you diffuse responsibility among competitive people, the group loses.

The same happens with technical debt. If the group is competitive and managers are non-technical, fixing technical debt is not rewarded, leaving it to volunteering. Technical debt will continue to grow until a person volunteers to fix it. That person most likely won't be rewarded at all. They will only make it easier for their competitors to compete.

Unfortunately this also denotes a crappy culture. I don't really recommend spending your career in such a shit culture.


Even worse, working on technical debt may be viewed as a negative thing ("if it's working why did you change it?" said the phb).

I've yet to find a code review process beyond a trivial implementation that scaled. The best results I've had were with a rotating buddy system. (Every week/month) you review someone else's code and they yours.


There are some static analyzer metrics as well as metrics you can obtain yourself. These metrics relate to the cost of maintaining code:

- cyclomatic complexity

- statement length, function length

- number of singletons / global instances

- dead code

- duplicated code (intelliJ has a good duplicate detector that anonymizes variables and ignores whitespace and comments).

- number of known functions/types per file (coupling)

- number of commits per file (git-effort from git-extras)

Some of them are tricky to obtain. Making a simple script that parses import statements and outputs a DOT digraph is useful for mapping dependencies, which helps explain coupling.
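As a rough sketch of that last idea (assuming Python-style imports under src/; other languages would need different matching):

    {
      echo 'digraph deps {'
      grep -R --include='*.py' -E '^(import|from) ' src/ \
        | sed -e 's|^src/||' -e 's|\.py:| |' -e 's|/|.|g' \
        | awk '{ print "  \"" $1 "\" -> \"" $3 "\";" }' \
        | sort -u
      echo '}'
    } > deps.dot    # render with: dot -Tpng deps.dot -o deps.png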


I disagree that it is necessarily toxic. We have a similar issue where I work and I feel our collaborative culture is fantastic. What gets in our way is you can't expect someone to drop the code they are writing and context switch every time a CR is needed. So the CR happens at the next available context switch from whoever context switches first. Sometimes that means CRs are near instantly picked up. Sometimes they are picked up the next morning, usually because it is possible that the CR request got lost in the noise of our chat room. This will get caught in the morning stand up.


True, this is not necessarily toxic. Sorry for the emotionally loaded addendum.


Exactly. If reviews are part of the process, then they're someone's job. Either someone isn't doing their job or management hasn't made it clear what their job is. Both have easy solutions.


However if you have strong collaborators it's not necessary to intervene.

Basically competitive people identify their utility function and start working towards maximizing that. You need someone with authority to define their payoff matrix, making it explicit that they are required to collaborate to "win".


I think you're over-analysing "do your job or you're fired".


I'm the OP. For the record, people on my team do review PRs. And we typically do assign PRs to people in github. It's just that we don't have a very structured process beyond that, which leads to some mildly annoying ambiguity at times. I'd say the situation is basically what sethammons described in a sibling reply.

That said, I really appreciate your comment. It seems like a lot of other people have faced similar issues with CR, and I'm sure some people are facing a situation where teammates resist reviewing PRs. Also, game theory is a great lens through which to look at the situation. We probably could incentivise communication around CR better.


Our code review policy requires a minimum of 2 people who did not contribute to your changeset to review and approve your code prior to merging. We've been able to reduce the amount of time a review takes in four big ways:

(1) Automated style checker. By adding a pre-commit hook to validate code style in an automated way, the amount of noise in PRs went down dramatically. (A minimal sketch of such a hook appears at the end of this comment.)

(2) Don't do architecture review in PRs. If you're going to make architectural changes, do so in a lightweight design document and get approval there first.

(3) If it languishes, go and physically (or virtually) ask the specific people to take a look. This works best in my experience when you find someone else who's having trouble getting their code reviewed and offering a trade. This person is then likely to review your code quickly in the future if you do the same.

(4) Feature flags allow you to get incomplete but atomic and correct units of work merged to master without showing them to your end-user.

This way, code reviews are not about style (much), not about architecture (much) and simply about making sure you executed on the plan correctly. Turn-around times went from days to hours in many cases.
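To make (1) concrete, a minimal version of such a hook could look like this (a sketch assuming an ESLint-based JavaScript project; swap in whatever linter your stack uses):

    #!/bin/sh
    # .git/hooks/pre-commit - refuse the commit if the linter is unhappy
    ./node_modules/.bin/eslint . || {
        echo "Style check failed; fix the issues above before committing." >&2
        exit 1
    }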


> (2) Don't do architecture review in PRs.

The issue I'm finding in reviews is often that I end up with a summary of "this problem can't be solved nicely on the current architecture of this module so there should have been a redesign first". Such a review is essentially saying "start over from the beginning" and it's a huge waste of time. Often you just don't respond with that, instead making a mental note to fix the technical debt, and you do a superficial review of what was actually written.

(In this scenario the developer is inexperienced or not familiar with the code base, so they wouldn't see the architectural problem, and instead try to implement the feature in anger on the current architecture.)

One could require the lightweight design document approval first for all issues, but that would be a huge overhead. I have banged my head against this problem for years.


Light document is exactly how I do it. I do mostly backend and API design, and usually when it is up to me I flesh out a spec (one page, after discussion with the main clients: mobile devs, web devs, and so on), then have all stakeholders ok it. Surprisingly, when it's one page of prose everyone reads it pretty quickly. As long as the end result code matches that spec, the other technical comments are not that hard to deal with and usually easily reversible should I merge and then realize there was a mistake.


+1 to design docs and feature flags. We started to keep design docs in the repo, too, and have the design review process work the same way (if slower) that code review does. So far so good.


I think automated style checkers are great too. One problem I have with them is that they fail the build, which has poor locality with the problem. Instead, I wanted the failures to show up as comments on the offending line of code. That's why I wrote an automatic style checker that does exactly that. You can run it without having it baked into the company process (just from the CLI), which was important for me at the time. You can find it here: https://github.com/justinabrahms/imhotep

I also wrote a service version of it at https://betterdiff.com/. One relies on the other, so the CLI app will have the same feature set, just not automatic. Several people use the CLI app from jenkins or similar. :)


> (1) Automated style checker. By adding a pre-commit hook to validate code style in an automated way, the amount of noise in PRs went down dramatically.

Would you mind sharing your precommit hook? We have a simple one that prevents console.log etc., but nothing super fancy.


http://pre-commit.com/ might help you out.


Here's mine

    grunt eslint
    npm test
I'm using eslint. Example: http://github.com/greggman/twgl.js/


At a place I worked at, we did over-the-shoulder (OTS) code reviews. You go to someone in person, ask them to do an OTS, and you go over the code with them, explaining things as needed. It meant most code reviews took about 5-10 minutes and were pretty fast.

People didn't review as deeply as with the traditional async process, but code reviews were never an issue that delayed people, and they took about as much interruption time as someone asking you a question about something.

The downside is it didn't scale past a single office or with remote workers, and it's hard to do things like automatic blocking reviewers.

----------

Now I work at a place that does traditional phabricator code reviews and has all the problems you mention. But people do review more thoroughly and it does scale past offices.

Personally I deal with it by messaging people asking them to do a review after I post it, and talk about how it's important that we get reviews done quickly during social situations.

Everyone gets annoyed how long reviews take when this is the case, so it's easy to agree about. I also offer to give people code reviews when they inadvertently complain about how long code reviews take, and they start to appreciate a helpful & quick attitude to code reviews and start to reciprocate.


I also worked at a place where we did OTS reviews. In my experience it was really good and it was easier getting deeper/more thorough feedback, since you're basically forced to tell what your code is doing etc.

I'm really curious why you feel people review more thoroughly when doing an async CR. Do you think there was an issue with people not feeling comfortable criticizing in person, or feeling like they'd just say "go on" even when they didn't understand a code block, because they didn't want to seem unproductive by making the other person wait, or something like that?


I had a bit of a problem with OTS reviews b/c not everyone is as professional when it's not in an open forum. We used GitLab merge requests with much better results.

It's kinda hard to describe in words - but programming is an opinionated profession and people have a bad habit of telling you that you should rewrite stuff b/c of code-style differences and what is, in essence, non-substantive feedback. They want everyone's code to look the way they would write it. Not to say it's always without merit, but it's honestly kinda irritating after a while b/c you feel you are being micromanaged and are losing ownership of your work (when reviewed by someone who is senior to you).


You should be coding in the official company coding style--or one should be written for you to follow. In my experience supervising junior devs, reading code is like driving down a highway. When you submit code that is written out of sync with 99% of the entire company codebase, it's like going 60mph and then suddenly the highway becomes a bumpy, rocky dirt field. It's not really an opinion thing, it's a consistency thing. The time for opinions ended a long time ago if you are not the first programmer.

You wouldn't join a law firm and start ignoring their legal document guidelines, because their senior lawyers need to read your written documents at the same fast speed they read everyone else's. We can spot your real mistakes a lot faster if you stop cluttering the codebase with the numerous cosmetic ones. Your seniors are pissed because you are paving a dirt road for them to drive on. You shouldn't be upset that they are asking you to do the bare minimum. If they don't have a formal style guide though, then they deserve the dirt road.


well said


It was just a trend I noticed, or maybe a difference in cultures. I notice my coworkers sometimes spending 15-20 minutes going over my code in phab, while at my past place they didn't dive as deep into the code. People also don't feel as comfortable bringing up pedantic crap in person as they do over a web interface, as you said.

I think with OTS you're going at the pace of the presenter, while with an async code diff the reviewer is going at their own pace, can hop back and forth between things, and so on. It's also hard to keep track of things you have to change with an OTS, while with async code reviews the request list is right there already.


Sometimes we do group or individual OTS style CRs (new term for me, "OTS"), but with a twist. The reviewer walks through and describes the code, not the one who wrote it.


We do both here.

For some code, eg shared libraries, we're distributed so it has to be via PR.

Better than that though is OTS review, which is what we practise on our projects where we're all in the same cubicles.

Basically, when we're ready to commit code, we call another team member over and walk through the changes. Besides catching bugs, this often results in refactoring - things like renaming stuff to make it understandable (one of the most crucial problems in coding) and extracting methods. This is so much more effective and less time consuming than the typical PR process.


We use a mixed approach. In most cases, we do OTS as stage 1, and then the developer can merge their individual branch into the develop branch. Later, a reviewer will do an in-depth review independently.

For complex/critical logic, on the other hand, the reviewer will try to think through a general solution before looking at the code, and then poke around the code to find flaws.


Reviewing is switching gears as well. When you're coding you can't just jump into code review mode for a moment. It's not a culture issue; this is IMHO inevitable for any work environment that requires deep concentration and immersion in the work inside your head.

So you'll want to batch the reviews somehow, based on your schedule. On the other hand, as a programmer, you don't want to be blocked by code reviews. So you'll want to finish a somewhat self-contained set of changes, push them out for a review, and continue on top of those locally. If the reviews suggest new modifications, you'll do those and rebase your local new work on top of those. It doesn't matter that much how the changes are reviewed as long as it's not blocking work. There should be an emergency vent for the cases when unreviewed changes do block significant other changes.
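A minimal sketch of that "continue on top of pending reviews" flow (branch names are made up):

    git checkout -b step-1 master        # first self-contained change, pushed for review
    git checkout -b step-2 step-1        # continue the next chunk on top of it locally
    # review comes back with comments on step-1:
    git checkout step-1
    git commit -am "Address review comments"
    git push origin step-1               # update the open review
    git rebase step-1 step-2             # carry the follow-up work onto the updated step-1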


I've made the same argument about the need for a programmer to not be disturbed. A colleague of mine has eloquently compared it to juggling: you start with one ball, add another and it gets complex over time. Disturb him and he starts over with one ball.

In my experience as a programmer I've grown to think that this argument only holds partial value. An observation that helped with this is that programmers hate to be disturbed, but can always manage to be on time for lunch or beers on Friday (it's a Dutch corporate tradition to drink a few beers after the workday on Friday).

I find myself, and others, to be capable of reviewing a PR within an hour. I just need to be aware that when I'm done with a coherent unit of code, that I should switch to review mode and check if there are outstanding PR's. To reuse the metaphor: I've stopped juggling and can choose which set of balls to pick up next.

So I do think it is a cultural thing. You need to commit yourself to have reviewing be a part of the work process. I find IM services and PR's to cater perfectly to the way the usual programmer works. He can ignore the message while he's mentally keeping 5 balls in the air and look through his messages when he's ready to start his next activity of the day.


Everyone is winding down on a Friday. After Friday noon, everyone avoids starting anything that will take hours. That's why they can leave the office.

On the other hand, when we've had company parties at the office, I've seen people go back to programming after a couple of beers because they just had something on their mind that they need to finish.

People can certainly review something within an hour or so, but only sometimes. If you just had another interruption your juggling balls are still on the table and you haven't picked them up yet. So this is when you can finish a code review ten minutes after someone requested it. But if the request came in a couple of hours later, you wouldn't be able to take a look at it until the next day.

Culture can affect whether programmers are able to get long stretches at all. If they aren't, then they're obviously more available to short-term tasks such as code reviews. On the other hand, I know people who work best when left alone for days. There's no asking for a code review if the guy has been juggling since Wednesday because if you do he'll have to spend the next week as well working on the same task.


"Culture can affect whether programmers are able to get long stretches at all. If they aren't, then they're obviously more available to short-term tasks such as code reviews."

Good point. I do like to distinguish between tasks of an architectural level, or at an implementation level. An architecture task can take a few days, an implementation task can take an hour. I find myself to be able to think only on one level at a given time. Doing code review while in my lead and architectural position is very hard, as that is on an implementation level. I also feel the need to lock myself away from the team to "think technical thoughts" when I have an architectural task to complete. I usually plan a meeting before I start on such a task, where I can present the design and discuss it. It's the architectural equivalent of a code review.

I do find it important to differentiate between the two types. Stories that are all about implementation can usually be split up into smaller tasks. A rule of thumb is that you should be able to finish a task in a day. This keeps people from repeating the same things at a standup (the same story as yesterday) and keeps tasks small enough that you can do code review on them. A culture reflects how the company is structured and vice versa.

Is this kind of stuff explicitly planned at your company, or do you guys use a more laissez-faire kind of approach where everybody seems to find their own way to contribute to the whole?


"Everyone is winding down during a Friday. After Friday noon, everyone is avoiding to start anything that will take hours. That's why they can leave the office."

Yeah, that's true...but it doesn't explain why programmers are almost never late for lunch. That's also been true, in my experience. And moreover, it's nearly universally true that the folks who complain the loudest about interruptions are the same ones who are johnny-on-the-spot for coffee-break, games, etc.

In fact, a more cynical person might say that it's all about avoiding unwanted work. Ahem.

Whether or not you believe me, the important point is that yes, it's a decision of culture: at some point, you establish a culture for your team. If that culture includes a maximum latency for code-reviews, then the people who don't like to do non-coding work for days at a time can go code in a cave somewhere else. People who can't be "interrupted" with reasonable non-coding communication activities tend not to be very good collaborators anyway, and probably aren't a good fit for a growing team.


For me doing code review is not just looking at diffs in git.

Things that I can review by looking at diffs are not worth reviewing.

I have to at least check out a different branch, probably change my configs (while stashing or saving a copy of my current working context), and rebuild the solution to make code navigation work so I can at least navigate around. Then I start running variables and code paths in my mind, which, by changing context, pushes my previous context out of my head.

Going for lunch requires locking my computer, maybe taking some money. Then I eventually talk about stuff that is not a heavy cognitive load, just generic stuff.


"it's nearly universally true that the folks who complain the loudest about interruptions are the same ones who are johnny-on-the-spot for coffee-break, games, etc."

I'm assuming here, but that sounds like people who appreciate a lot of structure in their working day. I can imagine them responding very negatively to something as fluid as reviewing whenever they make the time.

My experience is, that it's the type of personality of solid programmers with very, very high attention to low level detail. High level stuff on the other hand, like planning or architecture is not their forte. They need help from their team for that, maybe even let someone else make those decisions for them. I want those guys and gals on my team, though, as I'm more of a high level guy myself.

Has your company tried to accommodate that type of personality, or do you weed them out during interviews or probation? I'd hate to see you guys miss out on that talent. I'd try setting a rule, like: first thing after standup and lunch is code review. This allows for structure and responsive code review.

I'm interested in what you think about this, as you seem to have a strong opinion on the matter, backed by experience.


Well don't you just sound like a charming guy to work for.


> I just need to be aware that when I'm done with a coherent unit of code, that I should switch to review mode and check if there are outstanding PR's

This is exactly what I have ended up doing. I'm not always successful (perhaps because it is not always possible), but I've definitely improved my reactivity.


We had a similar problem - a lot of pull requests getting stuck in a back-and-forth of remarks and fixes. The "social" solution was to increase the amount of communication in the team: instead of everyone having their own tasks and working in a silo, we made multiple people (ideally, the whole team) responsible for a task, splitting work as we go. The technical solution was to use pair programming instead, at least for some changes. Both worked really well.

Pair programming achieves a similar result to code reviews (another person knows the code very well, and is able to criticize it and improve on the design). It also does it much better because the review is done as the change is being made, and not after the fact, which improves the understanding of the reviewer and allows them to give immediate feedback.

The old code review process still works as a fallback, and is especially good for small but tricky changes: I need to do something novel, it's not a big change but there are a lot of ways to do it, can you comment on my approach.

One other thing that we also ended up doing is "ping-pong pull requests": if you give me a pull request, I don't just criticize it leaving the issues for you to fix, I fix them and leave that for you to review. Then you can do the same, etc. As a result, both people get better acquainted with the code, instead of just one of them being the "owner" of a change.


We tried a few methods and eventually went to pair programming. But we never tried ping-pong pull requests. It sounds interesting. What puts me off pair programming is that it's not always feasible. If the other person who could work with you is on another task, you spend a lot of time trying to play catch-up. How well did the ping-pong work?


It works well. I think with the original pull-request process we basically had a mental block that only one person is allowed to write the code, and the reviewer can only comment. This can be frustrating especially if you know exactly what needs to be fixed. If you just go ahead and make the change, you save both people's time.

I guess one place where it fails is that it's not very educational. If you're tutoring a less experienced programmer, you don't want to fix the mess after them, you want to teach them how to do it without you the next time around. In that case, I either leave the fix to the author, or fix the issue as they watch.


I'm really curious how the ping pong PRs worked out. I've often wanted to do this - both when reviewing code and having code reviewed. Especially for relatively small changes it can be almost as much work to describe a change as it is to implement it and give a little explanation for why.


See the other comment in this thread.


+1 to the other comments about having linting and CI to reduce the nitpicks and basic verification in PRs.

One technique I've used and liked is to have the PR author create a calendar invite for the review, usually with enough lead time to be reasonable, eg the following afternoon. This isn't an old-fashioned "everyone get in a conference room" code review -- you still do it individually at your machine -- but it allocates the time.

The author puts one or two people that they would like to review as required, and the rest of the team as optional. That helps the reviewers carve out time to get it done, encourages people to RSVP as either reviewing or not, and makes for a well-understood deadline when to get your comments in by if you want to have any influence on the PR.

It's not a perfect system, but might be worth giving a try if you are having problems getting prompt sign-up for reviews.


My experience is more as a roadblock than as somebody getting code live (although I have been held up by my own cursed rules a couple of times). I'd have to say that I really appreciate the calendar approach - it gives me a time slot to think about the code, and more importantly it forces me to stop prevaricating and make a decision so that I'm no longer blocking others


At a previous company, we adopted a system similar to Google's early review process. Every repo would have two owners (for redundancy in case of sickness). We'd put the emails of both owners in an OWNERS file. Then, whenever a pull request popped up, an automated process would send out an email to the owners. It would continue badgering them, decreasing the period between reminder emails in proportion to how long the pull request had been outstanding.

This was a great, easy process that helped ensure a timely review for our pull requests.
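As an illustration only (the OWNERS format shown and the reminder script are assumptions, not the actual tooling described above):

    # OWNERS - one address per line, two owners per repo
    alice@example.com
    bob@example.com

    # crontab sketch driving a hypothetical remind-reviewers script during business hours;
    # the real process also escalated by shrinking the interval as a PR aged
    0 9-17 * * 1-5  /usr/local/bin/remind-reviewers --owners-file OWNERS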


Interesting, that sounds like it works ok for two owners. I was at a place where I was the code reviewer for C programs [1], and it was every scheduling nightmare and stuck-in-a-fishbowl situation that you would assume. This was not distributed, so I would do the review and then there would be a meeting where we went over the review. It was not fast development, but it sure was documented. One of two computer-related activities that almost got me killed [2].

1) but, interestingly, not allowed to write any since who would review mine? sigh...

2) don't review really bad (MST3K-level bad) C code in a car with two C gurus (one of which was driving)


Our approach is very simple. Each feature or bug needs to be reviewed by one person of the team, anyone with decent tech skills and enough knowledge on the language.

For such a naive approach it is fundamental to teach people how to do the reviews. Our approach is to try to make people (both reviewers and "reviewees") aware that reviewers are not meant to be judges. They are not meant to try to impose their opinions and ways of doing things either. Reviewers can always "suggest", obviously, but not mandate.

So a reviewer is expected to:

- Detect any fundamental mistakes that can cause severe malfunctioning like memory leaks, file leaks, bad performance signs and general bad smells.

- Use his or her business knowledge to flag possible missing facts in the implementation.

- Check if there are tests for the feature/bug. If there are no tests then the feature/bug is rejected.

- Check that there are enough tests to cover the functionality or bug that is being addressed.

- Check that the build passes and there is no regressions.

So, for us, the key is to make the tests become the real low level reviewer. And the reviewer needs to focus on higher level requirements instead of going around the code checking every loop and condition.

Honestly, when I do a review, I start from the tests. Those are my docs and my reviewing mates. They tell me if the feature/bug fix is reliable or needs to be reworked. Then I try to use my expertise to help the "reviewee" rather than to judge.

Would love to get feedback.


Hm, I don't believe any senior dev "suggested" a workplace out of bad/unscalable practices. And realistically, if you _review_ my code and give it 3/10, isn't that a judgement?

Part of the problem of bad code is solved by communicating as much as possible what is expected - so everyone knows what 3/10 code looks like. And importantly why 3/10 code is bad, and thus the motivations to create code that is 10/10.

This softer "reviewing" approach could definitely have indirect benefits on how criticism of code plays out. But if a workplace understands and practices respect, then everyone should be able to get on with crafting a healthy code base.

All this is underpinned by one important tenet of professionalism: good communication.


Had similar problems.

Atm our protocol works for us - maybe for you too:

   - It is expected that you do PR reviews every morning
   - It is expected that you wait for 2x +1's ([1])
   - It is expected that every comment is addressed either with a comment or code 
      - that way people can give 2 comments and give +1 and move on
Minor rule: "Code style" discussions are made explicit in code linters and shouldnt (repetitive => linter rule) happen in PRs

Minor rule: If people don't like to review a PR, the PR is too big, not properly prepared (missing explanations [2], committer comments, screenshots, etc), or both

Most important rule: PR Reviews are meant to enable the committer to do a better job - not to "code check"

Hope that helps! GL.

[1]: http://cl.ly/2M3K1I0s380m/Image%202016-04-03%20at%2019%3A04%... (2 days b/c weekend usual cycle time is 24-48h)

[2]: http://cl.ly/1w3s2y0m0T2H/Image%202016-04-03%20at%2019%3A12%...


Thank you to the parent for asking this question. I have some related questions for all engineers and engineering managers.

I'd love to hear your perspective from your current or most recent job:

- Do you think Engineering managers are aware how much time you spend on Code Reviews?

- Do you track your number of / time spent on code reviews? Does your manager?

- Are code reviews valued as much / more / less than writing new code? Why / Why not?

- How are code reviews prioritized against writing code for bug fixes, or implementing new features?

- How do you choose a code reviewer?

- Do you ensure code reviews are distributed across the team? If not, how would you do it? How do you ensure you are not doing too many reviews yourself?


OP here, you're welcome! I'm thrilled at how many awesome responses there are here.

You bring up some really important questions that I think get at the heart of the problems teams often have with code review. Even if it is valued in an abstract way, it isn't really treated like a first class coding task. It's not measured and planned in the same way that writing code is. It seems like a lot of comments here talk about ways to quantify code review and bring it onto more of an equal footing with writing.

On my team we all informally budget time for reviewing PRs, but we don't really plan and track it with any rigor. I don't think any of us have a very good idea how much time anyone else spends on code review, which can make picking a reviewer awkward.

I'm curious to hear other people's responses to your questions.


I'll start by answering based on my last gig (at a startup).

=====

> Do you think Engineering managers are aware how much time you spend on Code Reviews?

No.

> Do you track your number of / time spent on code reviews? Does your manager?

Me: Only informally (might mention it during stand-up). Manager: No.

> Are code reviews valued as much / more / less than writing new code? Why / Why not?

Valued less. Manager did not seem to value code review as a worthy task.

> How are code reviews prioritized against writing code for bug fixes, or implementing new features?

Code reviews are not planned in the work load :(. Engineers have to try to carve out time for code reviews.

> How do you choose a code reviewer?

I usually assign reviews to the engineer who knows the feature best or with whom I have the best working relationship. Sometimes I assign to the engineers who will get to my code review the soonest.

> Do you ensure code reviews are distributed across the team? If not, how would you do it? How do you ensure you are not doing too many reviews yourself?

No. Might be nice to have a team queue for code reviews and the engineering manager making sure that everybody participates in reviews.


We've tried scheduling 2-3 times per day where everyone should focus on PR review. It's quite easy to get lost in one's own tasks. This helps maintain the understanding that everyone needs to work together to move all features/bugs/chores/etc. along.

We also have a slack channel for pull requests, where everyone subscribed gets notified when someone posts a PR for review, or later for re-review.

I think timing and communication can be an issue on any team, especially with distributed resources (I'm remote). We regroup regularly as a team to discuss process, issues, and anything else on our minds. We experiment with solutions, and try again and again until something sticks. It's not always perfect the first time, but we end up with something that works for us. For now, we try to hold each other accountable to our scheduled times, and ping (and re-ping) on slack for reviews.


Like others have said, pre-commit hooks for linters are very helpful. Also, CI is a must.

These alone can significantly speed up code review since you can be fairly confident you won't break anything new. Then, you just have to focus on: "does this fix the issue that was found?"

Then, it's just a matter of breaking up issues into meaningful, modular tasks that can be taken care of without days of code review.


Ask someone to do a review? Why would you ask, other than out of politeness with the clear expectation that the answer "no" should come with a solid business reason? It's their job. If someone isn't doing their job, then management has either not made it clear to them what their job is, or management needs to ask them why they aren't doing their job. To my mind, it sounds like what you need is for management to get their act together, sort out the review process and then enforce it.

I worked somewhere where a reviewer was assigned before code was cut. Once the code was ready for review, the reviewer became the publicly visible person responsible for that particular feature/bugfix/whatever.

People who had reviews sitting on their plate for too long would find themselves being asked questions by management about why they weren't doing their job and why they were holding things up [1]. We wouldn't put up with someone getting round to doing the implementation whenever they felt like it; likewise, we wouldn't put up with someone getting round to doing the review whenever they felt like it. As it was, this never happened, because it's part of the job.

[1] There are, of course, good and bad answers to this question, much as there would be good and bad answers to the same question asked of the implementer rather than the reviewer. The answer "Oh, I just haven't got round to it, I'll get to it eventually" is as acceptable as you probably imagine.


This is a really interesting idea for me. Moving 'ownership' of the code away from the coding engineer makes sense. A similar concept to having automated test cases written by someone different to the implementer.


We have three rules which help here.

1. You cannot start a new task before all reviews have been done.
2. You need to talk to the author somehow when reviewing.
3. If you've pair programmed you don't need to review.

The first rule makes people allocate time. The second rule takes focus off minutiae and moves it more toward the issue or improvement and suggested changes, including architecture.

The last rule is there since we believe that reviewing is a natural part of the cooperation when pair-programming.


For number 1 did you mean all reviews that have been assigned to you? Because if you mean otherwise then it is impossible to follow once you hit a certain size.


This is one of the things I found most difficult/interesting as an intern.

It's really important to have the local, pre-code-review infrastructure to be able to keep developing and testing in the same direction while a code review is pending. I had very good experiences with Phabricator, which has support for "dependent diffs" so you can create further separate diffs (sort of like PRs) that build on as-yet-unreviewed diffs to avoid requests ballooning in size while awaiting review.

We pester each other incessantly on the team chat to actually do the review. We do cross-functional teams, so there's little dilution of responsibility - only 1 or 2 people are clearly qualified to review your code, and they generally do so in a timely manner.


You can create PRs against branches other than master in github, which lets you create dependent releases. Don't know if that helps at all


Initially we would just have everyone in the team on the PR, but I noticed a trend of only a couple of people actually reviewing pretty consistently and occasionally people waiting for feedback for a couple of days, even for single line changes.

I solved the lag in my team by having "merge masters": dedicated people for that week who will drop what they're working on to review. It rotates on a weekly basis. We do also include the entire team (5 devs + team lead + QA) on the PR and if anyone happens to be free they'll take a look. Our PRs tend to be very thorough and in depth and will often end up with face-to-face discussion as well.


We try to solve it in a couple of ways:

* Enable smaller PRs. Reviewing 50-150 lines of code is much less daunting. This can be done by using tools like feature flags, AB tests, github/scientist etc.

* Ask/communicate about the quality of PRs. Like: splitting big ones into multiple, writing good descriptions, including the problem and screenshots.

* Automate the trivial parts. We use Pronto (disclaimer: I'm the author - https://github.com/mmozuras/pronto), but there are other tools/services that can help achieve the same result.


Excellent tool (pronto), I've been looking for something like this!

Also looked at Hound (https://houndci.com), but it's primarily a paid product, with no easy way of running it ourselves. Pronto seems way better!


We use a standard GitHub process with a standard branching model (master and feature branches) for reviews. We require that each change is a separate PR and unless it's an emergency we require that each change is reviewed before committing. We tried dedicated tools for reviews but found them too much of an overhead.

Unlike some places with strict procedural requirements (e.g. Google with its OWNERS and readability process) we depend on an honour system which, given our size (~30 devs), seems to work rather well. We assume that doing code reviews is an important - if not the most important - thing in a dev's life, so we spare no effort in order for the code to be reviewed properly while also not blocking others' work.

However, we heavily rely on tools to ensure that human reviewers spend their time on things automated review can't figure out. These are generally linters and code smells detectors (e.g. reek in Ruby) which run on CI (Codeship). We use these helper tools to push statuses to GitHub and we block merging branches on CI passing so if tests are failing - which might even be just for a stylistic issue - human code review process would not even start. We also use a hosted code quality analyser (codebeat - which is our own product) to track quality over time, keep everyone on the same page and help devs and reviewers focus on the right things.

Overall what we found out is that code review is a learning process and very thorough reviews for new devs help them ramp up very quickly so that subsequent PRs require less and less attention.


It's up to the tech lead/scrummaster to enforce. If your code reviews are too onerous, your feature branches might be getting too big, which I think is bad practice for multiple reasons. We do branch-per-story and keep each story about as small as possible to deliver incremental value. Pull requests should be small and frequent. This leads to easier code reviews, quicker integration, tighter turnaround with QA and quicker delivery to product owners.


We used to sit down together to do the review. The reviewer set an open timeslot for a possible review; the first person to grab it sits down and together they review it. The calendar entry helps to timebox the review process, and the verbal communication speeds it up and helps to avoid conflict.

One more addition, take care of the important bits only, don't fight over coding style or such.


We use phabricator which has a much clearer flow for blocking on code reviews and you can easily see your list of diffs awaiting your review. I find the workflow much better than GitHub. That said, communication is really key, and that comes at an organizational and cultural level to define what's an appropriate amount of nagging you can do to get reviews through.


I used Phabricator at Facebook and I loved it, if only for the macros (memes).


Here's my process:

1. Open a PR when you're ready for review. The PR should have the following attributes, which make PRs fast to review and reduce the back and forth, since the reviewer has everything they need to provide a really good review.

  - Be for a single "What" and have a clear "Why". Both of these should be explicit in the commit message.
  - Preferably be a single-commit PR.
  - Preferably be less than 100 lines of changes. Keep them as small as possible. If you see a refactor opportunity while working on something, `git checkout -b do-the-refactor master` and make it a separate PR. People review +10 -3 PRs eagerly and very very fast, much faster than a +300 -200 PR that contains refactors, fixes and other jibble.

2. Drop the PR into a #channel and immediately switch to doing something else. Your assumption in an asynchronous work environment is that you are as interruptible as anyone else. This means you default to being interrupted and pick up another piece of work rather than expecting someone else to be interrupted.

3. If I don't hear back within 24 hours, assign an individual and ask them to put it in their "queue" of work for doing in the next 24 hours.

4. If there's non-LGTM feedback, then make the changes in a new commit (so it's clearly separate) and after that I jump back on some other work.

The takeaway is that I context switch in favor of not interrupting my team mates. There are plenty of times I need to interrupt them, for help, code design discussions, etc. PRs are not something I need to interrupt people with.

You might think this process means it takes a long time to merge a PR, and when looking at a single PR that's true. But I'm constantly working on 3-4 tasks at a time, interleaved, and using the above process I can still ship multiple PRs a day.


You're right that the problem is cultural at its root -- there's no one solution to "making" programmers eat their vegetables. But it helps to have many automated reminders, so that the nagging doesn't cause people to hate a single member of the team: when a reviewer is added or a review is updated, your tool should remind the reviewer(s) on a regular basis that new work is pending.

I've been working on a better review tool for teams who have built their process around GitHub pull requests (https://www.omniref.com/code_review), and one thing I've learned from talking with teams is that GitHub's notification system is just comically broken for a team review workflow -- completely overwhelming with emails, but in the wrong way (i.e. a notification for every edit). People just ignore it. So perhaps you should check out a different tool?


> GitHub's notification system is just comically broken for a team review workflow -- completely overwhelming with emails, but in the wrong way (i.e. a notification for every edit). People just ignore it.

Yup. I've found it to be pretty useless for exactly that reason.

Omniref looks awesome. In my experience one of the biggest bottlenecks for getting things done is understanding code bases - whether it's new hires, working with a new repo at your company, or digging into an open source library you use to figure out a bug. I often wish I had better context for why things were done the way they were. Integrating PR comments, and other annotations, into the code, in a way that's easy to browse, sounds incredibly helpful. I will definitely check it out.


Thanks. Let me know if you have any questions or feedback! My email is tim at omniref.


We had exactly the same problem, so we built a bot for our Engineering room in HipChat that pesters the entire room about how many pull requests are currently open, what they are, and links to them, once every 2 hours (during business hours).

It's worked really well. We've also extended that same bot to do other things like remind you about support requests, etc.


We use Jira, GitLab and Slack in our agile teams at work. I've written some automation that listens to GitLab webhooks and (a) announces new Merge Requests in the #team-review channel, which all team members are in, and (b) creates sub-tasks on the original Jira story or bug, which appears on our Jira boards.

But yeah, most of it is about setting expectations. I think our conclusion was mostly that review times under about ~24h are okay, so within that space people can organize their own time.

That said, I have been trying to get my team members to create smaller commits. I'm personally convinced that review time is exponential in commit size, such that reviewing a branch with 20 small commits is much more efficient than one squashed commit. That said, this hinges on actually doing the work to create good commits: no fixup commits, one logical change per commit, etc.
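For what it's worth, git can help keep that history clean before the review goes out (a sketch; the SHA is a placeholder):

    git commit --fixup <sha-of-the-commit-being-corrected>   # record a fix without rewriting history yet
    git rebase -i --autosquash origin/master                 # folds the fixup commits into their targets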

This has come up enough that I think I should write a blog post about it some time.


We have a slack channel for dev and people issue review requests in it. This works best when most people can review the code obviously. There's also an informal culture of trading (you promptly review my branch; I'll promptly review yours.)

We cut some corners and assume devs are ok to address minor complaints w/o further review. So you see "approved pending minor issues X, Y" which doesn't require further review.

We do something akin to pair programming for reviews, though we don't pair program at all. If you can, you sit with the reviewer and discuss. This often leads to a joint effort to fix anything the review brings up and speeds turnaround.

Finally, it helps if you get serious about a linter and, as much as possible, software enforced code style.


We use Gerrit. Gerrit is a code review system built on top of Git.

From the Git user's point of view, Gerrit is installed as a git remote. When you develop some changes, you push them to the Gerrit remote, using a special branch naming convention. Gerrit dynamically takes this push and converts it to a review item, which is then subject to a web-UI-with-email-notifications-based review workflow.
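In practice that push usually looks something like this (assuming the integration branch is master):

    git push origin HEAD:refs/for/master    # Gerrit turns this push into a review item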

A special "Change-ID:" item in the commit header identifies the change. This lets you push revised versions of the commit: if the Change-ID matches, Gerrit recognizes a push as being the new version of a previous push. These versions are organized in the review process under one item. You can see what the differences are between different versions and so on.

The review process involves multiple users in reviewer roles. They get notified via an e-mail that there is something for review. Then they can look at the changes, and approve or reject the commit. General comments can be added to the item and comments can be attached to individual lines of code, also, so you can easily say very specific things about specific pieces of code.

We have automatic reviewers who participate also. For instance, if a kernel change is proposed, then "checkpatch" is run on it automatically, and posts an approval or disapproval to the Gerrit item. The code also has to build for several targets and pass automatic smoke tests.

Commits are ultimately merged by users who have the power to do that. They take into account all the other votes and then cherry pick the change.

Gerrit is far from perfect, but it provides a decent workflow, and is a huge improvement over ad hoc development. For instance, the Change-ID can be a pain. It's assigned by Gerrit when you first push something, and then you must remember to add that same Change-ID to the new version of the commit. I wrote myself a script which generates a Change-ID; I stick it into the very first commit, so Gerrit doesn't have to make one up for me.

To use Gerrit, users have to acquire a somewhat better than average understanding of git; it's just a little bit more advanced than the "pull, commit and forget" use cases whereby you pretend that git is SVN or CVS. Things should be set up to prevent common mistakes; for instance, lock the repository from direct pushes, so only pushes to Gerrit are allowed. I would make a blank "Change-ID:" part of the commit message template, and give users a way to generate these things, to avoid the common mistake of pushing a new version of a commit without a Change-ID, causing Gerrit to generate a whole new review item.


All of the problems you mention are completely, 100% solved:

https://m.mediawiki.org/wiki/Gerrit/git-review

You're welcome ;)


The problem is that it doesn't just solve the couple of minor annoyances, but provides a whole new ball of wax.

I don't want to work with anything that automatically creates, switches or deletes branches.

I happily work with Gerrit using nothing but the regular integration branch: very simple.

If I have several independent changes, I just keep them in that same branch as if they were dependent. This does generate some nuisance rebases in Gerrit, but avoiding them isn't worth the hassle of separating the changes into different branches. If I happen to have a large number of independent changes, what I can do to avoid overwhelming the review system is to not push all of them at once. I can do a "git rebase -i" to re-order the changes in the order in which I would like them approved (say, by urgency of the issues to which they are attached). Then, piecemeal, I can push these out to Gerrit, say two or three at a time. When those are approved, push the next batch, and so on. This keeps most of the "false dependency rebasing" local to me, not bothering the reviewers.
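Concretely, with a stack of (say) eight commits on the integration branch, pushing only the first batch looks something like this (branch names illustrative):

  # put the most urgent changes at the bottom of the stack
  git rebase -i origin/master
  # push only the bottom three commits for review; the other five stay local
  git push origin HEAD~5:refs/for/master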

Gerrit requiring a somewhat better than casual familiarity with git is a good thing; by requiring it, it thereby promotes better familiarity, which makes developers more productive in the use of their version control system outside of the Gerrit context also.

The Change-ID handling is a very minor thing. When making a first commit, I just do ":r!change-id" in Vim, and it reads in the output of my change-id script: a brand new Change-ID: line with a shiny new change ID. My Git commit message template contains a blurb which reminds me to do this. We have a required bug number field also, so this just goes hand-in-hand with that.


I have created literally thousands of reviews with git-review, and I have never once used it to create, switch or delete a branch.

When I clone a new repo, I run "git review -s" (for setup), which installs the commit hook that ensures I never need to give even a passing thought to Change-IDs.

Then I run "git review" to push changes to review. I never have to think about where I'm pushing them because it's all configured in the .gitreview file. I generally also keep all my changes in a single branch (although I also use Stacked Git to cut down on unnecessary dependencies, and because I like it a lot better than "git rebase -i" for many purposes).

That's it. I've heard there are other flags. I've never used them. Nobody is forcing you. But by all means carry on generating Change-IDs manually if you like.


git review to submit, git review -d branch to review.. too bad the ux is so bad


I think you might be confused... (and if not then I certainly am). The -d flag means 'download', and the argument isn't a branch name, it's the review number from the Gerrit URL. You don't use it for reviewing code (that generally happens in the Gerrit web interface, or in gertty), you use it when you want to modify a patch submitted by somebody else. I'm not sure what a UX that lets you check out somebody else's patch without specifying which patch would look like.

FWIW I use Gerrit every day, and I don't recall ever once using the -d flag.


If this is all that's wrong with the UX, why not write two tiny one-line scripts called gerrit-submit and gerrit-review?
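Something like this, presumably (assuming git-review is installed; these are just the thin wrappers being suggested):

  $ cat gerrit-submit
  #!/bin/sh
  exec git review "$@"        # push the current branch's commits for review

  $ cat gerrit-review
  #!/bin/sh
  exec git review -d "$@"     # download change <number> to work on it locally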


We use ReviewBoard, which allows you to tag both a group and specific members of the team with review requests.

I generally tag more than one person, and if possible people who know that bit of the code. I wait until I've had two signoffs (with verbal badgering if people are being slow; we are all near each other in an open plan.)

I will often find myself holding out for the one person who "owns" that bit of the code before pushing the patch, though.

One suggestion that we tried: people agree to check for assigned reviews first thing in the morning, and as they return from lunch. This way people aren't interrupted from their development flow by the reviews.


I've looked at reviewboard and so far have found two major problems with it.

1) You cannot create accounts through the API. This makes it really annoying to test and evaluate.

2) It operates primarily on diffs. When you have a patch set to review, you really want to see both the overall diff and the individual commits. Then, when you want to update a review with new commits to address comments from the previous round, the tools do not let you do this without manual intervention. At least I haven't found a way to run `rbtools` for a review update in fully noninteractive mode. Again, this acts counter to integration.

From a (workflow) integration point of view these two are pretty big killers. They are also the two reasons we don't yet have RB even in full-scale testing.

I do want to use something that can import copies of git branches so historical reviews are available without polluting the primary gitlab installation with hundreds or thousands of stale branches.


Hi, Review Board founder here. Let me address these real quick.

1) This has come up before. We'll probably add it. Most places that ask for it really are wanting something like an auth backend that ties into another service, like LDAP/ActiveDirectory (which are supported natively), a Single Sign-On service (which we'll be adding support for in Power Pack, our add-on product), or something more custom. For those who do want to add via the API, it's something I imagine we'll do in the next major release or two. I'll make sure it gets on the roadmap.

2) Review Board 4.0 (which is in development, alongside the upcoming 3.0) features DVCS support that will make it a lot easier to update patchsets and to keep them together as a whole without having to squash commits or create a review request per commit or anything else like that. Reviewers can view each commit individually, or the whole thing squashed, or a range squashed, and we'll help visualize how the history of those commits evolve with each update (such as adding new commits to HEAD, squashing commits together, etc.). It'll be really nice when it's done.

We're also looking at adding push-to-commit through our in-development Git (and eventually Mercurial) wrapper service, rb-gateway, so that you can push a branch there and have it appear on Review Board, and keep updating that branch to update the review request.

And, we're prototyping support for pull requests (first for GitHub, and then for others later), so that one can submit a pull request and have it basically appear in Review Board. This is still in the very early stages, but it's a priority. May not be relevant to your workflow, but thought I'd mention it, because that's asked a lot.

Again, Review Board 4.0 is when a lot of this will make an appearance, but I'll try to get account creation via the API to happen in 3.0. (Just added it to our roadmap.)


FYI, Eric Snow has implemented an OAuth extension for Review Board: https://bitbucket.org/ericsnowcurrently/rb_oauth_extension

We use this in the development of Juju. We also have a bot that creates Review Board reviews from GitHub PRs, and updates the diffs when the PR is updated.


Sounds cool :) I'd love to hear more about your setup if you'd be up to talk about it further (christian@beanbaginc.com).

Native support for OAuth2 is planned to land in 3.0 as well. We're focusing a lot on improvements for third-party integrations and enhanced extensibility with this release (along with some cool new review improvements).


Oh I should be clear (misunderstood that extension). We're adding support for using Review Board as an OAuth2 provider, not to use another service for auth, so that services can better integrate into your server without having to share credentials.


Oh hi, this was an unexpected pleasure.

My use case for item 1 is a bit special, I can admit. The pain point came up when I tried to set up RB for testing, and interestingly enough both items tie directly into each other.

- Reviews need to be generated automatically when suitable trigger conditions are met. This means that there has to be a dedicated user account in RB that can create reviews on behalf of other users. Reviews also need to be automatically updated when new changes land in the branch under review.

- While setting RB up, it's not at all uncommon to do a round of install/provision/repopulate iterations. Without the ability to generate users and assign their permissions through an API, the automation and iteration go out the window.

- The account permissions for RB are very granular. I don't want to manage these permissions via a web-UI, I want them populated and updated based on reviewable commits. Hence, everything must go through config management. Yet another web-UI for managing yet another set of accounts and permissions is ... increasingly repulsive.

I can understand where the permission and account creation friction comes from. For you, RB is a product. (Hell, it certainly is one!) For me, RB is simply a tool, and as such it has to integrate with other tools. Code review tool is an extremely important one, but without fluid integration with other development workflow tools it becomes an additional burden to maintain. For developers who don't need to worry about integration efforts, yet another extra account adds to the cognitive overhead.

Furthermore, I realise your permission model is tightly coupled with the underlying architecture, which itself builds on top of Django. Exposing the kind of functionality I ask for requires exposing and nesting quite a few internal API layers. I do genuinely see why that has not been a roadmap item so far. Codewise that is a big undertaking.

As for item number 2... you just made me drool. I haven't seen a tool yet that lets you easily see change comparisons with dynamic interdiff windows. Having visibility into how a change set has evolved, and what the compound intermediate results have been between any given steps, would be a huge boon. (No, `git diff ref1..ref2` is not ideal. On the other hand, something like `git show ref1 ref2 ref3..ref6 ref7..ref8` gets a lot closer.)

Once you have native DVCS support and push-to-commit for fluid integration between other tools, RB has the ability to become "just another tool". I see that as a good thing :)

I haven't yet tried phabricator (evaluating it is on our infrastructure team's roadmap), but so far it looks like it tries to do too much.

You guessed correctly, the PR integration is not relevant for me, but I can definitely see how it would better integrate with lots of teams' workflows. No wonder it gets asked for.

> Again, Review Board 4.0 is when a lot of this will make an appearance, but I'll try to get account creation via the API to happen in 3.0. (Just added it to our roadmap.)

Thank you. Thank you so much.

NB. I am a big believer in tooling and workflow. Related to that, I have developed an increasing intolerance to anything that cannot be configured via config management. Changes to infrastructure are no different to changes in code - everything needs to be reviewable and versioned. (Which also means that the infrastructure must be rebuildable with minimal manual steps.)


Hey, Phabricator dev here. I would love to see you take a better look at Phabricator as well. I'm not sure that we really "do too much", as our product is guided mostly by the tenets of code quality and developer efficiency. Specifically, we feel it's better to have these tools all integrated so developers spend more time just building features. A basic overview for code review would be:

Differential - Pre commit code review

Audit - Post commit code review

Arcanist - command line, patch emitting lint, unit tests (pre commit)

Herald - Custom "business rules" for notifications (cc me on all JS diffs)

Owners - Mark code territory

Harbormaster - Continuous integration

Macro - I mean, code review with memes!

And yes, you can uninstall any application you don't want like Maniphest or Countdown, or sadly even the meme generator.


Oh hi, and apologies for the slightly late reply.

> would love to see you take a better look at Phabricator as well.

I will. Phabricator was brought to my attention some time ago, not surprisingly around the same time I was looking at the alternatives. I think I should expand on the "does too much" part a bit, because in using that expression I clearly managed to upset you.

Let me state one thing upfront - good engineering workflow is critical. That is also a reason why introducing new tools to existing workflows is so damn hard. And that is precisely why I am looking at tools that integrate into, and enhance the existing setup.

Flag days make everyone unhappy.

From looking at the existing documentation and "pitch pages" for Phabricator, I've had the constant feeling of getting overwhelmed. From the list of features you mentioned, I for one don't really see much of a difference between the distinct Owners and Herald modules. If a code review system has the concept of marking code ownership, then of course automatic notifications for incoming changes in that code should be standard practice.

But the moment CI gets implemented as a core module of a code review system, I think it veers off into murky waters. You can't easily smoke-test different CI systems. They are tightly coupled with release and deployment tooling, and carry the need for their own notifications. CI is a damn complex beast. I would rather have a very good CI with plugins for the extra functionality than a code repository / review system with a CI slapped on top. If you're going to integrate a CI into a code repo (GitLab, I'm looking at you), then you have to aim really high. As in: if you find a regression and commit a test for it, I would expect the integrated CI to be able to spin off an autonomous bisect run to detect exactly when the regression was introduced.

But let's get back to the subject at hand. The few things I feel are core to a really good code review system:

- Lets you automatically review feature branches, and keeps track of all the changes and comments locally; once the review is completed, the branch can be nuked, but the review remains as a historical item and contains the full history

- Default reviewer groups. If a change set touches multiple parts of the code, the relevant people across teams ("owners", in phabricator I think) should all be marked as automatic reviewers.

- Related to the point above, the ability to automatically mark a changeset with "I'm looking at it" if you're doing the review in the web-UI. That would provide visibility for others in the reviewer group.

- Mail in support. Most people get code review comments delivered via email. It should be possible to reply to such a mail, and have the comments from the mail integrate back into the review thread visible in the web UI. If I could use my preferred mail client (mutt, in my case) to respond directly to code review threads, I would be quite a bit happier.

- Some kind of alerting mechanism for "problematic" reviews. If a change set takes ages to review, it usually means that either the change is too complex or too large - or possibly that the reviewer is overwhelmed.

- Again, only slightly related to the previous step: if you are going to integrate CI into a review system, it would be beneficial to see in the under-review section when a change has landed in master that would cause a merge conflict. The probability increases with code age, after all.

And that's just off the top of my head. I mentioned in the sibling comment that I realised RB is built as a product. That comes with its own baggage. In the case of Phabricator, I see an even more complex product. It appears to have so many tentacles that even Lovecraft could have been envious.

There you go.


You're looking at GitLab and we're listening to you. We're aiming high with GitLab CI: the best CI system on the planet or bust. Your suggestion for automatic bisecting is really interesting and we discussed it. The problem is that we can't see a way to do this without making it language dependent, and that is not an option for us.


Hey, thanks for the detailed feedback! It's always great to hear about use cases and what people need. Helps us shape things.

Something that's been core to our product's philosophy for a long time is to be extensible and to fit in with other tools. We're doing more to help there in 3.0, but we already have a very fleshed-out API and an extension framework used to add new features, capabilities, etc. The lack of an API for user account creation is clearly a pain point for you. I know I said we'd get that into 3.0, but I think I'm going to have to take that back.

I'm going to try to get it into the next 2.5.x instead.

> - Reviews need to be generated automatically when suitable trigger conditions are met. This means that there has to be a dedicated user account in RB that can create reviews on behalf of other users. Reviews also need to be automatically updated when new changes land in the branch under review.

You probably know this, but in case, or for anyone who doesn't, there's a permission that allows users to post or edit review requests on behalf of another user. I'll make sure that can be set in the API.

> Furthermore, I realise your permission model is tightly coupled with the underlying architecture, which itself builds on top of Django. Exposing the kind of functionality I ask for requires exposing and nesting quite a few internal API layers. I do genuinely see why that has not been a roadmap item so far. Codewise that is a big undertaking

Fortunately, large parts of this are already written for RBCommons.com (our Review Board SaaS) for our own use, and we should be able to port much of it over.

> Once you have native DVCS support and push-to-commit for fluid integration between other tools, RB has the ability to become "just another tool". I see that as a good thing :)

It should be very cool :) Still a lot of work to get where we want to go, but it'll happen. Much of it already has, just not in a stable form yet.

> I haven't yet tried phabricator (evaluating it is on our infrastructure team's roadmap), but so far it looks like it tries to do too much.

I can't speak to their product. I believe we have different philosophies. Many tools choose to be like GitHub and do a little bit of everything under one product (which can be good if you aren't using other tools, or want to consolidate them all). We opt to focus solely on code review capabilities, trying to innovate in that area and plugging into other services/tools already being used. Pros and cons to both, but I certainly have my preference :)

Thanks again for the feedback! If you ever have more, or want to talk more about this outside Hacker News, definitely reach out! We have a dev-related list at https://groups.google.com/group/reviewboard-dev/, which I pay a lot of attention to, and others might have some good thoughts as well.


We use it on a totally different toolchain: Mercurial, and MQ patches[1] (which are like temporary commits that you can reorder and change). Within that workflow it works really well; you really are editing the patch and resubmitting.

I didn't have to set it up of course, it's used by everyone in a large (~100 devs) organisation.

[1] https://www.mercurial-scm.org/wiki/MqExtension


Our team has a similar bottleneck. A few weeks ago I attempted to solve the problem during the Static Showdown hackathon, but I haven't touched the project since the hackathon ended, so it's pretty rough: https://ss16-obelus.firebaseapp.com

The thing we wanted was a news feed of sorts of outstanding PRs so we could track who checked them off and which ones needed our attention first.

I guess it wasn't a big enough pain point to continue developing on it :(

Also I think a whole web app is a clunky tool for the job, something like a chrome extension might be more appropriate.


The problem I often run into is people making a bunch of comments but not having the confidence to actually make a call (ship or no ship). I think this has roots in the idea of code review as a way to cover your ass when something goes wrong.

This would be nice: As a reviewer, your responsibility is to say whether the code is shippable or not; the remaining content is simply justification

As a submitter, you determine who needs to see your code, and this is a flexible thing - if you paired with the people who would be reviewing your code anyway, maybe getting approvals from all of them is unnecessary, etc.


You could consider turning the process "on its head":

Do release branches instead of a peer reviewed master branch.

So you would have every developer continuously committing into a (CI'ed) main trunk, and when you wanted to do a release you would carefully review / quality-assure the changes between the last release and the main trunk, taking only the stuff ready for release into a new release branch and moving fixes back into the main trunk.

This approach will give you a messier main branch (rather than messy development branches) and move the reviewing and QA to release time.
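A rough sketch of that flow in git (branch, tag, and commit names are illustrative):

  # everyone commits continuously to the CI'ed trunk (master)
  # at release time, cut a branch and review/QA the delta since the last release
  git checkout -b release-2016-04 origin/master
  git log --stat v2016-03..HEAD        # what changed since the last release?
  # fix or back out anything not ready for release on the release branch,
  # then port those fixes back to trunk
  git checkout master
  git cherry-pick <fix-commit>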


The problem is that there can be a lot of lag between asking someone to review the PR and them actually doing it -- yes, I have definitely run into this issue at my previous job; some PRs took forever to get merged because everyone else was so busy working on their own tasks and nobody had time to do reviews.

The process at my current job: we have a soft limit on how many tasks can be in the work-in-progress status at a time; that includes tickets that are currently being worked on or being reviewed. Then there is a hard limit of 3 issues at a time that are test-ready. When we exceed these limits, developers need to stop pulling in new tickets, and our priority becomes helping with review or test issues, which helps move things along. Instead of assigning your issue to a specific person on the team, it is the team's responsibility to move things along and get these reviews done, which eliminates constantly bugging the same person for reviews.

Since my current job is front-end heavy, we also do preliminary QA when we're reviewing code: we pull the branch in and make sure the obvious things aren't breaking. I personally find it really helpful for understanding the context of a PR, and it gives me a clue of what I should be looking for when reviewing. Of course this process is still not perfect; there can still be a lot of back and forth, and sometimes you need to ask people to explain some of the context to you. But overall I find this a fairly decent process so far. Most importantly, I feel like it gives everyone a sense of teamwork: no issue should be the responsibility of a single person.


At one time I was part of a team of around 15 developers working on a PostScript engine, mostly written in C and including hundreds of test scripts. It shipped on a number of multi-function printers. We started on it around 2002. Today we might have used Git. We used http://aegis.sourceforge.net, sadly stuck at SourceForge for the time being.

Aegis is notable for (by default) recording and enforcing code and test script review before permitting a commit. It has other virtues, and faults, and is no longer widely used, AFAICT. Reviews were treated as transactions, and the reviewer comments went into the project history, along with time spent on development, reviewing, resuming development after 'failed' reviews, and more.

The engine is mentioned in passing on https://www.eid.toshiba.com.au/n_news_detail.asp?news_id=8. It did a number of other page description languages as well.

We also used Aegis on a number of smaller projects.


Interesting. You're the first person I've seen here that used it. It was the only one that meets most of Wheeler's requirements:

http://www.dwheeler.com/essays/scm-security.html

So, was it a good tool compared to subversion or centralized git? Work fast and reliably enough? Not get in the way too much outside authorization checks? Just getting feedback on field use here.


I don't think I know enough about subversion or centralised git. Aegis was great for the first few years. For the main project, there was a perception that it was slow and unreliable. But a lot of us loved the transactional features and preservation of the timelines of comments for code review. Its management of test scripts was very very valuable.

I'm not sure about your authorisation checks question. Maybe Aegis did get in the way.

An afterthought about speed and reliability - Aegis is almost too configurable. For instance, it relies on a 'file history tool' per project; that can be SCCS, RCS, FHist, or anything that you can tell Aegis how to use. We used the somewhat obscure FHist. It has the advantage that it was written by the same person as Aegis, and so is extensively tested with it.

Our project had the main repository sitting on an NFS server, and in later years around a dozen developers working on one or two changes at any one time, with those stored on each engineer's Linux workstation. That was slow and sometimes fragile, but we never really explored other configuration options.

We accumulated hundreds of unit test scripts which had to pass on each integration. In retrospect we should have worked harder to retire or refactor the oldest ones. Having fewer or faster tests would have saved time and angst.

Aegis' implementation is very dependent on the Unix security model. We thought about doing something to make it work on Windows, but quickly got discouraged.

I don't know much about security, but I'd guess you'd be able to discover many issues.

Sadly, Peter Miller, the originator and I guess owner of Aegis, died a few years ago. User and developer mailing lists were on the web last time I checked.


Thanks for the feedback. Interesting. The UNIX dependence was one of my main gripes. I had some workarounds but figured a clean-slate model or retrofit of cross-platform SCM was better. I still wanted to eventually meet and get insight from the author on such a project. Too bad he died. (sighs)

Back to a clean slate, or a retrofit of an alternative, as I figure Aegis might be too complex to use for such a project if the main mind behind it isn't available. Still, they get credit for doing a lot of things right.


Yes. I might suggest you give one of the tutorials a try. He has several, depending on which tools you're more comfortable with.

Creating a small project seemed to be quite a bit of work, and I think Peter realised this and put effort into the examples.

You might also like his PDF on recursive Make, http://aegis.sourceforge.net/auug97.pdf.


We use Jira to track development tasks, and have two states for review: Awaiting Review and Under Review. Every branch (and thus every PR) has a corresponding dev task. We configure Jira so it lights up the Review column in red when there are too many items in it. It's a fairly unsubtle hint that items in the column need to get reviewed before moving on to the next dev task.

We also use Stash (now Bitbucket) rather than Github, so we can configure it to trigger Jira state transitions.

Depending on how features get factored, you can end up requiring certain tasks as pre-requisites before other tasks can get started. Then there may be direct communication / nagging involved. But we try and cut things up a little bit differently, so that there's less scope for lag to impact overall feature delivery time. The main idea is to prevent PR reviews from being blockers for multiple people.


We had a similar problem with lag for code reviews, so I wrote a bot to nag the HipChat (moving to Slack this week) channel and remind people that repos had pending PRs that needed review. This helped quite a bit with getting code reviewed.
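A minimal sketch of that kind of bot, assuming a GitHub repo, a personal access token, a Slack incoming-webhook URL, and jq installed (repo and variable names are hypothetical):

  #!/bin/sh
  # list open PRs on GitHub and post a reminder to the team channel
  prs=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
        "https://api.github.com/repos/myorg/myrepo/pulls?state=open" \
        | jq -r '.[] | "#\(.number) \(.title) (\(.user.login))"')
  [ -n "$prs" ] || exit 0              # nothing pending, stay quiet
  text=$(printf 'PRs waiting for review:\n%s' "$prs")
  curl -s -X POST -H 'Content-Type: application/json' \
       -d "$(jq -n --arg text "$text" '{text: $text}')" "$SLACK_WEBHOOK_URL"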

Keeping PRs smaller definitely helped as well and now we typically have 3-5 PRs per day go in.


A former colleague of mine wrote a great article on this topic which summarized how the infrastructure group at Twitter thought/thinks about code review: http://glen.nu/ramblings/oncodereview.php


The company I work for uses microservices, so I usually try to focus on how the app behaves, and make sure the UI works and there are no bugs. I also read the code itself to make sure there are no blatant security issues, but I try to focus my review on the app's behavior more than its code, since a microservice can be easily rewritten later if any problems come to our attention. If the person writing that app chose to copy/paste instead of using inheritance, I'll assume they have a good reason. Code quality is important, but no one even cares about it besides the developers. Usually I'll also ask the developer if there are any parts of the code they'd like constructive advice on; that way I don't come off as if I'm lecturing them when I do critique their code.


I find that reviewing code is a pain. Not just for me, but for the submitter too if I take too long or am excessively nitpicky about code style issues. I don't think good reviewing tools exist right now. You can rope together multiple tools which require constant babysitting, but there's no good low-maintenance all-in-one solution (or if there is, please suggest it!)

Anyway, given that caveat, here are my tips:

(1) Have a "checkpatch" script that can be run over the code before it is sent. This script should do all the nitpicky coding style stuff. If the submitter didn't check their patch, you immediately reject it (some projects automate this). Here is an example from the qemu project: http://git.qemu.org/?p=qemu.git;a=blob;f=scripts/checkpatch....

(2) Have something like patchwork, or github pull requests, or openstack gerrit reviews, which tracks which submitters have submitted patches, which is the latest version of their patches, and so on.

(3) Have some automation which runs the test suite against submitted patches and immediately rejects any which fail.

(4) Have documentation which tells people how to submit patches, what scripts to run, and encourages submitters to split their changes into reviewable commits. This is more art than science, but good coders will eventually get it. Here are qemu's instructions: http://wiki.qemu.org/Contribute/SubmitAPatch

Now when you review a patch:

(5) Apply it to your tree and make sure it compiles. Enable warnings-as-errors (e.g. -Werror) so no warnings can escape your notice.

(6) If a patch comes from a trusted submitter and looks sane, I generally ACK it without a detailed review.

(7) If an earlier version of the patch was sent, go back and look at your comments, then check everything that you asked to be fixed was fixed.

(8) Go through each commit, line by line, and check everything. This is the tedious bit I'm afraid. But hopefully the previous steps will have rejected the worst mistakes.

(9) Try to be level headed and provide constructive criticism (even if the patch is objectively terrible). Also try to turn around patches quickly (I'm very guilty of not doing this, so do as I say not as I do).
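To make (1) and (3) a bit more concrete, here is a rough pre-push hook sketch that runs a checkpatch-style script over every outgoing commit (the upstream ref and script path are illustrative assumptions):

  #!/bin/sh
  # .git/hooks/pre-push: refuse to push commits that fail the style checker
  for rev in $(git rev-list origin/master..HEAD); do
      git format-patch -1 --stdout "$rev" > /tmp/outgoing.patch
      ./scripts/checkpatch.pl /tmp/outgoing.patch || {
          echo "checkpatch failed for $rev; fix it before pushing" >&2
          exit 1
      }
  done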


What code review tool do people actually use for a PR-based workflow? My company has been recently looking for better code review tools than Bitbucket's.

A few key features I'm looking for:

- Incremental review: show what files have changed since the last review (ideally what lines)

- Timeline view: see what has happened since you last reviewed it

- Custom "PR Approved" rules + indicator: e.g. 3 approvals + build passed. This gives us an easy way to see whether a change is ready to be merged and released.

A lot of these features can be found in (what seem to me to be) trunk-based, pre-commit style tools like Phabricator and Gerrit. However, I'm not sure whether the extra tooling (i.e. ramp-up cost and complexity) is worth the return. Does anyone have any experience with whether switching over yielded good gains?


If you're willing to use GitHub, then https://reviewable.io has all the features listed above and more.

Disclosure: I'm the founder.


Reviewable is probably my favourite at the moment and ticked all my boxes, though it involves switching over to GitHub, which may or may not happen.

Do you have any more advanced examples of real code reviews using Reviewable? It seems like there isn't an easy way to find more advanced examples (I found one particular one for Rust, but I can't seem to find a way to navigate to other code reviews of the same project).


I should do a better job of tracking "interesting" reviews, but for now the easiest way to find some is to poke around the PRs of some open source projects that have adopted Reviewable, for example https://github.com/servo/servo/pulls, https://github.com/cockroachdb/cockroach/pulls, and https://github.com/dolphin-emu/dolphin/pulls. Each PR will have a link to the corresponding review in its description.

Let me know if you need anything else!


Thanks for the links! Having a list of cool PRs definitely helps conversion :)


We used PR-based review before, but we found it hard and inefficient when many changes are required. It's hard to see whether comments are being addressed in a Bitbucket PR. We switched to Phabricator recently, and the incremental reviews really help.

I think you can go over your previous PRs and see how many of them have multiple rounds of review. If most of them are minor changes/fixes, then switching may be unnecessary.


Thanks for your input. There definitely is a lot of back and forth and I consider it a very good thing. It's interesting because it's a chicken-and-egg problem - if your tool is painful to use, then team members are less likely to participate seriously.

(What I'm really wondering, though, is whether incremental code reviews are specific to the pre-commit model. In theory I don't think this is the case, since tools like Upsource have it as well)

Anyway, what was your experience migrating the team to Phabricator? What's the general workflow from starting a new feature to getting it to prod? Cheers


I find that timely feedback is essential to the PR process working effectively. A few things can help:

  - Submit RFC PRs early (not intended to be merged) so architectural or design issues can be socialized early
  - Break your commits into logical chunks
  - Open an epic branch and merge smaller commits into it; when ready, open a new PR to merge the epic into master
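In git terms, the epic-branch variant looks roughly like this (branch names are illustrative):

  # open a long-lived epic branch off master and share it
  git checkout -b epic/checkout-redesign origin/master
  git push -u origin epic/checkout-redesign
  # small PRs target epic/checkout-redesign instead of master;
  # when the epic is complete, open one final PR: epic/checkout-redesign -> master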

This does sound like a culture issue. Everyone should be empowered in this process and everyone needs to be involved. I think of it as a check-and-balance, but we all have a responsibility to complete features to a high standard, in a timely manner. If a feature is delayed because reviews are slow, we're all responsible.


If you can't reduce PR size, then at the very least you can make the process less painful by splitting the work up with a tool such as https://reviewable.io/


I wrote some software that counts the number of comments made on PRs on GitHub, and then posts the results to our company slack channel: https://github.com/JesseAldridge/code_review_scoreboard

I think that having visibility into who's actually putting effort into code reviews is the first step toward getting everyone to put in more effort.

Note that I'm the only one currently using the above software, and it works on my machine... but it will likely take a bit of work to make it work on yours.


1. Read the requirements/expectations document

2. Read how much of the code is reused

https://en.wikipedia.org/wiki/Pareto_principle


Shameless plug: I've created a tool which helps engineering teams and their managers surface actionable metrics out of their GitHub-based pull requests. Basically, it surfaces some metrics about pulls which have been open too long or which are too large to be effectively reviewed.

I'd love to get some feedback. You can find it here: https://www.pipelineinsight.com.


We (Lyst) recently shared our code review guidelines on GitHub:

https://github.com/lyst/MakingLyst/tree/master/code-reviews

We've tried to find a balance between keeping momentum and making sure you're shipping good-quality code.


10K-LOC just wrote an article on this titled Effective Code Reviews:

https://10kloc.wordpress.com/2016/04/03/effective-code-revie...


Pull request, one line changed:

  -My team has a fairly has a fairly standard github
  +My team has a fairly standard github


Haha, nice. Also a few lines later "has anyone else run I to similar problems..." should be "has anyone run in to similar problems..." But HN won't let me edit the text now, so it's too late. All I can do is cringe at the irony of having a couple of typos in a question about code review. I wrote the question on my phone from bed this morning, and I guess I wasn't fully awake.


Pair programming or integrated code reviews in development workflow are preferred.




My team uses Fisheye + Crucible. We have an expectation that outstanding reviews will be handled end of day and beginning of day (some people work staggered shifts, so they may have reviews awaiting them when they start in the morning.)

Additionally, we will do reviews for each other as we return from lunch or on a break between issues, etc. as possible for each person.

Revisiting a revised review is handled the same way. If I put up a review and there are defects found that need to be handled, I'll handle those and then add the resulting commit(s) to the review and let the team know (though they will check their Crucible inbox when they get a chance anyway.)

If a review is holding other things up we can be pests about it if necessary, but it's up to the other devs to decide when they have time to take a look. It's not up to me.

Since the reviewers and reviewees are all on the same team together, we are all motivated to take care of reviews. If you're not all on the same team, or your team doesn't have a healthy "we're all in this together" attitude, that could be problematic.


Development tools ultimately are there to help the developers.

For teams with a mix of abilities, something like gerrit can make sense where approval is needed before a merge to avoid total chaos.

For a group of experienced developers, approval is a very expensive delay, and merging should be allowed first and review done in parallel.

Where I work the only people who commit are experienced developers, so the whole +2 Gerrit thing is rather pointless, especially since Jenkins does syntax checking.

You might spot the occasional "typo" in a commit, but are you really going to tell the local language expert or framework author they're wrong about anything else, when your opinion carries the same weight as theirs?



