Using checklists for code review (rbcommons.com)
48 points by trowbrds on April 24, 2013 | 25 comments



It sounds minor, but I think it's pretty important: The points on his checklist aren't clear yes/no questions, they're a bunch of things you should remember to consider.

Specifically, how do you check "Correctness of algorithms"? What are the criteria to pass that? "Well, I think it's right"? Of course you do, otherwise you wouldn't have committed it. A formal proof? Unit tests?

Items phrased like this are ripe to become a "looks fine, check" style process-theatre that doesn't actually add anything.

Credit where credit is due, "Are controls laid out in a way that makes sense given visual scan order?" is a bloody good one.

Checklists can be very helpful to make sure you remember to consider everything in the right order. I have a phone screen checklist that includes "introduce yourself" because sometimes I forget, and that's embarrassing. At a previous job, we had a "stakeholders that might be affected" checklist used in feature planning: we had gotten into a bad habit of building features that changed the performance profile of the app, so ops would get paged on the weekend because they hadn't been told to expect different behavior (and once we removed an arcane feature that we had wrongly assumed nobody used).

One thing to remember when comparing with flight or medical checklists: These people deal in repeating almost the same thing as consistently as possible. Software engineering is rarely like that.


I think even a woolly item like "Correctness of algorithms" is helpful - "why is he using bubblesort for umpzillion items?"

It's not really meant as a yes/no; it's meant to prompt the code reviewer (not you!) into checking the algorithms for insanity. Easy to forget that when you're rushed and hassled and wanting your lunch.

Also, aren't the majority of flight checklists for abnormal situations? And specifically to make sure that things are handled in a sane and sensible fashion when everything is going off klaxon-wise and/or on fire?


The developer should absolutely follow the checklist as well; otherwise he's wasting the reviewer's time.

So, what you're describing is actually a good yes/no question: "Is the best algorithm for the problem used?" (or, better: "Has functionality that is available in a library been implemented by hand?", but that's not your point :) )

No, flights use checklists even for completely routine things (especially for routine things, because that's where sloppiness will manifest itself first). For example, if you forget to set the flaps correctly before you hit the throttle for takeoff, you're going to have a bad time, so that's on the checklist. (Of course, these things are largely automated now, so modern aircraft will yell at you if you don't. I think Airbuses will even override the throttle.)


> "Is the best algorithm for the problem used?"

Rather: "Is the algorithm good enough?"


There's a nice list of checklists at

http://www.projectcheck.org/checklists.html

including a "checklist for checklists" (!)

http://www.projectcheck.org/checklist-for-checklists.html

Great article on the use of checklists in hospitals, by Atul Gawande:

http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_...


Gawande's book on checklists is excellent, but I suspect it's more of an expansion of the NYR[1] article than a standalone work.

(Offtopically, "Complications" is also worth a read.)

[1] how do you abbreviate The New Yorker?


As much as people feel negatively about Java, it really has fantastic static code analysis tools. PMD, FindBugs, CheckStyle are all very helpful when properly configured. They're basically on the checklist level.

I think checklist-level thinking for code reviews is bad, though. It ends up being a pretty soul-destroying experience when someone tells you that you need to validate arguments to all methods as not null (for the 100 you wrote), or that you should take the data in your test cases and put it in a text file, or that you've re-used the same string in several places and should make it a constant. Robots can tell me that, and I can get their feedback while I'm developing or choose to ignore it. People are invaluable for helping you step back and see how to re-organize your code, introduce new abstractions, re-interpret requirements, and spot actual bugs that no QA person or tool would ever discover (the kind that would otherwise hit some unlucky user as an extremely rare bug that couldn't be reproduced easily). Those types of reviews are invaluable. Put down the checklist and automate it instead.
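
To make the "robots can tell me that" part concrete (class and method names here are made up), this is the sort of mechanical feedback a static analysis rule can give you while you're still developing, with no review time spent on it:

    import java.util.Objects;

    // Hypothetical example: the "validate arguments as not null" boilerplate
    // that a static analysis rule can nag about, so a reviewer doesn't have to.
    public class OrderService {

        interface PaymentGateway { void charge(String orderId, long cents); }

        private final PaymentGateway gateway;

        public OrderService(PaymentGateway gateway) {
            // The fix the robot would suggest: fail fast with a clear message
            // instead of an NPE three calls later.
            this.gateway = Objects.requireNonNull(gateway, "gateway must not be null");
        }

        public void checkout(String orderId, long cents) {
            Objects.requireNonNull(orderId, "orderId must not be null");
            gateway.charge(orderId, cents);
        }
    }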


To be fair, there are similar tools for most languages.

One of the most comprehensive tools for doing automated quality reviews of code is Sonar ( http://www.sonarsource.org/ ). It supports pretty much all programming languages (though not all of them for free).

For example, for PHP code, Sonar runs a combination of PHP_CodeSniffer, PHPMD, phpDepend and phpunit, and integrates the reporting of all those tools.


> As much as people feel negatively about Java, it really has fantastic static code analysis tools.

My first experience with Java was at a job in high school... and my only reference was a book that weighed half as much as I did. Needless to say, it left me with a very, very poor impression of the language, an impression I'm only recently learning is very much wrong.

I definitely prefer writing Python, but the more I actually learn about Java, the worse I feel for being such a senseless detractor for so many years.


The checklist motivates the automation. This advice is for people who have neither, to get started.


Some of these can be caught automatically, and the more automation (taking the human out of the loop), the better, because humans can gloss over a checklist or accidentally skip something. My current project uses a combination of Gerrit, Jenkins and Sonar, along with unit/regression tests, so that every single change is checked automatically to make sure it builds and passes the tests, along with requiring at least one human to review and okay it. Combine this with code-checking tools (vera++, rats, cppcheck, cpplint, flawfinder, pscan, valgrind, fuzz, along with just about every warning available on two compilers), and we hardly ever have to do anything but look at code in Gerrit and approve it.


Side discussion: why do you guys think software like Gerrit, Jenkins and Sonar, software that is used by many folks and mostly praised, is written in Java (instead of, say, Python)?


Honestly? Don't know, don't care. And yes, we have arguments about the pros and cons of programming languages all the time. I'm guessing that Java started down this road of appearing more "professional", and that went some ways towards people actually wanting to prove it, so things like accountability and reproducibility started getting serious consideration in mainstream programming. And it's not like everything is Java; git, vera, cpplint, valgrind are all not written in Java.

To be honest, the only reason we are using C++ is because it's mandated by the project, and quite frankly, it's what we're good at (we'd probably be working another C++ project if this one wasn't in C++). And yes, we know C++ is a kludge; quite frankly, I wouldn't trust a C++ "expert" who didn't question design decisions of C++.


We've tried using checklists a bit for code review, and hit the problem that our checklists were pretty much guaranteed to have at least one item that was arguably inapplicable to the changeset. So you'd have some changeset that was very small, or didn't contain UI elements, or did contain UI elements, or was related to build tools, and so on.

And so we'd get into a situation where we'd have to decide whether a particular item was even applicable or not, which somewhat defeats the point of checklists, since you're supposed to be able to decide immediately whether an item is satisfied or not.


"Not applicable" or "0/0 UI elements are correct" is a fine response to checklist.


I like it. A checklist is a very lightweight and efficient way for a team to codify a list of objective things that they feel must be done and checked for every pull request. Once that's done, the change should be considered good to merge regardless of people's subjective opinions about the code. This avoids drawn-out and opinionated debates about the code dragging down productivity. That isn't to say a bit of debate isn't a good thing; it just need not be the main thing.


I have yet to see a code review checklist that I find useful. There are just too many things that can go wrong with software, so the list ends up either too long to be practical, or too generic to be helpful.

What I do find useful is to have a good variety of skills on the review team. Different people tend to check for different things. Some are just naturally good at spotting, say, concurrency problems, others have a pet peeve with names that aren't clear, etc. But someone who doesn't have an eye for concurrency problems won't magically start noticing them in a pile of code because there's a box saying "check for race conditions" somewhere on the list.
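
For instance (made-up example), a check-then-act race like the one below is essentially invisible unless the reviewer already has an eye for concurrency; a "check for race conditions" box on a form doesn't conjure that skill:

    // Hypothetical: lazy initialization with no synchronization.
    public class ConfigHolder {

        static class Config { }

        private Config cache;  // shared across threads, not volatile, no locking

        public Config get() {
            if (cache == null) {       // threads A and B can both see null here...
                cache = new Config();  // ...and both construct and publish, racily
            }
            return cache;              // unsafe publication: a reader may even observe
        }                              // a partially initialized Config
    }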


I just think it's rather silly to do code reviews this way unless you're writing software to power a space shuttle (i.e. stuff that should never fail or break); it would simply take too long.


i disagree. fundamentally, the point of these checklists is to standardize the sorts of things you evaluate during the code review. if feedback is inconsistent, you end up with different standards applied to different parts of the codebase, which can (and does) manifest itself as an inconsistent product.

the overall appearance of the checklist does make it seem like it would be problematic, but i think the author makes a good point of showing that they act as a template for the reviewer rather than as a strict set of guidelines.

in a situation where "we're all consenting adults here," having a set of things to evaluate features or code style against makes it easy for anyone to know what to expect when checking in a change or when reviewing one for the first time. it may not work for you, but it certainly adds consistency when it's used non-dogmatically.


Exactly.

Forcing a checklist upon all reviewers isn't what the article is about; rather, it's about one developer's way of organizing his own personal process for doing code review so he doesn't forget something.

If I have a huge change in front of me, and I know that I tend to look for certain things, I could certainly try to look for all of those in one go. That's error-prone, though: I might forget something, or get bogged down in syntactical stuff and miss something important. However, with a little personal checklist in front of me, I can go "Okay, I'm done looking at syntax. Now let's make sure there are no off-by-ones." That leads to better organization of thoughts.


Lists in general are great for organizing thoughts. Being able to check items off the list helps focus thoughts and offload the overhead of keeping track of progress to something better suited to the task.


Couldn't a checklist also be the tests written for the feature? This ensures that future revisions don't muck with the already working system.
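
For example (hypothetical class and test, JUnit 4 style), an item like "handles empty input" can be captured as a test that keeps guarding that behavior on every future change:

    import static org.junit.Assert.assertTrue;

    import java.util.Collections;
    import java.util.List;

    import org.junit.Test;

    // Sketch: one checklist item ("handles empty input") turned into a regression test.
    public class ReportBuilderTest {

        // Stand-in for the real class under review.
        static class ReportBuilder {
            String build(List<String> rows) {
                StringBuilder sb = new StringBuilder();
                for (String row : rows) {
                    sb.append(row).append('\n');
                }
                return sb.toString();
            }
        }

        @Test
        public void emptyInputProducesEmptyReport() {
            ReportBuilder builder = new ReportBuilder();
            assertTrue(builder.build(Collections.<String>emptyList()).isEmpty());
        }
    }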


Some things are hard to write tests for; basically anything where the criterion is "does it look right".


Checklists are OK, but beware the checklist. They tend to cause complacency. "It wasn't in the checklist!"


"It wasn't in the checklist!"

"Well now it is, and there's a unit test for it, and you won't be able to commit code that breaks that unit test. Problem solved."



