
The cause is usually the same: looking for things you can spot, rather than proving the code is correct.

Looking for things you can spot means you read the code as if it were a novel, and you discuss only the things that stand out or that seem important.

Many people interpret "doing the code review better" as reading the novel more slowly and spotting more things that stand out.

Imagine bridge engineering plans were reviewed this way. Somebody would look at the numbers and react only if for some reason the numbers looked incorrect.

Asked to do the review more thoroughly, they would just stare at the plans for longer, but never actually perform the calculations, look up the tables, consult the standards, etc.




Approaching from the mentality of doing a proof is valuable for establishing anything about code, even when it isn't security-sensitive. I think many programmers do this instinctively, and some don't do it at all. You can spot the difference by asking a question like,

"Why do no records get dropped in this step?"

Some programmers will have an argument resembling a logical proof: "No records get dropped because for every record, we call a method to write the record into the outgoing artifact, and if that call fails, the entire stage aborts. The flow of control never reaches the point where the outgoing artifact is added to the database for delivery. The lack of a successful delivery record for the batch triggers an alert after an hour and ensures the system will keep reprocessing the batch until it succeeds, or until the batch gets manually flagged as bad."
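
That first kind of answer maps directly onto the structure of the code. A minimal, self-contained sketch of the control-flow property being claimed (C++ here; hypothetical names, not anyone's actual system):

    #include <cstdio>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Illustrative types only.
    struct Record { std::string payload; };
    struct Artifact { std::vector<std::string> lines; };

    void write_record(Artifact& a, const Record& r) {
        if (r.payload.empty()) throw std::runtime_error("write failed");
        a.lines.push_back(r.payload);
    }

    void register_for_delivery(const Artifact& a) {
        std::printf("registered %zu records for delivery\n", a.lines.size());
    }

    void process_batch(const std::vector<Record>& records) {
        Artifact artifact;
        for (const auto& r : records) {
            // If this throws, the whole stage aborts and control never reaches
            // register_for_delivery(); the batch stays undelivered, the missing
            // delivery record trips the alert, and the batch gets reprocessed.
            write_record(artifact, r);
        }
        // Reachable only if every record was written successfully.
        register_for_delivery(artifact);
    }

    int main() { process_batch({{"a"}, {"b"}}); }

The claim "no records get dropped" reduces to two facts you can check by reading: the loop covers every record, and the only path to registration goes through the loop completing.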

Other programmers will say something like: "Well, all the records get passed in, and we write them all to the artifact. We had some bugs in this code, but they haven't been hard to fix, and anyway, if something goes wrong, the process is retried later."

This has been a big thing on my mind recently because I've been asked to do a design review for a system created by some high-SLOC-productivity people who built a complex system with a sophisticated-looking architecture but who keep giving me answers like the second one. There's a bunch of cool tech but no coherent explanation of why they believe the system will work reliably. It's just, well, there have been some bugs, and of course it doesn't work when there are bugs, but we're fixing them.

Until you have an argument for why a system works, there's no coherent way to understand why it doesn't work, and what the effort will be to achieve different aspects of reliability. It's just "bugs." With such an argument, you can target specific aspects of reliability instead of playing the "it'll work when the bugs are gone" eternal game of whack-a-mole.

Granted, this isn't very rigorous, but 99% of the time, it's the closest you'll get to "proof" in application code.


> This has been a big thing on my mind recently because I've been asked to do a design review for a system created by some high-SLOC-productivity people who built a complex system with a sophisticated-looking architecture but who keep giving me answers like the second one.

I've recently found myself in the middle of one such team, except without the high productivity part. Sometimes I think I'm the only sane person in a sea of mad, and at other times I wonder if maybe I'm just dumber than I think and don't understand.

I've designed and built systems that get deployed around the world, but if you look at my code and architectures, they're relatively unsophisticated. I just don't see the need for sophistication unless it's solving a specific problem, and I find these highly sophisticated designs just make everything harder (for me anyway).


Diff-based code review emphasizes very local changes to code. It inherently directs one's attention toward what has changed, rather than how the changes affect the larger system. I notice this a lot with the (browser-based) tool we use at work. If I expand the view to see the whole file, rather than just the diff, it tends to make the page lag.


That's why I never accept code based on a review in the web tool unless the code is evidently correct. I usually check out the branch and try to figure out how the code affects or is affected by other, sometimes distant code.

Ideally, of course, everything should be hidden behind leak-tight abstractions.

Unfortunately, that is rarely the case.

I recently had a developer reuse a function that generates IDs for some other type of ID.

That ID-generating function happened to use the OS entropy pool directly. It was previously called only once per application startup, so it was deemed OK.

But now, with high demand for IDs, that is no longer acceptable.
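
A minimal sketch of the kind of difference I mean (C++ here; the names and exact API are made up for illustration):

    #include <cstdint>
    #include <cstdio>
    #include <random>

    // The original helper: draws from the OS entropy pool on every call.
    // Fine when it runs once per process startup, a bottleneck when it is
    // suddenly called for every request.
    std::uint64_t startup_nonce() {
        std::random_device rd;  // OS entropy, possibly slow or blocking
        return (std::uint64_t{rd()} << 32) | rd();
    }

    // What a high-volume ID path would want instead: seed once from the OS,
    // then generate from a fast in-process PRNG.
    std::uint64_t fast_id() {
        thread_local std::mt19937_64 gen{std::random_device{}()};
        return gen();
    }

    int main() {
        std::printf("%llu %llu\n",
                    static_cast<unsigned long long>(startup_nonce()),
                    static_cast<unsigned long long>(fast_id()));
    }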

I noticed it because I followed the code to see how the function actually generates IDs. That would not happen in a web-based tool because there simply isn't a convenient way to follow the references from the new code.

That is just a simple example; I get things like that every other day.


I've never been a fan of any review tool I've used because they all assume the code being changed stands in isolation. Just today I was in a review where a change on line 234 was correct, except that it violated a bad assumption on line 123, which needed to be fixed. Line 123 is where I wanted to comment, because that is where the problem should be fixed, but the tool saw that as okay code and didn't even let me see it, much less comment on it. It gets worse when the bad assumption is in a different file.

And I often get frustrated because I see you call foo(), but I need to see how foo works with the given arguments.

Though to be honest, most of the time I catch myself playing grammar checker: if the style isn't violated and you didn't hit any of the dozens of checklist items, you pass. (I work in C++, so I often catch people writing C++98 code when there is a C++14 construct that is better.)


I've been pushing our engineers to avoid using GitHub's web UI for PR review. Check out the branch, open it in the IDE, and walk through the code paths. IDEA has a great way to do GitHub PR line comments from the IDE, and others likely do too.

I've seen more bugs than I'd like get OK'd because the single-line change looked innocuous in the web UI review tool.

Tools like Pull Panda are worse still, for gamification of PRs, with leaderboards for most PR reviews and fastest reviews. :(


I hadn't heard about gamification of PRs, but even at first glance the idea sounds horribly misguided.

Just like rewarding developers for lines of code written, rewarding them for marking off as many PR lines as quickly as possible is so totally moronic that I can't begin to comprehend who would think it's a good idea, or why.


When I get a 300-file code review (each file with an average of 100 lines changed) assigned to me, I start to wonder if maybe we do need to gamify it, just to reward people for breaking it up. Particularly when every single file makes the same set of mistakes (e.g. every function is prefixed with a 10-line header describing the name and the types of the parameters, redundant with the name and type info in the one-line definition), which makes it harder to find the actual changes of interest.


If you get a 30k LoC PR, you have a problem anyway.

Did the author manually modify 30k LoC, or is this the result of some automated process?

If it is, for example, a refactoring run from the IDE (which is completely fine in itself), then the better way to handle it is to separate that single huge change into its own commit, with a message that explains how the change was generated.

For example, I insist that there are three categories of changes: reformats, refactors, and functional changes; that a commit can only contain a single type of change; and that it must be immediately apparent which type it is. So if you see a 2k LoC reformat, you skim over it and don't expect functional changes to be hiding in it.

On the other hand, if the code is actual manual modifications, it means somebody has put a huge amount of work into it, and it is not unreasonable for the code review to take a proportionally huge amount of work.

My longest code review took me a week, and it was for a much smaller change in LoC than that.

There are probably better ways to handle a change like that. My preferred way is to pair up and assume that if two people actually worked on it, it is as good as a code review.


I agree with everything up to your last paragraph. The people who did this massive thing over several weeks (I exaggerated the size a bit) have a lot of experience, but we don't trust them. They might have paired (though in these pandemic days I'm not sure how well that went), but the pair isn't seen as very good.


> That is just a simple example; I get things like that every other day.

And for every one you notice, you miss three. Because the system is too complex. This is why we need to program defensively.

> It was previously called only once per application startup, so it was deemed OK.

And that's where someone went wrong. There should have been a test, or runtime failure, or even better (but more difficult) a compilation failure if someone misuses something in that way.
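
For the "runtime failure" option, even something this small would have caught the reuse (a sketch with hypothetical names, assuming C++):

    #include <atomic>
    #include <cstdint>
    #include <random>
    #include <stdexcept>

    // The slow, entropy-pool-backed generator refuses to act as a
    // general-purpose ID factory: misuse fails loudly the first time it is
    // called twice, instead of silently draining entropy on a hot path.
    std::uint64_t startup_nonce() {
        static std::atomic<bool> already_called{false};
        if (already_called.exchange(true))
            throw std::logic_error("startup_nonce() is meant to be called once per process");
        std::random_device rd;
        return (std::uint64_t{rd()} << 32) | rd();
    }

    int main() {
        startup_nonce();      // fine
        // startup_nonce();   // would throw: the misuse surfaces immediately
    }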


>> It was previously called only once per application startup, so it was deemed OK.

> And that's where someone went wrong. There should have been a test, or runtime failure, or even better (but more difficult) a compilation failure if someone misuses something in that way.

I don't subscribe to the idea of creating so much additional code for a relatively simple constraint.

Code should be dense. Dense means a lot of important constraints implemented in a small amount of code.

Of course this is not yet a guarantee that the code is readable and easy to maintain, but as a general rule, a codebase that has 5 times the amount of code to implement the same functionality will more likely than not be less maintainable. That is simply because it is more difficult to work with that much code.

> There should have been (...) or even better (but more difficult) a compilation failure if someone misuses (...)

In this particular case, what I suggested was to move the original function, with its generic name, from a generic utility package to a specific module package, under a specific name.

See? Compilation failure, not difficult at all.
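
Something along these lines (hypothetical names; the point is only that stale call sites stop compiling):

    #include <cstdint>

    // Before: a tempting, generic name in a generic utility header.
    //
    //   namespace util { std::uint64_t generate_id(); }
    //
    // After: the function lives next to the only code that should use it, and
    // the name says what it is for. Old calls to util::generate_id() no longer
    // compile, so anyone reaching for it has to stop and think.
    namespace startup {
        std::uint64_t bootstrap_nonce();  // reads the OS entropy pool; once per process
    }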


> I don't subscribe to the idea of creating so much additional code for a relatively simple constraint.

> Code should be dense. Dense means a lot of important constraints implemented in a small amount of code.

Sorry, but I can't believe how wrong this is. "Dense" is a really poor metric for code quality. "Simple" is a much better metric. And one aspect of "simple" is "not interwoven".

So the question here is, could the new code I'm advocating be implemented in a way that is simple and not interwoven? That is, not adding complexity to any of the rest of the code base. It is nothing more than a safer leaf.

And the answer is, in this case, yes. This does not increase the complexity of the code base. The amount of code added is commensurate with the need to make sure it's not a foot gun.

Clever naming, or visibility restriction, is a poor solution. You may prevent the function from being called explicitly more than once. But you don't prevent a future refactor from leading to the function that calls it being called more than once.

The crucial bit of humility that is necessary is to realize that there's no way you can foresee how these problems will come up in the future. The only thing you can do is be defensive.


> This is why we need to program defensively.

I agree wholeheartedly and I wish more developers felt this way.

I hate opening new projects and seeing the potential for null reference exceptions everywhere when you could instead just check the constraints directly and give a more meaningful response.
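
A hypothetical example of the difference (C++ here; the names are made up): check the constraint at the boundary and fail with a message that names the real problem, instead of letting a null pointer surface three stack frames later as a generic crash.

    #include <stdexcept>
    #include <string>

    struct Config { std::string endpoint; };

    std::string resolve_endpoint(const Config* cfg) {
        if (cfg == nullptr)
            throw std::invalid_argument("resolve_endpoint: no configuration was loaded");
        if (cfg->endpoint.empty())
            throw std::invalid_argument("resolve_endpoint: config has no 'endpoint' set");
        return cfg->endpoint;
    }

    int main() {
        Config cfg{"https://example.invalid/api"};
        return resolve_endpoint(&cfg) == cfg.endpoint ? 0 : 1;
    }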


GitHub's PR review tool has some irritating limitations this way. If a changed function could cause problems for an unchanged caller, there's no way to add a comment on the caller unless it's within a few lines of the changed code. You cannot add comments on specific file lines that are "too far" from a changed line.

My review flow now is to check out the branch under review, read it in the IDE with git annotate on, and do all my actual review work there so I can see the full context. Then I add line-by-line comments on GitHub where I can. Comments relating to older parts of the system that should change in tandem with the current PR have to go in as "general" comments, with the file and line number included manually.


Something that really cheeses me off about Atlassian (Bitbucket) code reviews is that by default they only show you the changed lines.

You can't see that the code they changed also lives in another function that they missed, or that in fact the whole thing should be factored out into a function and fixed once.



