
I'm actually here because my colleagues keep coming up with things posted here and on other sources and I was intrigued as to the consensus behind them. I'm pleasantly surprised to read that there is a nice critical balance of discussion.

The worry for me is that there is almost always an obvious disconnect between the steps carried out and the claimed benefit, which is the precise definition of a cargo cult. A lot of people can't seem to see this at all, and when asked for some sort of rational proof, something I do regularly, they tend to switch to a personal or ideological attack instead.

It's slightly tiring to be fighting on the front of rationalism in an industry which appears to be driven partially by fashion.

As for pair programming, 100% anecdotally I find that sometimes you need a second set of eyes on a problem when you can't see the forest for the trees. But mostly it's inefficient and unproductive and absolutely crippling for small teams.




I agree with your criticism of pair programming. I also agree that an extra pair of eyes is helpful on anything you write, which is why I'm strongly in favour of code reviews.


...although some people might argue that you could get the same value from a rubber duck ( https://en.wikipedia.org/wiki/Rubber_duck_debugging )


I want very different things from my rubber ducks and my code reviews.

Rubber ducking is great for forcing yourself to articulate your train of thought, spelling out your assumptions, and proof-reading your own logic.

Code reviews are great for catching errors you glaze over because you're too familiar with what you think you wrote to see what's actually written. They're also a great way to onboard people and enforce code standards and best practices.

Even when you're in a more senior position, code reviews can often teach you new things, such as when you touch systems the reviewer knows better and they happen to know a better approach.


I do this. It works. I have a stuffed parrot though. And everyone thinks I’m nuts but I don’t care :)


I despise code reviews, because the criticism is shallow, and offered too late, after the work has been done. I want the review in real time, and at a greater depth of insight. This is what pairing provides.


Shallow, too-late criticism sounds like a problem with how the reviews are handled.

At my previous job reviews were required before you were allowed to check in, meaning if it wasn't good enough you had to rework it. This meant junior engineers were quickly brought up to speed with code convention and best practices. If you didn't adhere to them your code wasn't approved.

I also found it a great way to catch subtle bugs or difficult to follow logic. But it does require actually having and taking the time to carefully review things, rather than giving them a passing glance.


Another helpful technique to good code reviews is to keep individual PRs small. Once a PR is over a few hundred lines, my eyes start to glaze over and by the end I'm just skimming.

I've had great success with breaking larger changes into smaller PRs that get code-review-merged into an integration branch. Then the integration branch gets merged to your main branch once everything's done and verified.
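For concreteness, that flow can be sketched with plain git commands. This is a hypothetical illustration: the branch names ("main", "integration", "feature-a") are made up, and the reviewed-and-merged PRs are simulated here as local `--no-ff` merges into the integration branch.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main          # -b needs git >= 2.28
git config user.email dev@example.com
git config user.name Dev

echo base > app.txt
git add app.txt && git commit -qm "initial commit"

# Long-lived integration branch for the larger change
git checkout -qb integration

# Small, reviewable PR #1 merges into integration, not main
git checkout -qb feature-a
echo "part 1" >> app.txt
git commit -qam "small PR 1"
git checkout -q integration
git merge -q --no-ff -m "merge PR 1" feature-a

# Small, reviewable PR #2, likewise
git checkout -qb feature-b
echo "part 2" >> app.txt
git commit -qam "small PR 2"
git checkout -q integration
git merge -q --no-ff -m "merge PR 2" feature-b

# Once everything is reviewed and verified, integration goes to main
git checkout -q main
git merge -q --no-ff -m "merge integration branch" integration
```

Each feature branch stays small enough to review carefully, while main only ever sees the fully assembled, verified change.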


Yes, I also have great experiences with this type of approach.


Logically, any review criticism is too late, because the code was already written! Catching the issue before commit is great, but the proper time for the feedback was during the actual writing. Now the code must be sent back for rewriting, and time has been wasted.


Catching problems before things are written is definitely better and that's why it's often a good idea to mention how you intend to approach problems in your daily sync. If someone has concerns, they can bring it up.

But if that fails, it is definitely better to review and catch it after the code is written than to submit badly written code out of fear of wasted effort. In my experience the vast majority of tasks are easily addressed by a single person and most proficient programmers will usually produce perfectly adequate code which doesn't raise any concerns. Having reviews to catch the odd lapse in judgement or logical misstep doesn't usually add a lot of overhead (as opposed to allocating two programmers to every menial task) but saves a lot more time and effort in avoided bugs and issues than the cost of having to rewrite something every now and then.

Additionally, any single submit should be kept to a manageable size so that they're easier to review and issues are caught early. The only times I can remember having insisted someone rewrite something non-trivial have been when they misunderstood some fundamental issue, did not communicate progress as they should have, and did not follow guidelines of solving a single task per submit. That type of behaviour is in itself a pattern which needs to be addressed.


> But if that fails, it is definitely better to review and catch it after the code is written than to submit badly written code out of fear of wasted effort.

Yes, without a doubt.

> Having reviews to catch the odd lapse in judgement or logical misstep doesn't usually add a lot of overhead (as opposed to allocating two programmers to every menial task)

Having paired for years, and soloed for years, I have found that the downsides of allocating two people are outweighed by the benefits of pairing, one of which is real-time code review.


I have done some pair programming. At my first job we sometimes did it when some problem proved especially tricky. Typically a tricky bug.

In some cases it's invaluable, absolutely. I just don't think that's a large percentage of the code written on a daily basis. For most tasks pairing seems to be either

1) One person tapping out simple code which practically writes itself while the other lazily nods along.

2) One person solving something tricky which happens to be their area of expertise while the other stares blankly. Maybe at that point it would be useful to explain things in detail, but that can make development take much longer, and if something like this requires rethinking halfway through it can be quite taxing. In that case, I would prefer waiting until it's done and compiling the result as an annotated review, which can be a great forum for asking questions.


When I've seen it adopted, it was in the context of XP. The red-green-refactor cycle affords a decent flow for pairs, who may switch off between test writing and feature writing. I can see that without the other XP practices, pairing may be awkward.


If the code requires such a substantial rewrite after review, that would be indicative of serious communication problems on the team. In 20-odd years of reviewing code, I can't think of many instances where a review led to a substantial rewrite.


I don't know what "substantial" means but, say, in a 3 day long ticket, surely all sorts of issues found in review can easily force another day or so of work.


Anything over half a day would be substantial, but really I'd consider any post code review fix that takes longer than an hour to be a red flag that we're not communicating effectively.

As I alluded to earlier, I've never encountered these sorts of problems in the healthy companies I've worked at (but I HAVE encountered them at unhealthy companies).


Code reviews sometimes seem like a micro-management tool: they often boil down to quibbling over things that are quite superficial, while simultaneously ignoring or missing big problems that lurk under the surface. Without some sort of automated linting and code style enforcement, this is what happens in code reviews. They serve as a reminder to the developer that his work is not trusted, and someone else must keep an eye on him at all times. This is very bothersome to people who have been around the block a few times, and are already capable of producing robust, clean product (this is often a pitfall of having someone who's just out of college reviewing code from an experienced dev).

Another problem I've observed in code reviews is that they turn into impromptu redesigning sessions. Most devs are quite opinionated, and code reviews often become an avenue to engage in bikeshedding that can actually harm morale, not to mention velocity and product quality.

The trouble with pairing is that it doesn't remedy these problems effectively, while simultaneously reducing the number of hands doing work. If two people just don't get along, pairing them is not going to magically solve the problem. And I guarantee that there will always be people who simply do not see eye to eye on many important things in most sizable teams.


Yeah, code reviews (like any other process) can be done wrong.

The way to propose a change in a "good way" (I am assuming here that no one has the unilateral capability of merging to master, and things thus have to pass through review) involves, to my mind, a few things:

- keep the change small ("small" here is a very fungible quality, but probably less than n * 1000 lines)

- do NOT mix "fix bug" / "add feature" / "refactor" (arguably, there are cases where feature-adding and bug fixing are intrinsically linked; let those through)

- include a change description detailing the why of the change (the "what" is the change itself)

As a reviewer, your task is not to critique the design (that is why everyone agreed on a design doc up front, no?). It is OK to review the structure of the code, though.

Hopefully, you have a "run this through a linter / style-checker" step integrated into your pipeline, so that should pretty much already be done. But if there are a few things left over, it is OK to point those out.
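One minimal way to get that integration is a git pre-commit hook that lints staged files before every commit, so style nits never reach the reviewer at all. This is a sketch under made-up assumptions: the grep for a "FIXME" marker stands in for whatever real linter (eslint, flake8, clang-format, etc.) the project actually uses.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Install a pre-commit hook that "lints" each staged file.
# A nonzero exit from the hook aborts the commit.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
for f in $(git diff --cached --name-only --diff-filter=AM); do
  if grep -q "FIXME" "$f"; then
    echo "lint failed in $f" >&2
    exit 1
  fi
done
EOF
chmod +x .git/hooks/pre-commit

echo "clean code" > ok.txt
git add ok.txt
git commit -qm "passes lint"                # hook lets this through

echo "FIXME handle errors" > bad.txt
git add bad.txt
git commit -qm "fails lint" || echo "commit rejected by hook"
```

The same check usually also runs server-side or in CI, since local hooks can be skipped with `--no-verify`; the hook just gives the fastest feedback loop.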

It is OK to point out that things that should be (but aren't) tested should have tests. If you can spot an edge case that isn't (or doesn't seem to be) tested, ask for that to be tested, also.

People need to be permitted to take the time to do a review.

With these things in place, a review should be pretty quick (at the pace of up to a few lines per second), and most of the time frictionless. But, it does require some discipline, both from the writer and the reviewer.


I have mixed feelings about code reviews. It seems like sometimes they're an opportunity to show off, so every review _will_ find something insignificant; sometimes code reviews are rubber stamps and just not useful. There's a balance, though, where you get concise, thoughtful responses.

It seems like there is something about pairing though that guarantees a focus on shallow insignificance. I don't like dismissing something that has smart proponents but I just can't find a way to make pair programming anything other than negative. The real time aspect rules out depth.



