> Mantras like "methods should be shorter than 15 lines of code" or "classes should be small" turned out to be somewhat wrong.
These hard rules may be useful when trying to instill good habits in juniors, but they become counterproductive when you start constraining experienced developers with arbitrary limits.
It’s really bad when you join a team that enforces rules like this. It almost always comes from a lead or manager who reads too many business books and then cargo-cults those books onto the team.
This is the bane of my existence at the moment, ~20 years into my career, and it frustrates me when I run into these situations while trying to get certain people to review pull requests (because I'm being kind, and adhering to a process, and there is genuinely valuable feedback at times). But on the whole it's like being dragged back down to working at a snail's pace.
- Can't refactor code because it changes too many files and too many lines.
- Can't commit large chunks of well tested code that 'Does feature X', because... too many files and too many lines.
- Have to split everything into a long sequence of consecutive pull requests, which becomes a process nightmare in its own right.
- The documentation comments get nitpicked to death with mostly useless remarks about not having periods at the ends of lines.
- End up having to explain every little detail throughout the function as if I'm writing a lecture; things like `/* loop until not valid */ while (!valid) {...` seemed to be what they wanted, but to me it made no sense whatsoever to even have that comment.
This can turn a ~50 line function into a 3-day process, a couple of hundred lines into a multi-week process, and a refactor of a thousand or two lines (while retaining full test coverage) into a multi-month process.
At one point I just downed tools and quit the company; the absurdity of it all completely drained my motivation, killed progress & flow, and led to features not being shipped.
Meanwhile, on projects I'm managing, I have a fairly good handle on 'OK, this code isn't the best, but it does work, it is fairly well tested, and it will be shipped as the beta', so as not to be obstinate.
After 20 years of doing this, I’m convinced that required PR reviews aren’t worth the cost.
In the thousands of pull requests I’ve merged across many companies, I have never once had a reviewer catch a major bug (a bug that is severe enough that if discovered after hours, would require an oncall engineer to push a hot fix rather than wait for the normal deployment process to fix it).
I’ve pushed a few major bugs to production, but I’ve never had a PR reviewer catch one.
I’ve had reviewers make excellent suggestions, but it’s almost never anything that really matters. Certainly not worth all the time I’ve spent on the process.
That being said, I’m certainly not against collaboration, but I think required PR reviews aren’t the way to do it.
The point of code reviews isn’t to catch bugs. It’s for someone else on the team to read your code and make sure they can understand it. If no one else on your team can understand your code, you shouldn’t be committing it to the repository.
Maybe. But then, sure, I can understand the code you wrote - on a syntactic/operational level. This adds Foos to bar instead of baz, and makes Quux do an extra Frob() call. Whatever, that's stupid stuff below junior level. What would actually matter is for me to understand why you're doing this, what it all means. Which I won't, because you're doing some code for symbolic transformation of equations for optimizing some process, and I'm doing data exchange between our backend and a million one-off proprietary industrial formats, and we only see each other on a team call once a week.
I'm exaggerating, but only a little. The point is, in a deep project you may have domain-specialized parts, and those specialties don't overlap well. Like, ideally I'd take you aside for an hour to explain the 101 of the math you're doing and the context surrounding the change, but if neither you nor I have the time, that PR is getting a +2 from me on the "no stupid shit being done, looks legit code-wise; assuming you know your domain and this makes sense" basis.
HN moment. I’ve never seen it happen in practice that someone says “I don’t understand it” and the author says “good point, I will simplify it”.
Rather, the opposite. I often saw people make unnecessarily complex or large PRs that were too much work to review, leading the reviewer to approve on the grounds of “seems like you know what you’re doing, and tbh I don’t have half a day to review this properly”.
Code review is a ritual. If you ask why we have it, people will give you hypothetical answers more often than concrete examples. Personally I’m a proponent of opt-in CRs, i.e. ask for a second pair of eyes when your spidey senses tell you to.
Our juniors write horribly complex code that senior devs have to ask to simplify. This happens all the time. And the juniors simplify and thank us for teaching and mentoring. It’s a big reason we do reviews. So we can control how dirty the code is before merging and so we can grow each other with constructive feedback. Sometimes it’s also just “LGTM” if nothing smells.
90% of comments in my team’s PRs come with suggestions that can be applied with a click (we use GitLab). It requires almost no effort to apply suggestions and it’s often not much extra work for reviewers to explain and suggest a concrete change.
I agree that reviews should be used pragmatically.
Get (or create) better colleagues. It's usually pretty easy to identify if people are approving pull requests that they don't understand. Pull them aside and have a professional talk about what a pull request review is. People want to do good, but you have to make it clear that you value their opinion.
If you treat the people around you as valuable collaborators instead of pawns to be played to fulfill your processes, your appreciation for reviews will transform. Remember that it's their work too.
> I often saw people make unnecessarily complex or large PRs that were too much work to review, leading the reviewer to approve on the grounds of “seems like you know what you’re doing, and tbh I don’t have half a day to review this properly”
That just seems like company-wide apathy to me. Obviously you have to make an effort to read the code, but there are lots of ways developers can overcomplicate things because they were excited to try a pattern or clever solution. It doesn't make them bad devs; it's just an easy trap to fall into.
These should not pass a code review just because the code "works." It's totally acceptable to say "we're not gonna understand this in 3 months the way it's written, we need to make this simpler" and give some suggestions. And usually (if you're working with people that care about the workload they make for others) they will stop after a few reviews that point this out.
We've done this at our company and it's helped us immensely. Recognizing whether the code is unnecessarily complex or the problem is inherently complex is part of it, though.
I watched pre merge code reviews become a requirement in the industry and catching bugs was almost always the #1 reason given.
The times I've seen a 2nd set of eyes really help with the understandability of code, it was almost always collaboration before or while the code was being written.
I would estimate that something like 1 out of 100 PR reviews I've seen in my life was really focused on improving understandability.
Wow, someone who finally has the same unpopular opinion as I do. I'm a huge fan of review-optional PRs. Let it be up to the author to make that call; if it were really important to enforce, it would be more foolproof to do so with automation.
Unfortunately every time I've proposed this it's received like it's sacrilegious but nobody could tell me why PR reviews are really necessary to be required.
The most ironic part is that I once caught a production-breaking bug in a PR while at FAANG and the author pushed back. Ultimately I decided it wasn't worth the argument and just let it go through. Unsurprisingly, it broke production but we fixed it very quickly after we were all finally aligned that it was actually a problem.
To catch stupid mistakes like an extra file, an accidental debug flag, a missing compiler hint that has to be added to migration scripts, etc.
To ensure someone who doesn't quite understand the difference between dev and production build pipelines doesn't break it.
To ensure a certain direction is being followed when numerous contractors are working on the code. For example a vague consistency in API designs, API param names, ordering, etc.
To check obvious misunderstandings by juniors and new hires.
To nix architect astronauts before their 'elegant' solution for saving a string to a database in 500 lines gets added.
To check the code is actually trying to solve the ticket instead of a wrong interpretation of the ticket.
To get introduced to parts of the codebase you haven't worked on much.
But as with anything you get from it what you put in.
None of those are good reasons why PR reviews are necessary. They are examples of things that it's theoretically possible a PR review might catch. But there's no information there about how likely those things are to be caught.
Without that absolutely critical information, no cost benefit analysis is possible.
In my experience across many companies, PR reviews almost never catch any of those bugs or bring any of those benefits.
If you worked with me it would be worth doing mandatory PRs.
One trick, and I'm not being sarcastic, is to read every line. Even if you don't understand the change as a whole, that catches things.
Another trick, and again not sarcastic this is genuine advice. Read every line of your own PRs before you submit them. It's surprising how much I catch doing that. Same with git commits. It's also noticeable which of your colleagues don't do this as their PRs are the ones with obvious mistakes.
All of this is much easier and more effective with multiple PRs per day, and breaking bigger tickets into smaller ones.
If you're constantly doing big, multi-day commits you're doing development wrong. You lose a lot of the benefits of source control and the tooling around it.
I still do big commits sometimes, especially when refactoring, but even then it's very important to keep those big commits focused on a particular change. If I find bugs or tweak a feature I've been meaning to change, I try to make a new branch and push that to main, then rebase my feature branch off main. Or cherry-pick them out before the refactor PR.
> Read every line of your own PRs before you submit them
I do. Everyone should. I’m also a fan of small focused PRs.
> If you worked with me it would be worth doing mandatory PRs.
Given the number of PRs I’ve merged and the number of mistakes that reviewers have caught, I think it’s very unlikely that you’d catch those things at a frequency high enough to justify the cost.
I’m not doubting that you can find those things or that you have found them. But again I have merged thousands of PRs with hundreds of reviewers across numerous companies in multiple languages and I have never had a reviewer catch a major bug.
That’s a large enough sample size, with no effect at all, that I’m going to need hard evidence to make me believe that people are finding these things at a rate high enough to justify the cost.
>Unfortunately every time I've proposed this it's received like it's sacrilegious but nobody could tell me why PR reviews are really necessary to be required.
Required PR reviews means that if someone steals your credentials, or kidnaps your child, you can't get something into production that steals all the money without someone else somewhere else having to push a button also.
It's the two-person rule, the two nuclear keyswitches.
This is definitely not why PR reviews are required. Most companies don't really know why they require them, but I've definitely never heard one say it was because they were afraid of malicious code from stolen credentials.
There's so many other ways you can inject malicious code with stolen credentials that doesn't require a PR in every production environment I've ever worked in. There's much lower hanging fruit that leaves far fewer footprints.
SOC2 doesn't require code reviews. SOC2 is just a certification that you are following your own internal controls. There's nothing that says required PR reviews have to be one of your internal controls. That's just a common control that companies use.
I would argue that "common control that companies use" falls under "industry standard", and I would say it would make it harder to pass certification without PR reviews documented on GitHub or something similar. So it's not required, but everyone expects you to do it :)
The reason this is common is that a company hires a SOC2 consultant who tells them that PR reviews are required, despite the fact that this is a complete fabrication.
Locking yourself into an enormously expensive process with no evidence of its efficacy, just because you don't want to read up on the process yourself or push back on a misinformed auditor, is a terrible business decision.
Well "Peer Review" or "Code Review" is required - pull requests are easiest way to have it all documented with current state of art tooling. Otherwise you have to come up with some other way to document that for purpose of the audit.
I agree with you. If you give each dev a kind of sandbox to "own" within a project, they'll learn to find their own bugs, write both simple and robust code, do lots of param checking — grow as an engineer that way.
>Allowing anyone to promote anything to production without any other eyes on it is problematic.
In my experience the people who are promoting things to production that shouldn't be will find a way to do it. They'll either wear down the people who want to stop it, or they'll find someone else to approve it who doesn't know why it shouldn't be approved or doesn't care.
My hypothesis is that requiring any 2nd random engineer in the company to approve production code doesn't provide enough value to justify the cost.
There may be other controls that are worth the cost.
However, our industry has been shipping software for a long time without this requirement, and I've seen no evidence that the practice has saved money, reduced the number of bugs, or improved software quality by any other metric. I think it's time we examine the practice instead of taking it on faith that it's a net benefit.
>Not realizing this is extremely telling.
Nice way of saying, I don't agree with you so I must be an idiot.
But there isn't actually a second set of eyes, because the second set of eyes you're thinking about is complaining about formatting or slamming the approve button without actually looking.
Yes, same with QA sometimes: devs set the bar lower because QA can test it. It just makes a bunch of back and forth, and when stuff breaks nobody feels responsible.
> I have never once had a reviewer catch a major bug
Just in 2024, I've had three or four caught[0] (and caught a couple myself on the project I have to PR review myself because no-one else understands/wants to touch that system.) I've also caught a couple that would have required a hotfix[1] without being a five-alarm alert "things are down".
[0] including some subtle concurrency bugs
[1] e.g. reporting systems for moderation and support
I'm one of the rare individuals who really tries to review code and leave helpful comments. I've been on the receiving end of really big PRs and can say I understand why you're being told to break things up into smaller chunks.
Most of the devs who submit large PRs just don't have a good grasp of organizing things well enough. I've seen this over and over again and it's due to not spending enough time planning out a feature. There will be exceptions to this, but when devs keep doing it over and over, it's the reviewer's job to reject it and send it back with helpful feedback.
I also understand most people don't like the friction this can create, and so you end up with 80% of PRs being rubber-stamped and bugs getting into production because the reviewers just give up on trying to make people better devs.
When I review code I never think I am there to make people better devs.
I’m reviewing the code because I don’t want shit code merged into the code base I am responsible for operating. I’m going to be the one debugging that. Don’t just merge shit you feel like merging.
I don’t have your experience but I personally think some of this feedback can be warranted.
> Can't refactor code because it changes too many files and too many lines.
This really depends on the change. If you are just doing a mass rename, like updating a function signature, fair enough; but if you are changing a lot of code, it's very hard to review. It puts a lot of cognitive load on the reviewer, who might not have the same understanding of the codebase as you.
> Can't commit large chunks of well tested code that 'Does feature X', because... too many files and too many lines.
Same as the above: reviewing is hard, and more code means people get lazy and bored. Just because the code is tested doesn't mean it's correct; it just means it passes tests.
> Have to split everything into a long sequence of consecutive pull requests, which becomes a process nightmare in its own right
This is a planning issue; if you correctly size tickets, you aren't going to end up in messy situations as often.
> The documentation comments get nitpicked to death with mostly useless remarks about not having periods at the ends of lines
Having correctly written documentation is important. It can live a long time, and if you don't keep an eye on it, it can become a mess. Ideally you should review it yourself before submitting to avoid these issues.
> End up having to explain every little detail throughout the function as if I'm writing a lecture; things like `/* loop until not valid */ while (!valid) {...` seemed to be what they wanted, but to me it made no sense whatsoever to even have that comment
I definitely agree with this one. Superfluous comments are a waste of time.
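To make it concrete, here's a minimal sketch of the difference (`tryParse` and the retry rationale are invented for illustration):

```ts
// Hypothetical fragment; tryParse is assumed to exist for the example.
declare function tryParse(input: string): boolean;

function readRecord(input: string): void {
  let valid = false;
  /* loop until not valid */ // noise: restates (and even misstates) the code
  while (!valid) {
    valid = tryParse(input);
  }
}

// A comment worth writing records intent the code can't express, e.g.:
// "Retry because the upstream feed occasionally truncates a record."
```

The first kind of comment is what nitpick-driven review demands; only the second kind actually helps the next reader.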
Obviously this is just my opinion, and you can take things too far, but I do think that making code reviewable (by making it small) goes a long way. No one wants to review thousands of lines of code at once. It's too much to process, and people will do a worse job.
> This is a planning issue; if you correctly size tickets, you aren't going to end up in messy situations as often.
No, it’s “this refactor looks very different to the original code because the original code thought it was doing two different things and it’s only by stepping through it with real customer data that you realized with the right inputs (not documented) it could do a third thing (not documented) that had very important “side effects” and was a no-op in the original code flow. Yea, it touches a lot of files. Ok, yea, I can break it up step by step, and wait a few days between approval for each of them so that you never have to actually understand what just happened”.
So it's not just a refactoring then; it's also bug fixes + refactoring. In my experience, those are the worst PRs to review. Either just fix the bugs, or just refactor. Don't do both, because now I have to spend more time checking the bugs you claim to fix AND checking your refactoring for new bugs.
The most common IME are bugs that come from some wrong conceptual understanding underpinning the code. Rewriting the code with a correct conceptual understanding automatically fixes the bugs.
And there are multi-PR processes that can be followed to most successfully convert those changes in a comprehensible way.
It'll often include extra scaffolding and/or extra classes, and then renaming those classes to match the old classes' names after you're done, to reduce future cognitive load.
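A sketch of that scaffolding-and-rename approach, sometimes called a parallel change (class names here are hypothetical):

```ts
// PR 1: introduce the replacement next to the legacy implementation.
class Pricer { total(qty: number, unit: number) { return qty * unit; } }
class PricerNext { // rebuilt on the corrected conceptual model
  total(qty: number, unit: number) { return Math.round(qty * unit * 100) / 100; }
}

// PRs 2..n: migrate call sites one module at a time.
const pricer = new PricerNext();
console.log(pricer.total(3, 4.5)); // 13.5

// Final PR: delete Pricer and rename PricerNext back to Pricer, so the
// temporary scaffolding leaves no trace for future readers.
```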
One metric I like to give my team is to have any new PR start a review in less than 15 minutes and be completed within 15 minutes. So, the longest you should wait is about 30 minutes for a review. That means teams either go "fuck it" and rubber stamp massive PRs -- which is a whole different issue -- or they take it seriously and keep PRs small to get their PRs reviewed in less than 30 minutes.
In most cases where I see responses like this, they're not surprised to wait hours or days for a PR review. In that case, it makes sense to go big, otherwise you'll never get anything done. If you only have to wait half an hour, max, for a PR review; the extra code churn is 1000% worth it.
As a developer, I want my PRs to actually be reviewed by my coworkers and to have issues caught as a second layer of defense, etc.
As a reviewer, I effectively stopped approving things I couldn't give at least a cursory, reasonable glance (and tried to encourage others to follow suit because if we're not reviewing things, why not just push directly to main).
As a consequence, I have:
* tried to review most things within like half an hour of their announcement in the shared MR channel
* requested a pair programming session, and offered to do a pair programming session, for any large and semi-or-fully automated refactoring session, like running a linter or doing a multi-file variable rename (the pair programmer immediately comments on and approves the MR when it appears)
* tried to limit my PRs to approximately 400 lines (not a rigid rule)
There were some specific instances of people not liking the "you must pair program if you're going to touch 400 files in one PR" requirement; but otherwise, I would like to think those on my team liked the more regular PRs, more people doing the PRs, etc, that resulted from this and some healthy culture changes.
I would also like to feel like the more junior devs were more willing to say anything at all in the PRs because they could follow the change.
I’ve seen this and variations done by teams to implement the metric. Usually, the “biggest” friction comes from “how do we know a PR needs to be reviewed within the time frame?” To which I always want to answer: “you have a mouth, put noises through it.” Sigh, sometimes I miss the military… anyway, toxic behavior aside, this is usually the biggest thing. I have to remind them that they go get coffee or smoke at least every hour, but rarely at the same time; so maybe then might be a good time to just do a quick check for an open PR. Or turn on notifications. Or if it’s urgent, mention it in the dev team channel.
But yeah, it’s hard to get the culture rolling if it isn’t already in place nor has anyone in the company worked with a culture like that.
I'm all for low-latency reviews, but this target seems crazy: a perfect recipe for a lot of apparent activity for little actual progress. Maybe it depends on the project, but for a lot of projects 15 minutes of review time means you basically are only going to accept trivial changes.
As it turns out, most of the work that most developers do is updating or enhancing CRUD apps. There's already a plan and an intent that just needs to be typed out.
I've found 15-30 minutes to be plenty of time to review about a day's worth of code. It's enough time to process what the code is doing and iterate over the tests, in general.
Here's a scary thought: if something small takes 15-30 minutes to appropriately process ... how much longer do *large* changes take? Can someone keep all that in their mind that whole time to comprehend and process a huge change?
> 15 minutes of review time means you basically are only going to accept trivial changes.
Um, yes. This is 100% the point. There is no amount of refactoring, bug fixing, or features that cannot be expressed as a chain of trivial changes.
What you usually see happen is that instead of spending a week experimenting with 15 different refactors, an engineer opens a PR with what they think they're going to try first. Other engineers point out how they had tried that before and it didn't work; but maybe this other way will. So, they end up "working together" on the refactor instead of one developer getting lost in the sauce for a week seeing what sticks to a wall.
In essence, about the same amount of time is spent; but the code is higher quality and no architecture reviews during code reviews (which is another rule that should exist on a team -- architecture reviews should happen before a single line of code is touched).
> only by stepping through it with real customer data that you realized with the right inputs (not documented) it could do a third thing (not documented) that had very important “side effects” and was a no-op in the original code flow
Sounds like the 'nightmare' was already there, not in the refactor. The first step should be some tests to confirm the undocumented behaviour.
Some of your complaints seem to be about peer review ('approval'). I found my work life improved a lot once I embraced async review as a feature, not a bug.
As for 'break it up step by step' - I know how much I appreciate reviewing a feature that is well presented in this way, and so I've got good at rearranging my work (when necessary) to facilitate smooth reviews.
I've found processes like this to work better, too. Basically, the one big PR is like building a prototype to throw away. And the benefit is that it has to get thrown away, because the PR will never pass review.
Yes, though it does depend on how good the commenting system is; and, for something like that, you're still probably going to want a meeting to walk people through such a huge change.
And you'd better hope you're not squashing that monstrous thing when you're done.
I do object to the notion of something being a planning issue when you're talking about a day's worth of work.
Implement X, needs Y and Z, ok that was straightforward, also discovered U and V on the way and sorted that out, here's a pull request that neatly wraps it up.
Which subsequently gets turned into a multi-week process, going back & forth almost every day, meaning I can't move on to the next thing. Meanwhile, I'm looking at the cumulative hourly wages of everybody involved, and the cost is... shocking.
> Implement X, needs Y and Z, ok that was straightforward, also discovered U and V on the way and sorted that out, here's a pull request that neatly wraps it up
This sounds very difficult to review to be honest. At a minimum unrelated changes should be in their own pull request (U and V in your example).
I work as a tech lead, so I get a lot of leeway in setting process. For small PRs, we use the normal “leave comments, resolve comments” approach. For large PRs, we schedule 30m meetings, where the submitter can explain the changes and answer questions, and record any feedback. This ensures everyone is on the same page with the changes, gives folks a chance to rapidly gather feedback, and helps familiarize devs who do not work in that area with what is going on. If the meeting is insufficient to feel like everyone is on the same page and approves the changes, we schedule another one.
These are some of the best meetings we have. They are targeted, educational, and ensure we don’t have long delays waiting for code to go in. Instead of requiring every PR to be small, which has a high cost, I recommend doing this for large/complex projects.
One additional thing to note on small PRs: often, they require significant context, which could take hours or even days, to be built up repeatedly. Contrast that with being able to establish context, and then solve several large problems all at once. The latter is more efficient, so if it can be enabled without negative side effects, it is really valuable.
I want my team to be productive, and I want to empower them to improve the codebase whenever they see an opportunity, even if it is not related to their immediate task.
One minor piece of insight from me is about release management vs. pull requests.
As you say, it's much easier to schedule a 30-minute meeting; then we can - with context - resolve any immediate nitpicks you have, but we can also structure bigger things.
'Would this block a release?'
'Can we just get this done in the PR and merge it?'
'Ok, so when it's done... what is the most important thing that we need to document?'
Given that even after it's merged, it's going to sit in the repo for a while until we decide to hit the 'release' button, this lets people defer stuff to work on next and defines a clear line of 'good enough'.
How do you rework a core process, then? If you rework a major unit that touches just about everything... Sharding something like that can break the actual improvement it is trying to deliver.
Like... increase the performance of a central VM. You'll touch every part of the code, but probably also build a new compiler analysis system. The system is separate from the existing code, but useless without the core changes. Separating the two can ruin the optimisation meant to be delivered, because the context is no longer front and center, allowing more quibbling to degrade the changes.
Agree. Another item here that is contextual: what is the cost of a bug? Does it cost millions, and do we find that out immediately, or does it take months? Or does it not really matter, and when we find the bug it will be cheap to fix? The OP joining a new company might not have the context that existing employees have about why we’re being cautious/clear about what we’re changing, as opposed to smuggling in refactors in the same PR as a feature change.
I’m going to be the guy that is asking for a refactor to be in a separate commit/PR from the feature and clearly marked.
It doesn’t justify everything else he mentioned (especially the comments piece) but once you get used to this it doesn’t need to extend timelines.
> This is a planning issue; if you correctly size tickets, you aren't going to end up in messy situations as often.
I think the underlying issue is what counts as an appropriate “unit of work”. The parent commenter may want to ship a complete/entire feature in one MR; ticketing-obsessed people will have some other metric. The merge process may be broken in this aspect. I would rather explain the changes to the reviewer, to bring them up to speed and make their cognitive load easier.
This. The solution to long, multi-round MR reviews is a single pair-review session where most of the big-picture aspects can be addressed immediately and verbally discussed and challenged.
IMHO it is the same as chat. If talking about an issue over mail or chat takes more than 3-5 messages, trigger a call to solve it face to face.
> The documentation comments get nitpicked to death with mostly useless remarks about not having periods at the ends of lines
> End up having to explain every little detail throughout the function
For these cases I like to use the ‘suggest an edit’ feature on GitLab/GitHub. You can have the changes queued up in the comments and batch-commit them together, and it takes almost no additional time/effort for the author. I typically add these suggestion comments and give an approve at the same time for small nitpicks, so there's no slowdown in the PR process.
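For example, a review comment with a fenced `suggestion` block (supported by both GitLab and GitHub; the replacement line here is invented) queues up the exact fix, and the author applies it with one click:

````
Nit: missing period at the end.

```suggestion
// Retries are capped at three attempts.
```
````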
I still want to let the author have the final say on whether they accept, reject, or further modify the change. Editing the branch directly might cause rebasing/merge conflicts if they’re addressing other people's comments too, so I don't typically edit their working branch unless they ask me to.
I am trying my best to build in an inordinate amount of upfront linting and automated checks just to avoid such things - and then I still need to do a roadshow, or lots of explanations - but that’s probably good.
But the good idea is to say “we all have the same brutal linting standards (including full stops in docs!)” - so hopefully the human linter will actually start reading the code for what it is, not what it says.
I'm also a fan of linting everything. Custom linter rules ftw.
This and documenting non-lintable standards so that people are on the same page ("we do controllers like this").
This is how I like to build and run my teams. This makes juniors so much more confident because they can ship stuff from the get go without going through a lengthy nitpicky brutal review process. And more senior devs need to actually look at code and business rules rather than nitpicking silly shit.
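As a sketch of how small such a custom check can be (this is illustrative, not a real tool or plugin), here's one that flags doc comments missing their full stop so that no human ever has to:

```ts
// Hypothetical homegrown check: find JSDoc comments that don't end in
// punctuation, so reviewers never have to comment on it again.
import { readFileSync } from "node:fs";

const docComment = /\/\*\*([\s\S]*?)\*\//g;

export function missingPeriods(path: string): number[] {
  const source = readFileSync(path, "utf8");
  const offending: number[] = [];
  for (const match of source.matchAll(docComment)) {
    const text = match[1].replace(/^\s*\*/gm, "").trim(); // strip leading '*'s
    if (text !== "" && !/[.!?]$/.test(text)) {
      // Convert the match offset into a 1-based line number.
      offending.push(source.slice(0, match.index).split("\n").length);
    }
  }
  return offending;
}
```

Wire something like this into CI and that whole class of nitpick disappears from review entirely.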
> This makes juniors so much more confident because they can ship stuff from the get go without going through a lengthy nitpicky brutal review process.
I had not considered that linters could greatly help new developers in this way, especially if you make it a one-button linting process for all established development environments.
Thanks for the insight! I will use this for the future.
There is a huge incentive for people who don't know how to code/create/do stuff to slow things down like this, because it allows them many years of runway at the company.
They are almost always cloaked in virtue signals.
Almost every established company you join will already have had this process going for a long time.
Doing stuff successfully at such a company is dangerous to the hierarchy, and it incurs an immune response to shut down or ostracize the successful doing-of-stuff, so the only way to survive or climb is to do stuff unsuccessfully (so they look good).
Indeed, cognitive load is not the only thing that matters. Non-cognitive toil is also a problem and often enough it doesn't get sufficient attention even when things get really bad.
We do need better code review tools though. We also need to approach that process as a mechanism of effectively building good shared understanding about the (new) code, not just "code review".
As a reviewer I've seen numerous examples of PRs that were basically out of sync with the rest of the project, did not solve the problem they were supposed to solve, or added buggy or unmaintainable code.
Arguments like "but it works in the majority of cases" are a way to delegate fixing issues to somebody else later. Unless no one will be using that code at all, in which case it should not be merged either.
I’m 15 years in and I feel basically the same. I end up making a feature or change, then going back and trying to split it into chunks that are digestible to my colleagues. I’ve got thousands of lines of staged changes that I’m waiting to drip out to people at a digestible pace.
I yearn for the early stage startup where every commit is a big change and my colleagues are used to reviewing this, and I can execute at my actual pace.
It’s really changed the way I think about software in general, I’ve come around to Rich Hickey’s radically simple language Clojure, because types bloat the refactors I’m doing.
I’d love to have more of you where I work, is there some way I can see your work and send some job descriptions and see if you’re interested?
> I end up making a feature or change, then going back and trying to split it into chunks that are digestible to my colleagues.
If you are doing this AFTER you've written the code, it is probably way easier to do it as you go. It's one thing if you have no idea what the code will look like from the beginning -- just go ahead and open the big PR and EXPLAIN WHY. I know that I'm more than happy to review a big PR if I understand why it has to be big.
I will be annoyed if I see a PR that is a mix of refactoring, bug fixes, and new features. You can (and should) have done those all as separate PRs (and tickets). If you need to refactor something, refactor it, and open a PR. It doesn't take that long and there's no need to wait until your huge PR is ready.
Solving creative problems is often iterative, and one thing I'm very concerned about when doing engineering management is maintaining momentum and flow. Latency hierarchies are a really good example: you have registers, then cache, then memory, SSD, network, etc., and consulting with another human asynchronously is like sending a message to Jupiter (in the best case).
So, with an iterative process, every time you introduce (at best) hour-long delays, you end up sitting on your arse twiddling your thumbs doing nothing until the response comes back.
The concept of making PRs as you go fails to capture one of the aspects of low-latency problem solving, which is that you catch a problem, you correct it and you revise it locally, without exiting that loop. Which is problematic because not only have you put yourself in a situation where you're waiting for a response, but you've stopped half-way through an unfinished idea.
This comes back to 'is it done', a gut feel that it's an appropriate time to break the loop and incur the latency cost, which for every developer will be different and is something that I have grown to deeply trust and adjust to for everybody I work with.
What I'm getting at is the iterative problem solving process often can't be neatly dissected into discrete units while it's happening, and after we've reached the 'doneness' point it takes much more work to undo part of your work and re-do it than it took to do originally, so not only do you have the async overhead of every interaction, but you have the cognitive burden of untangling what was previously a cohesive unit of thought - which again is another big time killer
What I mean is, you make your commit, cherry-pick it over to the main branch, and open a draft PR. It doesn't break your flow, it doesn't stop anything, and it's pretty quick. It also gives you a quick gut check when you see the PR; if you think your team members won't understand "why" it needs to be refactored, then you have one of two problems:
1. Your refactoring is probably going in the wrong direction. Team members will be able to help here more than ever. Let them bikeshed, but don't stop working on your main refactor yet. Revisit later and integrate their changes.
2. The PR is too small; it will have to be part of a larger PR.
In my experience, people tend to have the first problem, and not the second one, but they think they have the second one. There are many of these "massive refactoring" PRs I've reviewed over the last 20 years where the refactoring makes the code worse, overall. Why? Because refactoring towards a goal (implementing a feature, fixing a bug, etc.) doesn't have the goal refactoring should have: improving code maintainability. So, the refactored code is usually LESS maintainable, but it does what they wanted.
If you make refactor PRs as you go, do you end up merging refactors towards a dead end and then--once you realize it's a dead end--merging even more refactors in the other direction?
I usually wait until I have the big PR done and then merge refactors towards it, because then at least I know the road I'm paving has a workable destination.
This is why I design the heckin' huge change at the start, and then cherry pick the actual change (and associated tests) into a ton of smaller PRs, including "refactor here", "make this function + tests", "make this class + tests", "integrate the code + tests", and so on, as many times as necessary to have testable and reviewable units of code.
If I went about and made a ton of changes that all went into dead ends, honestly, I would get pretty demoralized and I think my team would get annoyed, especially if I then went through and rolled back many of those changes as not ending up being necessary.
These same people also want to see your GitHub history filled with deep green come review time. I start to wonder if they think high levels of GitHub activity is a proxy of performance or if it’s a proxy of plying the game the way they insist you play.
This is a big problem with reviews where the author is capitulating because they, with gritted teeth, acknowledge it's the only way to get the desired result (jumping over a hurdle).
So you blindly accept an ill-informed suggestion because that's the only way you can complete the process.
Aye. Sign of the times. You're 20+ years in, so I'm preaching to the choir and old-man-yelling-at-cloud here.
Cargo culting + AI are the culprits. Sucks to say, but engineering is going downhill fast. First wave of the shitularity. Architects? Naw, prompt engineers. Barf. Why write good code when a glorified chatbot could do it shittier and faster?
Sign of our times. Cardboard cutout code rather than stonemasonry. Shrinkflation of thought.
Peep this purified downvote fuel:
Everything is bad because everyone is lazy and cargo cults. Web specifically. Full-stop. AI sucks at coding and is making things recursively worse in the long run. LLMs are nothing more than recursive echo chambers of copypasta code that doesn't keep up with API flux.
A great example of this is the original PHP docs, which so, so many of us copypasta'd from, leading to an untold amount of SQL injections. Oopsies.
Similarly, and hunting for downvotes: React is a templating framework that is useful but does not even meet its original value proposition, which is state management in UI. Hilariously tragic. See: the original example of the message desync state issue on FB. Unsolved for years by the purported solution.
The NoSQL flash is another tragic comedy. Rebuilding the wheel when there is a faster, better wheel already carefully made. Postgres with JSONB.
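To illustrate the point (table and fields invented):

```sql
-- Schemaless-style storage with indexing and querying, in plain Postgres.
CREATE TABLE events (id bigserial PRIMARY KEY, payload jsonb NOT NULL);
CREATE INDEX events_payload_idx ON events USING gin (payload);

INSERT INTO events (payload) VALUES ('{"type": "login", "user": "alice"}');

-- Containment query, served by the GIN index.
SELECT payload->>'user' FROM events WHERE payload @> '{"type": "login"}';
```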
GraphQL is another example of Stuff We Don't Need But Use Because People Say It's Good. Devs: you don't need it. Just write a query.
-
You mention a hugely important KPI in code. How many files, tools, commands, etc. must I touch to do the simplest thing? Did something take me a day when it should have taken 30s? This is rife today; we should all pay attention. Pad left.
Look no further than hooks and contexts in React land for an example. Flawed to begin with, simply because "class is a yucky keyword". I keep seeing this in "fast moving" startups: the diaspora of business logic spread through a codebase, when simplicity and unity are key, which you touch on. An absolute waste of electricity and runway, all thanks to opinionation.
Burnt runways abound. Sometimes I can't help but think engineering needs a turn it off and then on again moment in safe mode without fads and chatbots.
> Everything is bad because everyone is lazy and cargo cults.
It’s an interesting series of events that led to this (personal theory). Brilliant people who deeply understood fundamentals built abstractions because they were lazy, in a good way. Some people adopted those abstractions without fully comprehending what was being hidden, and some of those people built additional abstractions. Eventually, you wind up with people building solutions to problems which wouldn’t exist if, generations above, the original problem had been better understood.
The road is paved with good intentions; it's not that they were lazy, but that they intended to distill wisdom to save time. Then yes, the abstractions were adopted without fully comprehending what was hidden, and those people then naively built additional layers of abstractions.
So yes, if the original problem had been better understood, then you wouldn't have a generation of React programmers doing retarded things.
Having watched many junior developers tackle different problems with various frameworks, I have to say React is conducive to brainrot by default. Only with a fundamentals-first approach do you avoid one kind of spaghetti, and even then you end up with another kind, because React is fundamentally engineered towards producing spaghetti code unless you constantly fight the inertia of spaghettification.
It's like teaching kids about `GOTO`... That is, IMO, the essence of React.
> it's not that they were lazy, but that they intended to distill wisdom to save time.
Yes – I was referring to lazy in the sense of the apocryphal quote from Bill Gates:
“I choose a lazy person to do a hard job, because a lazy person will find an easy way to do it.”
> Only with a fundamentals-first approach do you avoid one kind of spaghetti, and even then you end up with another kind, because React is fundamentally engineered towards producing spaghetti code unless you constantly fight the inertia of spaghettification.
I’ve been guilty of this. Thinking that a given abstraction is unnecessary and overly-complicated, building my own minimal abstraction for my use case, and then slowly creating spaghetti as I account for more and more edge cases.
This sounds more like a case where you need a “break-the-glass” like procedure where some checks don’t apply. Or the checks should be non blocking anyway.
I've had a similar experience several times over the years. Even at companies with no working product that ostensibly wanted to 'move fast and break things'. And I do the same thing; quit and move on. I'm pretty convinced people like that more-or-less can't be reasoned with.
My question is: is this getting more common as time goes on, or do I just feel like it is?
Valid point; it's even mandatory in this case. Sometimes people do it for the sake of it. Maybe because there's nothing else to make them feel important?
In critical systems, I hope that's the case, though.
You always need to look at the track record of the team. If they were not producing solid consistent results before you joined them, it's a very good indicator that something's fishy. All that "they are working on something else that we can't tell you" is BS.
If they were, and you were the only one treated like that, hiring you was a decision forced upon the team, so they got rid of you in a rather efficient way.
That's rough. Of course some amount of thoughtfulness towards "smallest reasonable change" is valuable, but if you're not shipping then something is wrong.
As for the "comments on every detail" thing... I would fight that until I win or have to leave. What a completely asinine practice to leave comments on typical lines of code.
I like to call these smells, not rules. They're an indication that something might be wrong because you've repeated code, or because your method is too long, or because you have too many parameters. But it might also be a false positive because in this instance it was acceptable to repeat code or have a long method or have many parameters.
Sometimes food smells because it turned bad, and sometimes it's smelly because it's cheese.
It's the same with writing. The best authors occasionally break the rules of grammar and spelling in order to achieve a specific effect. But you have to learn the rules first, and break them only intentionally rather than accidentally. Otherwise your writing ends up as sloppy crap.
(Of course some organizations have coding conventions that are just stupid, but that's a separate issue.)
Same deal with DRY, the principle is obviously correct but people can take it too literally. It's so easy to get yourself in a huge mess trying to extract out two or three bits of code that look pretty similar but aren't really used in the same context.
The problem with DRY and generic rules around size, etc. really seems to be figuring out the boundaries, and that's tough to get right, even for experienced devs, plus very contextual. If you need to open up a dozen files to make a small change you're overwhelmed, but then if you need to wade through a big function or change code in 2 places you're just as frustrated.
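A sketch of the trap (validators invented for illustration):

```ts
interface Address { country: string; postcode: string; }

// Today these look identical, and DRY says merge them...
function validateShipping(a: Address): boolean {
  return a.country !== "" && a.postcode !== "";
}
function validateBilling(a: Address): boolean {
  return a.country !== "" && a.postcode !== "";
}

// ...but shipping soon needs carrier rules and billing needs tax rules.
// A merged validateAddress(a, { forBilling: true }) accumulates flags and
// serves neither caller well; the duplication was coincidental, not real.
```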
Hard rules are the problem. There is a lot of "it depends."
After over 40 years of programming, I continue to reduce the size of my functions and find them easier to write and understand when I return to them. Ten lines is now my personal guideline.
However, a linear function with only tiny loops or conditionals can be easily understood even when it is hundreds of lines long; not so a function with nested conditionals and loops, where there is a natural decomposition into functions.
I observed the same guidelines-become-rules problem when test coverage became popular. The guidelines soon became metrics rather than tools to think about code and tests. People became reluctant to add sanity-check code for things that should never happen because it brought down code coverage.
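For example, this kind of guard (a hypothetical status machine) is exactly what coverage-as-a-metric punishes:

```ts
type Status = "pending" | "active" | "closed";

function next(status: Status): Status {
  switch (status) {
    case "pending": return "active";
    case "active": return "closed";
    case "closed": return "closed";
    default:
      // Unreachable today; catches a bad cast or a future new status.
      // Coverage tools report this line as uncovered, so teams chasing a
      // number are tempted to delete exactly this kind of sanity check.
      throw new Error(`unhandled status: ${String(status)}`);
  }
}
```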
There are certainly functions written too cleverly for it to be apparent how they manage to work at all in a few lines. Sometimes by my own hand, six months ago. The solution is an unsexy one, but it always works: write a book's worth of comments near that function that explain absolutely everything, and why it was done.
>It almost always comes from a lead or manager who reads too many business books and then cargo cults those books on to the team.
Worse, they behave as though they have profound insights, and put themselves on an intellectually elevated pedestal, which the rest of their ordinary team mortals cannot achieve.
I've seen a book promoting the idea that methods should not be longer than 5 lines.
Of course, now I know these ridiculous statements come from people who hardly wrote any code in their lives, but if I'd read them at 18 I would have been totally misled.
Weirdly if you do break everything down into purely functional components it's entirely possible to uncompromisingly make every concept a few lines of code at most, and you will end up with some extremely elegant solutions this way.
You wouldn't be misled at all; it's just that the path you'd go down is an entirely different one from what you expected.
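For instance, a toy sketch of what that decomposition looks like (the log-report example is invented):

```ts
// Each concept is its own small pure function; the feature is just composition.
const lines = (log: string): string[] => log.split("\n").filter(Boolean);
const isError = (line: string): boolean => line.includes("ERROR");
const errorCount = (log: string): number => lines(log).filter(isError).length;

console.log(errorCount("ok\nERROR: disk\nok\nERROR: net")); // 2
```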
I disagree that it's an attack; I've also never heard anyone say methods should be less than 5 lines. 5 lines is an insane limit; 15 is much more reasonable. This kind of enforcement reeks to me of unnecessarily "one-lining" complicated statements into completely unreadable garbage. I mean, seriously, 5 lines? Why not 4, or 3, or 6? 15 lines of well-thought-out code is infinitely preferable to 3 different 5-line monstrosities. Who(m'st've) among us that actually writes code would preach such a guideline, and can I please see their code for reference? Maybe they are just better than us; I still don't think that makes it a reasonable general rule. And I disagree that calling that out as crazy counts as a personal ad-hominem attack against this nebulous entity.
Doing things the "right way" always introduces a shackle to your ankle. Oh, am I to package my functions as discrete packages, called via a library name, carefully crafted to install into some specific folder structure that I now have to learn and not make mistakes with? Or I can do it "improperly" and just write a function and start using it immediately.
Not everything has to be designed like some standardized mass produced part ready to drop into anything made in the last 40 years. And what is crazy is that even things written to that standard aren’t even compatible and might have very specific dependencies themselves.
If a function is longer than what I can display on a single screen, it had better be justified by very exceptional, relevant requirements, which is just as straightforward to judge for anyone with a bit of experience.
In my experience it's usually devs who do that to themselves after reading stuff on the internet and thinking, "I want to be a professional and I want to show it to everyone."
Then the rules stay, and new people just continue with the same silly rules instead of thinking about whether they are really that useful.