Generally I'm of the opinion that git hooks are not the right place to run linters or formatters.
For a JVM language, it means that you'll spin up a JVM for each step (and you can easily have 15 formatters + linters in your stack).
So the formatter is better off living in a dev-only part of your codebase, where it's always loaded, can interact with your code using _runtime insights_ otherwise impossible to gather, and is easily hackable.
I've authored this kind of solution in the past. It works great, and a proper CI pipeline (i.e. including linters) kills the other half of the problem.
Love these kinds of tools. What's the difference between husky / lint-staged and this?
It’s faster and smaller?
Quite happy with husky currently, running tslint/tests on pre-commit. Curious what the biggest benefits of Lefthook are, and whether it's a pain to switch to if the benefits really are that good.
The biggest annoyance with my current setup is on-prem CI with Jenkins and GitHub Enterprise, plus complex Docker bash scripts embedded in Jenkins. It's a pain to get lint/test results to show up in the GitHub pull request UI. Doesn't seem like this tool would make that use case faster to set up.
> Husky + lint-staged add roughly fifteen hundred dependencies to your node_modules
What on earth?
I used to work on a backend and sometimes had to test the front-end. The webpack dev server required 2 GB of RAM to boot. Sure, we all have the latest MBP, amiright, but if you have a backend stack to bring up as well, the OOM killer is right around the corner.
This seems so backwards to me. Why not do it as part of your CI process, where you're not foisting unnecessary tooling on your developers' local machines? The lag involved in spinning up and running the commit hooks is noticeable and disruptive, especially if you commit often and rebase locally.
Having to wait for your CI to tell you that you've done something wrong takes a lot longer than a pre-commit/pre-push hook.
I agree that running expensive operations on commit doesn't make a lot of sense (temp commits and whatnot), but it's nice to bail out quickly before pushing.
The article talks about a pre-commit hook, but nobody stops you from running these checks as a pre-push hook. And nobody says you need to run all the checks that you'd normally run on CI - just enough checks to get past the silly errors.
For small repos, running linters and such on pre-push is great: you definitely don't have unlimited CI agents, so a lag of 10 seconds on "git push" might save you 30 minutes of waiting for an agent to become available, and it also saves your team some good minutes lost to a silly CI job failing with "missing whitespace".
PS: if you want to skip the pre-push hook once in a while, just run "git push --no-verify"
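For reference, a pre-push setup like that can be tiny. A hypothetical lefthook.yml (tool names and globs are only placeholders, not taken from the article):

    # lefthook.yml - quick sanity checks before every push
    pre-push:
      parallel: true
      commands:
        rubocop:
          glob: "*.rb"
          # {push_files} expands to the files included in the push
          run: bundle exec rubocop {push_files}
        eslint:
          glob: "*.{js,ts}"
          run: npx eslint {push_files}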
In a real emergency situation, there are all sorts of solutions to get a fix out and fast. But that almost never happens, thus it's not something to optimize for.
That may also depend on how many commits are waiting in line. If the team is large or it's a monorepo, you need to wait for all earlier pushes to pass before your own push gets tested, which can take a long time even if a single test run is fairly fast.
I generally do it on both stages. I run pre-commit (the Python project) locally on just the files that changed, which catches 99% of problems, reformats things with Black, etc. Then, I run pre-commit again on CI, but this time it goes through all files, which catches any problems that might have fallen through initially.
This is both very fast locally and very accurate. Plus, a dev who didn't bother to install pre-commit won't ruin the entire codebase; they will instead have to go through lengthy fix/push/wait/refix cycles, so they're incentivized to install pre-commit locally and run the linters quickly.
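For anyone unfamiliar with that workflow, a rough sketch of such a setup (hook repos and version pins are only examples, not this commenter's actual config):

    # .pre-commit-config.yaml - hooks run against changed files by default
    repos:
      - repo: https://github.com/psf/black
        rev: 24.3.0   # example pin
        hooks:
          - id: black
      - repo: https://github.com/pycqa/flake8
        rev: 7.0.0    # example pin
        hooks:
          - id: flake8
    # locally, install the git hook once:   pre-commit install
    # on CI, lint the whole tree instead:   pre-commit run --all-files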
I disagree. You don't need to run the commit hook on all files, just on the files you changed. It shouldn't take longer than writing the commit message (and I wonder if there is a way to run the git hook while you type the commit message). And you should still run the same checks on all files in your CI.
The other way - committing, pushing, waiting on the CI, just to see the pipeline fail when all you want is to get a hotfix out to the server... it would make me furious.
We have two projects. In one, we have no hooks and just have CI fail if the tests fail or you forgot to lint. The other one has a commit hook to lint and a unit test hook on push.
The one without hooks is so much better in my opinion. Sure, once in a while you forget to run the linter and the CI fails, but it's worth it if the alternative is having every git commit you ever make take 5 seconds and every git push take 30 seconds (our test suite is slow).
One example of the use case, which the article hits on, is open source. Most open source projects rely on free CI resources; tying those up doing linting is a waste, and it can be rather annoying when you're doing important core work but find there is a deep queue of CI jobs full of stuff that is likely to fail linting or other basic checks.
Would be nice to have a TL;DR for us folks on mobile - I enjoy this kind of stuff very much, but for some reason it felt like a drag getting to the point.
Would be great to have seen some code examples of before/after straight away, perhaps.
Product seems neat; I will look into giving it a go on my side projects.
I think this could best be described as the TL;DR generation.
I guess the article could be more to the point, but again, there is freedom of speech and stuff; we don't all have to conform to the same paradigm, so it hardly deserves a rant.
I skimmed the post trying to figure out what it does, and the only hint I got was when it mentioned being faster on localhost. I came to the HN comments to see if anyone would mention what it's for.
Seeing your comment, I decided to read through it more carefully. I read the first 14 paragraphs several times trying to figure out what lefthook does. It wasn't until after writing most of this reply that I realized your post went on longer than that, as the picture right before round one looks like the start of a comment thread.
Certainly I should have some idea of what your product does that far into the article.
I was introduced to an important rule re. giving presentations when I was still at school:
1. Tell them what you're going to tell them.
2. Then tell them.
3. Then tell them what you told them.
Open with a clear introduction, no more than a few sentences, which covers everything at a high level and sets the scene. Then go through points in a logical order in as much detail as necessary. Finally, summarise as necessary to tie together all the points and link them back to the introduction.
If someone has to read six paragraphs just to work out what the hell you're selling then you're doing it wrong. But that will be your cost, not theirs.
> I miss those beautiful days when people could read 6 paragraphs of text without getting bored
> Not snarky at all!
This is clearly snarky; you're putting someone down by asserting they don't have the attention span to read six paragraphs and that they'd understand if they did. In reality, I read the whole thing and didn't actually get the point until the very end, which has been echoed by more than a few comments here. When offered constructive criticism to improve the presentation of a project, it's been my experience that poking at the commenter in a condescending manner is a great way to drive off potential users. Whether you intended it this way or not (I believe you did, but mostly because I myself have to work hard to keep from being defensively aggressive) is moot - this is how it was received by each of us who have commented on this specific thread, so be aware of that in the future.
Serious question: why would I want to use this instead of tests as part of the CI process? Or would the use case be to use both but just get faster feedback from Lefthook?
You should use both. Basically, it's a bad idea to push broken code to the remote, but running tests and linters on all files is too time-consuming.
So you set up lefthook to run your tests and linters only on changed files (which is fast and prevents 90% of problems), but once your code is pushed you still run CI checks on all the code to make sure that some dependency in unchanged files isn't broken.
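For example, a minimal (hypothetical) lefthook.yml for that split might be:

    # lefthook.yml - the local hook only touches what you're committing
    pre-commit:
      commands:
        lint:
          # {staged_files} is replaced with the files staged for this commit
          run: npx eslint {staged_files}
    # CI then runs the same linters and the full test suite against every file,
    # catching breakage in files the commit didn't touch.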
> Basically, it's a bad idea to push broken code to the remote
This argument is void if CI is set up to allow full testing on branches along with parallel pipelines. So I want to do something: I branch, code, and push. The server does all the funky stuff without me having to install or understand anything, which is a huge time saver. With parallel pipelines you don't even block others with this behavior.
Things not on trunk can be broken; that is exactly one of the reasons we have branches.
I agree that the docs for pre-commit are hard to follow and badly organised.
The other points I'm not so sure about. I don't find it slow, or the config verbose. Nor did it break output from commands in my experience (though I did have an issue where colorised output got disabled in a tool because it couldn't detect whether terminal colors were supported).
Your page is so long and so bad that I left it with the same assumption. Why can't you just have a paragraph explaining what it is, why it's useful, and what it supports?
If I don't use any of the existing hook witchcraft, why do I have to go over paragraphs and paragraphs of comparisons of things I am not familiar with?
It's 95% there, but English is hard… We can be helpful about it so the authors can fix it.
¶1: Meet Lefthook, the fastest polyglot Git hooks manager out there. It ensures not a single line of unruly code makes it into production. See how easy it is to install Lefthook, which was recently adopted by Discourse, Logux, and OpenStax. Lefthook works with the most common front-end and back-end environments, so all the developers on your team can rely on a single, flexible tool. And it also has emojis.
¶2: The days when a single piece of software that millions rely on was created by a single developer in an ivory tower are long gone. Even Git, universally believed to be the brainchild of Linus Torvalds alone, was created with the help of contributors and is now being maintained by a team of dozens.
¶3: No matter if you work on an open-source project with the whole world as your oyster, or your code blooms in a walled garden of proprietary software—you still work as a team. And even with a well-organized system of pull requests and code reviews, maintaining code quality across a large codebase with dozens of contributors is not an easy task.
¶4: Hooks—ways to fire off custom scripts when certain important actions (commit, push, etc.) occur—are baked right into Git, so if you are comfortable with Bash and the internals of the world's most popular version control system, you don’t need any external tools: just edit ./.git/hooks/pre-commit and put in some well-formed script that will, for instance, lint your files before you commit.
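As a rough illustration of that no-extra-tools route (the linter invocation here is just an example, not from the article), such a hook can be a few lines of shell:

    #!/bin/sh
    # ./.git/hooks/pre-commit - block the commit if staged Ruby files fail the linter
    files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.rb$')
    [ -z "$files" ] && exit 0
    bundle exec rubocop $files || {
      echo "Lint failed; fix the offenses above or commit with --no-verify."
      exit 1
    }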
¶5: However, when you work on a project, you are most interested in writing the project's code, not the code that checks it. In the world of modern web development, tooling is everything, and a myriad of tools exist for a single reason: reducing overhead and complexity. Git hooks are no exception: in the JavaScript community, the weapon of choice is Husky, with Webpack, Babel, and create-react-app all relying on this Node-based tool; the Rails-centric backend world, however, is mostly ruled by Overcommit, which comes as a Ruby gem.
¶6: Both tools are excellent in their own right, but in a mixed team of front-end and back-end developers, as Evil Martians are, you will often end up with two separate setups for Ruby and JavaScript, with front-enders and back-enders each linting their commits in their own way.
¶7: With Lefthook, you don’t need to think twice—it’s a single Go binary that has wrappers both for JavaScript and for Ruby. It can also be used as a standalone tool for any other environment.
¶8: Using Go makes Lefthook lightning-fast and provides support for concurrently executed scripts out of the box. The fact that the executable is a single machine-code binary also removes the need for handling external dependencies. (Husky + lint-staged add roughly 1500 dependencies to your node_modules.) It also removes the headache of reinstalling dependencies after each update of your development environment. (Try running a globally-installed Ruby gem with another version of Ruby!)
¶9: With Lefthook added in either package.json or Gemfile and a lefthook.yml configured in the project’s root (see examples below), the tool will be installed and used against your code automatically on the next git pull, yarn install/bundle install or git add/git commit. All with zero overhead for new contributors.
¶10: An extensive README describes all the possible usage scenarios for Lefthook. Its straightforward configuration syntax does not hide actual commands being run by Lefthook, making sure there's no funny business going on.
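A concrete example would help here; a hypothetical lefthook.yml for a mixed Ruby/JavaScript project might look like this (tools and globs are placeholders, not Lefthook's own sample):

    # lefthook.yml in the project root
    pre-commit:
      parallel: true            # the commands below run concurrently
      commands:
        rubocop:
          glob: "*.rb"
          run: bundle exec rubocop --force-exclusion {staged_files}
        eslint:
          glob: "*.{js,jsx}"
          run: yarn eslint {staged_files}

Each run line is the actual shell command Lefthook executes, with {staged_files} replaced by the matched file list.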
¶11: Discourse—an incredibly popular open-source platform for forum-style discussions—has recently transitioned from Overcommit to Lefthook and never looked back. With almost 700 contributors authoring 34K commits and counting, running linters on all new contributions is a priority. With Overcommit though, team members constantly had to remind newcomers to install required tools.
¶12: Now that @arkweid/lefthook is a dev dependency in the project’s package.json, no setup is necessary for new contributors.
§1: Lefthook halves the time that pre-commit scripts take on localhost.
¶13: The PR that changed the Git hook manager simply required changing .overcommit.yml to lefthook.yml. If you compare them, you will see that Lefthook's configuration is much more explicit, while Overcommit's relies mostly on the magic of plugins.
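To picture the difference, compare a typical Overcommit entry, which just switches a plugin on, with a Lefthook command that spells out what actually runs (both snippets are illustrative, not the actual Discourse diff):

    # .overcommit.yml - the real command hides inside the RuboCop plugin
    PreCommit:
      RuboCop:
        enabled: true

    # lefthook.yml - the command is written out in the config itself
    pre-commit:
      commands:
        rubocop:
          glob: "*.rb"
          run: bundle exec rubocop {staged_files}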
¶14: Besides changing the way the output looks, Lefthook offers a nice summary of everything it does. Lefthook halves the time that pre-commit scripts take on localhost, and increases the CI run speed by 20% (on CI environments with better support for parallel execution, the gain can be considerably more).
Well, if they're trying to get you to switch or to adopt something new, they're trying to sell. There's more to trading than just money for product/service; it could be attention, effort, endorsement, opinion, and other signalling factors.
By your definition, technically, any piece of content on the Web should be considered a sales page—including your comment, where you're selling your opinion or attention. If that's the case, does it really make sense to label something a "sales page" in a negative way?
Is there any other way to present a new FOSS project besides talking up its upsides? Trash-talk it from the start, maybe? I don't really get it.
Yep, that's kind of the point of upvotes or karma: You present your content and try to get people to buy it. Not sure where you get the negative sales page aspect though. I made no judgement on the presentation technique or quality, just that open source or not, they are still selling something.