Farm: Fast vite compatible build tool written in Rust (farmfe.org)
238 points by rk06 3 months ago | 157 comments



So, I decided to try this for a real-world project and compare the results. This is for a full production build with no cache, taking the best of three runs each.

Vite:

    2166 modules transformed.
    built in 28.21s
Farm:

    [ Farm ] Build completed in 13350ms FULL EXTREME! Resources emitted to ./dist.
    [ Farm ] Public directory resources copied successfully.
So that is 28.21s vs. 13.35s, or a 53% decrease in build times. This is certainly neat, but I'm not sure whether it is worth the dependency on a new player.
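
(For the math: 28.21 − 13.35 = 14.86 s saved; 14.86 / 28.21 ≈ 0.53, i.e. roughly a 53% cut, or about a 2.1× speedup.)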


A halving of build time, for even a medium-sized project, could be a big cost reduction. And if those builds were much longer than 30 seconds, the savings compound through a CI pipeline with many changes: much shorter cycle times, faster time to prod, and faster finding and fixing of defects, which translates to much more than just infra cost reduction. I once created a visualization of the release pipeline for a big project, showing the feedback loops that exist and the compounding effects of small improvements like that. It was eye opening for me, and for everybody.


Build time isn’t the only benefit, but consider that the author is from ByteDance and how large their projects are.

Another important callout is that Vite serves native ES modules in dev, which makes it very slow for large apps with thousands of modules. So cold start of the dev server can be important too.

In practice HMR is mostly what you need and Vite has it covered. But slow tends to accumulate slow.


I kinda prefer how Vite handles things in dev by not bundling files. I work remotely against my server, and I'd much rather download the few changed files than a multi-deca-megabyte bundle.


How can a project have 2166 modules? Sounds more like a problem in software architecture than in tooling.


Don’t know what to tell you. This is an actual web application that has seen four years of development, a migration from vanilla Vue.js 2, to Nuxt, then an integration of Vuetify 2 (which is about the worst library I have ever had the misfortune of working with, and has a giant amount of runtime dependencies).

I wouldn’t say it’s too many modules. We heavily use code splitting, so most of the time users will fetch only a few of these, and pages load quickly. Plus, there are some dependencies we cannot get rid of that are quite heavy.

From my personal experience though, it’s nothing unusual really.
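
For anyone unfamiliar, the code splitting mentioned above mostly boils down to lazy route components; a minimal sketch with vue-router (the route names are made up, not from the actual app):

    // Each lazily imported page becomes its own chunk, so users only
    // download the chunks for the routes they actually visit.
    import { createRouter, createWebHistory } from "vue-router";

    export const router = createRouter({
      history: createWebHistory(),
      routes: [
        { path: "/", component: () => import("./pages/Home.vue") },
        { path: "/map", component: () => import("./pages/Map.vue") },
      ],
    });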


Do you have an idea of what kind of functionality all of those deps provide? It would be interesting to see a breakdown of where dep bloat typically comes from.


Yes, I just cannot share the flame graph publicly, but I can describe parts of it. A big chunk is taken up by the Mapbox GL SDK (Vector maps, 35%), Sentry (Telemetry and monitoring, 14%), and Vuetify (component framework, 10%). The rest are small dependencies and application code.
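
(If anyone wants to produce a similar breakdown for their own bundle, one common way is rollup-plugin-visualizer in the Vite config. A minimal sketch; option names may differ between versions:)

    // vite.config.ts
    import { defineConfig } from "vite";
    import { visualizer } from "rollup-plugin-visualizer";

    export default defineConfig({
      plugins: [
        // emits an interactive report of what ends up in the production bundle
        visualizer({ filename: "stats.html" }),
      ],
    });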


I'm happy I never did that planned Vuetify rewrite of my Vue 2 app, dodged a bullet there.


Sure did. We're completely locked in because Vuetify encourages you to use their components for everything from layouts to buttons, their styles are written in Sass and depend on an outdated version of dart-sass, and they use a half-assed version of Tailwind utility classes that creates conflicts on every level. And since they rewrote the framework completely for Vue 3, and took special care to make sure that every single darn prop is different, a migration is pretty much impossible without touching absolutely everything.

The only way out for us will be to get rid of vuetify components, one at a time, and introduce Tailwind with a prefix, until we can finally remove that mess.

The biggest recommendation I have is to stay as far away as you can from that library. I think that has been the worst technical decision I ever made at my job.
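
(For reference, the "Tailwind with a prefix" part of that escape plan is just configuration; a sketch, assuming Tailwind v3:)

    // tailwind.config.js
    module.exports = {
      // utilities become tw-flex, tw-p-4, ... so they can't collide with Vuetify's classes
      prefix: "tw-",
      // disable Tailwind's CSS reset so it doesn't fight Vuetify's global styles
      corePlugins: { preflight: false },
      content: ["./src/**/*.{vue,ts}"],
    };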


> Vue.js 2, to Nuxt, then an integration of Vuetify 2 (which is about the worst library I have ever had the misfortune of working with, and has a giant amount of runtime dependencies).

This is my experience with Vue overall, not just its shitty ecosystem and libraries.


Heh... I'm just helping out on a TS project with 50k files/modules (over 1.5M lines), not counting the 200+ external dependencies (or their subdependencies).


That's not particularly high for a large-scale web application where every file is an ES module, especially so if it includes dependencies.


I have mixed feelings about this project. It appears the author primarily aimed to rewrite Vite in Rust and then found justifications for doing so. From my experience, Vite is already sufficiently fast for most needs. Adopting a new and potentially unstable project for marginal performance improvements doesn't seem justifiable.


Another way to view it: slow builds are a poor man’s feature that punishes you for bringing in too many deps.

The JS ecosystem (speaking as an omnivore using many languages) suffers two big problems:

1. Tooling is too fragmented and complex and poorly designed. This includes even the standard tools and formats like npm and package.json.

2. There’s a weak culture of API stability. Leave a project for 3 months and there are new config files, plugin formats, etc.

It’s improving slowly through standardization with things like ES modules. But it’s still a Wild West.


That's my main gripe about the JS ecosystem. Most of the time, I only need to compile a project (either live or as a static asset). You run "npm install" and the build tool (without any project dependencies) already comes with 200 folders inside node_modules. And those byte-sized libraries! It always feels like a hackathon scene.


The small core approach is way too unwieldy and the cruft only seems to be piling on further. There are constant efforts to fragment the ecosystem (like for example currently Deno and Bun, exciting as those projects may be on their own merit), while there is seemingly no interest anywhere in creating a standard library to solve the issue of those frightening dependency trees.


I tried building a Rust project and it failed because it consumed my entire remaining 8 GB of disk space... Good old NPM.

I tried running a Python project and the necessary apt-get pulled 2 GB of dependencies and conflicted with some of the software I use, and that's before running the pip install... Good old NPM.


There are two issues: dep bloat and “hermeticity”, if you will. Cargo is quite nice and projects build in a straightforward way. However, the Rust ecosystem sucks in terms of dep bloat, imo. And the compiler artifacts are large. Python I'm not sure about, but I thought you had a way to get hermetic environments?

Go stands out as being fast, small and very limited dep bloat, thanks to a comprehensive standard library. And very minimal config files. By that standard npm is a shit show.


Python is a mess with dependencies.


it is. just because you pushed a hack to prod doesn't mean it stops being a hack.

but that's all that younger devs know. especially people from boot camps or the CS-to-startup pipeline.


Because apparently the right size of a package is one function. /s


We could have locked down all the libraries, APIs, and tools to what they were in 1999, but then people would complain about that.

:p

But really, you can just skip it all and write your own code. People would scoff now, but that’s what we did. Well, maybe with Prototype and/or jQuery.


I miss slow builds. Take a walk. Grab a coffee. See how your colleagues are getting on.


Rollup is slow as hell. I’m working with a project where the builds take minutes on an M3 Pro.

I’d like to switch to something like esbuild, but the project uses a lot of features like aliases, so it would take some work to translate, and the lib is being moved away from anyway. If I can drop something in that speeds it up, I’m all for it.


The Vite authors are working on a Rollup rewrite in Rust to solve this. So instead of using Farm, I'd just wait.


Is this rolldown or something else?

https://github.com/rolldown/rolldown


Why not try farm though if it means faster builds now?


I compile a few hundred thousand lines of c++ in “minutes”… it’s sad to see the state of the ecosystem be actually slower than languages that have a reputation for being slow.


Ok, but C++ is a compiled language. Even though compilation speed doesn’t need to be that slow, at least you’re getting something out of it.

JS is interpreted and JITed. The whole point of using it is having a very fast feedback loop.

The tradeoff should be: faster at dev, slower at runtime. There’s something fundamentally wrong if that premise is broken.

Actually it’s insane that we need build steps with any noticeable delay during dev time at all.


Indeed.


You can try Farm. If you are using relatively simple Rollup plugins, there should be no problem: Farm is compatible with Rollup plugins.


If you find that your Vite build is too slow, you should reduce the size of your codebase.


I’ll get right on that. Let me call up my product manager and CEO and tell them to cancel our upcoming enterprise contracts. The codebase is too big, we can’t add any more features.

After that we can start laying off the devs and firing our biggest customers. The features they use are taking up too much code!


No, no, no, just implement microfrontends obviously, with 2-pizza-slice sized teams. This way you can improve all the KPIs that matter: builds become MUCH FASTER, and direct reports become MANY MORE. No manager can say no to this amazing architecture!


Serious question: does anyone here work for a BigCo who uses Chinese software like this in production? Do your InfoSec teams sign off on it?

What's the risk of downloading what's supposed to be a release binary of this off GitHub, where the release binary includes something nefarious not found in the open codebase, that steals your company's source code when you use it to compile?

In today's geopolitical climate, how does anyone trust non-Western projects?


You can't trust "western" open source projects either. It wouldn't be hard to just not mention China anywhere on the website or in the sources, so that most people would never know.


That's what the Chinese would say. The West doesn't need to insert vulnerabilities into the basic stuff since they can already front-door/backdoor with GCP/AWS and the like.


Well, this one specifically is listed under a fake company name, which raises further questions. Is this just another attempt to conceal Chinese spyware? See the discussion at https://news.ycombinator.com/item?id=40756408


Their codebase is also weirdly convoluted, with many crates and files just proxying over to others. I've read and written a lot of Rust and haven't really seen this style before. Could be nothing, but it's odd.


I would consider western and non western equally bad.


So, we have rspack, turbopack and now Farm, all written in Rust.

There is also Vite, of course, which is already quite fast and very popular.

What's the differentiator between those?


Don't forget Bun (*) ... [1]. Bun's bundler is appealing since Bun packages a bunch of other tools you already need (test runner, runtime, fast npm replacement).

Bun + Biome covers everything you need for a TS project (Biome lints and formats).

--

* : I guess it is not written in Rust but still in the same category of tools.

1: https://bun.sh/docs/bundler


There is a small difference: you also need to make your own HMR reloading script when using Bun. But it does so much more than just frontend bundling.

If you need an hmr script you can use the one I wrote for my webframework: https://github.com/spirobel/mininext

(my goal with mininext is to provide index.php like productivity but with all of npm and typescript at your fingertips)


I may be wrong here but I think https://modern-web.dev/docs/dev-server/overview/ could do server-with-HMR for a Bun .js tree? (unsure how you'd handle .ts there, sorry, still caffeinating)


the goal was to not have any dependencies.

mininext has a clean package.json

It is just 3 files of modest size that are understandable in an afternoon.

You can also copy and paste and vendor the hmr part.

Someone even forked mininext to create micronext.

My goal is to compete with php. So I added some quality of life features. But if people want to go turbo minimalistic I can respect that.

I just wouldn't add a dependency for hmr when it is so easy to do with a small snippet.
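
(For the curious, the kind of snippet I mean can be as small as this. A sketch, not the actual mininext code, assuming the dev server exposes a hypothetical /reload SSE endpoint that emits an event after each rebuild:)

    // injected into every HTML page served in dev
    new EventSource("/reload").onmessage = () => location.reload();

The server side just writes one server-sent event to that endpoint whenever a rebuild finishes.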


Sure. This was wrt "when using bun" - i.e. somebody else might prefer to just whack web-dev-server into place as a lightweight option for them.

mininext is a neat idea and I plan to read through the code (do you have a preferred 'small snippet' for HMR? I'd be interested to read that too) but I was talking more generally in my last comment.


Sorry for the silly question, I found the standardDevReloader stuff, hadn't realised it was built in. Not what I was hoping for since I want per-library reloading, but for the goals of mininext that's probably overkill anyway.


haha no worries :-D

I like to keep it simple. Right now Bun just rebuilds everything, and it is so fast that I don't even notice it.

If it ever becomes a problem I will optimize it.

The standardDevReloader is a snippet to tell the browser to refresh the page after a rebuild happened.

If you set up a fresh project with the barebones quickstart template via: bun create spirobel/aldi yournewproject

you will see a dev.ts in the project folder. Compared to the start.ts (which is used to run the prod version), it will set up this snippet so that it is included in every HTML page that is served.


And Deno! Which also has compilation tools and is written partly in Rust.


True, although it seems they deprecated their bundle command and even recommend using esbuild or rollup [1] (or an "unstable" module deno_emit).

--

1: https://docs.deno.com/runtime/manual/tools/bundler


Yep, that was a bummer for me. I was trying to start a little project where I wanted a bundler lib and started using Deno's... but quickly ran into some limitations and found out they've deprecated it and it's not even maintained anymore :( . I started using esbuild instead via its TypeScript lib, but then noticed I was spending a lot of time working around problems with the JS ecosystem (a terrible file system watcher in node.js; lots of subpar libraries, and as someone not intimate with every domain it's nearly impossible to know which library may be well written and maintained without spending lots of time investigating; weird APIs; silent errors; I just can't believe some people can cope with all of this instead of just moving to a saner language). And since esbuild is written in Go, I rewrote my code in Go and now it's much, much nicer to work on, much faster, can be shipped as a single binary, etc.

If there were a bundler written in Rust I might have chosen that: despite not having anything against Go, the momentum seems to be strongly shifting to Rust (and even in the little Go I did, I already felt the pain of its error handling), and Rust definitely gives you better tools to write reliable applications.


I’m a big fan of Bun, and having it just work.

There are still issues (like aborting a fetch, and TS5 style decorators).

But it’s fast and I have all tests run on every save, which are near instant.


Bun would be cool if it worked. At least it's not written in Rust so there's that.


Neither sentence of your post has any value to anybody, other than (possibly, dubiously) yourself. Next time, consider writing such comments in your personal journal, or perhaps just shouting them out loud when you're alone.


I’ve been using Bun for a year. It works.


I feel like it's a bit weird they didn't mention Rsbuild. Rsbuild is a "nice" wrapper around Rspack and SWC.

It's probably more holistic/batteries included than Farm, from a cursory glance. Rsbuild also aims to be a Vite alternative, but leans more into the webpack philosophy (and ecosystem) than Rollup's


Why aren’t we comparing to ESBuild?


Vite uses ESBuild under the hood.

https://github.com/farm-fe/performance-compare?tab=readme-ov...

The production build metric here would be ESBuild + a small amount of overhead from vite. (The HMR/dev build side of things is handled by Rollup instead with Vite)

IMO, Vite (ESBuild/Rollup) are fast enough. Farm will need to get some time being used + abused by OTHER people's real projects before I'll be considering it in order to save what seems like maybe 2 minutes a day.


> The production build metric here would be ESBuild + a small amount of overhead from vite. (The HMR/dev build side of things is handled by Rollup instead with Vite)

That’s completely backwards. Vite only uses rollup for production, which is why production build is slower. https://vitejs.dev/guide/why.html#why-not-bundle-with-esbuil... The plan is to replace both esbuild and rollup with Rust-based rolldown (fast, and supports rollup’s plugin architecture) eventually. See https://www.youtube.com/watch?v=hrdwQHoAp0M.


I suppose we should wait until all tools move to Rust/Zig before learning modern front-end engineering to bypass the churn phase. Till then stick to plain HTML+CSS.


I've been using webpack for the last 7 years. Major version upgrades can be a pain, but broadly it's fine. People hopping frameworks every 6 months are causing their own problems.


I fully agree with the sentiment - but for me there's also that I could remove 26 (!) dev dependencies and minutes off each build, in a medium-sized app, when replacing Webpack with Vite - so it's worth being in the loop at least to some extent.


Yep, that makes total sense. We're also looking at a Vite migration in the near future, but also making sure that whatever we do, we're leaning more on web standards and common functionality, and avoiding as much bundler specific stuff as possible.

Back in the day it was cute that you could import GLSL fragments directly into your JS code. Now, not so much.


And maybe then, we also get rid of node, and use compiled languages frameworks on the server.


For the basics you can just use Vite and nothing will change except what's underneath. It's only when you get fancy and need to write your own plugins that you'd really be affected by this churn.


Oops, yep you’re right, I mixed them up.


I don't understand what you need beyond esbuild though...

I guess it doesn't support HMR? I guess I just persist the frontend state of my app so I only need livereload...

Just seems like so much wasted time and effort on marginal things. Just use ESBuild and start building something!


Esbuild lacks the featureful plugin API that Webpack and Rollup have. It's pretty easy to avoid dependencies that require transpiler/bundler plugins and still have a great set of dependencies, but it's a deal breaker for many.


A comparison with Rolldown [1], the rewrite-in-rust of Rollup by the Vite team, would be more relevant, although it is still a work in progress.

[1]: https://rolldown.rs/


I love how they compare the loading times by making the bars in the graph render as slowly as their benchmark measurement. Really delivers the message why the faster compilation time is worth it, instead of just showing a static graph with numbers.


Indeed!

They probably took the idea from https://esbuild.github.io


Note in the benchmark, it is comparing a React JSX project using @vitejs/plugin-react (Babel based) instead of @vitejs/plugin-react-swc (SWC based).

I made the exact same point two years ago when Turbopack came up with a similar benchmark: https://github.com/yyx990803/vite-vs-next-turbo-hmr/discussi...

The point is if we want to compare bundler performance, we should keep all the non-architectural variables consistent across all implementations. Otherwise we are not comparing apples to apples.

PR submitted to update the benchmark: https://github.com/farm-fe/performance-compare/pull/11


A WeChat link under "community".

Without wanting to make this specifically anti-chinese, I really wonder whether that makes it better or worse than only having the Discord link.

I find Discord as a community platform already quite questionable. Now add to that the simple fact of fragmenting your community into two... I'm not convinced.


I’ll have to look at this over the weekend. I’m very excited to see improvements in dev UX in the JS ecosystem. I’d have a very hard time adopting this in prod tho, being so new.

Are any large projects using it? Corporation sponsors?


For the current Farm, in order to be more compatible with the front-end ecosystem, we have chosen to be compatible with Vite hooks and options. But in fact, the biggest performance problem lies in the communication between JS and Rust. For example, if a Vite plugin uses a hook to obtain the entire resource pot, that communication can take more than 10x the time. So the future path for Farm should be to avoid these problems and encourage Farm Rust plugins.


I would love to use it as a test, but there is too much of a commitment: it is not a drop-in replacement for Vite, nor is there anything in the way of docs or help for migrating from Vite to Farm.

I stopped at "create" - I'd rather "install and convert" in a branch to try stuff out.



That’s not too helpful. If the transition is really as easy as they claim, a simple console utility to generate a farm.config.ts from my vite.config.ts would go a long way.


Farm will develop an automatic migration tool later


I'm sure they will; it’s kind of low-hanging fruit. But with the fast pace of the JS ecosystem, network effects are key. So if they want to amass users quickly, making it as easy as possible to switch over could decide whether they gain enough market share to stay relevant.


Where does money come from in Farm Inc.?


TikTok NPC streams of course


Farm is not a company; the `Inc.` is a mistake. Farm is developed by a community team, sponsors welcome!


> This page crashed.
>
> Error creating WebGL context.

I'm sorry, but WebGL probably shouldn't be a requirement for your front page to display any content at all. I disable it for privacy hardening and switch browsers to opt in. Additionally, low-power clients may struggle with rendering, and it can drain battery on mobile devices.

The docs work fine, if anyone else runs into this and wants to read about the project.

https://www.farmfe.org/docs/why-farm


Minus the bar charts, the site works fine with JS disabled. If WebGL takes the whole page with it when it crashes, that's kind of on the browser to fix. Having a hard time even seeing what WebGL is used for on that page though: there's a canvas tag in the menubar, but it doesn't seem to be used for anything.


> If WebGL takes the whole page with it when it crashes, that's kind of on the browser to fix.

On the contrary, it's on the Farm team to fix. It seems their front page has only one error boundary, which is why the chart component crashing takes the whole page down. This is fixed by simply wrapping the chart component in its own error boundary. That way React doesn't unmount practically the whole page when that component throws an error.
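
A minimal sketch of that fix (the chart component name is hypothetical):

    import React from "react";

    // Catches render errors from its children, so React only unmounts this
    // subtree instead of the whole page.
    class ChartErrorBoundary extends React.Component<
      { children: React.ReactNode },
      { hasError: boolean }
    > {
      state = { hasError: false };

      static getDerivedStateFromError() {
        return { hasError: true };
      }

      render() {
        return this.state.hasError
          ? <p>Couldn't render the benchmark chart.</p>
          : this.props.children;
      }
    }

    // Usage: <ChartErrorBoundary><BenchmarkChart /></ChartErrorBoundary>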


Oh, I thought you were talking about the actual browser process crashing. But seriously, are you saying that in React, a child component that throws an error during render will kill the parent's render too? That's wacky. I do Vue for a living, which doesn't behave anything like that.

But anyway, the chart looks like regular DOM to me. The canvas is actually in the top navbar, but I don't see anything drawn on it.


Since people in the early days of React would ignore errors thrown by components (or the React library itself) and constantly run into literal undefined behavior, React eventually adopted a behavior where it unmounts the VDOM to catch the developer's attention.[0] When a component throws an error, React will unmount to the nearest error boundary. By default, this is the root of the tree. You can control how much gets unmounted with error boundaries.

I've never used Vue so I have no idea how error handling works there, but React also offers programmatic ways to recover from errors (e.g. error boundaries can retry rendering the failed component with different state or props). Doesn't sound strange to me.

[0] https://legacy.reactjs.org/docs/error-boundaries.html


In Vue, if a component's setup or render functions crash, the component and its children just don't render and the parent is unaffected. It doesn't unmount components that crash later either. I guess that means Vue has an implicit error boundary around every component, and if you want different behavior, you set the errorCaptured hook to do something else.
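
A sketch of what opting into different behavior looks like with the Composition API:

    import { onErrorCaptured } from "vue";

    // Called inside a component's setup(); returning false stops the error
    // from propagating to parent components and the global handler.
    onErrorCaptured((err) => {
      console.error("descendant component failed:", err);
      return false;
    });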


Yeah, the error message I posted is rendered into the DOM by the web framework.


Is it like a drop in replacement? I’m working with a semi-legacy codebase that takes over a minute to compile and copy with YALC, so if this can squash that time to something more reasonable I’ll be extremely happy.


The scrolling is jumping around like crazy on iOS


Some of the content is also unreachable on Android.


Do I need a different npm package for this on ARM Macs vs my colleagues on Windows boxes?


No, npm packages usually ship with binaries for multiple platforms. Some of them will try and build something dynamically as well.
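
The common pattern (hypothetical package names, but this mirrors how esbuild/swc-style tools publish) is one optional dependency per platform, each marked with the os/cpu it supports, so npm only downloads the matching binary while the lockfile still lists all of them:

    {
      "optionalDependencies": {
        "some-tool-darwin-arm64": "1.0.0",
        "some-tool-linux-x64": "1.0.0",
        "some-tool-win32-x64": "1.0.0"
      }
    }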


So the npm package lock file will be the same on all platforms?


Yes


Can someone with expertise in frontend development help me understand the proliferation of similar tools in the field?

What specific optimizations do these tools offer that others can't, and why can't the slower tools simply adopt these optimizations in future updates?

Is this a result of a lack of consensus within the community, leading to the creation of new forked tools, or is there a larger strategic context that I'm missing?


I have recently written a broader exposition on frontend build tooling, perhaps it will be useful: https://sunsetglow.net/posts/frontend-build-systems.html.

The performance gains in the recent past have mostly been due to moving away from single-threaded JavaScript to multi-threaded compiled languages. This requires a complete rewrite, so existing tools rarely take this step. We see this optimization in Farm alongside "partial bundling," which strikes a performance-optimal balance between full bundling (Webpack) and no bundling (Vite) in development.

Vite abstracts over a preconfigured set of "lower-level" frontend build tools consisting of a mixture of older single-threaded JavaScript tools and newer multi-threaded compiled language tools. Vite can adopt the partial bundling of Farm, but dispensing with its remaining JavaScript tools is a major breaking update.


> The performance gains in the recent past have mostly been due to moving away from single-threaded JavaScript to multi-threaded compiled languages.

This is overly simplistic. Parcel had far better performance than Webpack before they added native code or threading.

Webpack remained slow because it didn’t have a good public/private interface for its plugins, so the changes that could be made were limited.

> Vite can adopt the partial bundling of Farm, but dispensing with its remaining JavaScript tools is a major breaking update.

Turbopack and Parcel both have excellent performance without any compromises to their bundling strategy. Vite skipping this likely just simplifies its architecture. Bundling creates an opportunity to be slow, but it doesn’t necessitate it.


There is also Rolldown (Rollup compatible) for Vite, which is written in Rust and still in active development.


Busy reading this and it's great so far. One comment: you've referenced "tree shaking" a few times without explaining what it is. I think I know, but it might help others to explain it before you reference it.


I would just use "dead code elimination" instead. They're the same but that actually tells you what it is.


Since tree-shaking is a common term across frontend build tooling documentation, I adopted it as well.

Dead code elimination in its traditional form also runs during code minification, which is a separate build step from bundler tree shaking. Having separate terms avoids ambiguity.


Tree shaking is the removal of unused exports, a very specific thing for JS. Dead code elimination is a broader term which includes tree shaking, but is usually used for the elimination of code after the compilation (or transpilation/minification in js/ts case) in the front-end world.

A practical example would be that tree-shaking wouldn't remove the variable b in "export function foo(x) { let b = 10; let a = 20; return x + a; }", but if this export isn't imported anywhere, then it would remove the whole function definition. Uglify.js, which does dead code elimination, would remove the unused b variable.


Sure, tree shaking is just a very basic dead code elimination algorithm. But there's no reason to give it such a prominent and confusing name. Just call it "basic dead code elimination"! If you must be specific (why?) call it "dead export elimination".


I don't disagree with you, but on the other hand, it was really a hard problem in JS (partly because functions carry outside context, and mainly because of how mutable modules are, or were, with CommonJS), so it became a huge race for optimization. Now it's really mostly dead code elimination because of how much saner ES modules are, yet the name stays. But hey, we also don't call televisions "bigger monitors with built-in spying OSes", names have a tendency to stick :)


Thanks for the feedback!

I struggled with the ordering since the sections are somewhat mutually dependent; this is arranged more like a thematic history than a chronological one.

Tree-shaking naturally fits under bundling, and I'm afraid that explaining it earlier will make tree-shaking's explanation itself contextless since without bundling there is nothing to tree shake.

I can hyperlink those references to the tree-shaking section tomorrow so that there is an action for the confused.


Thanks. I did see that it was covered later as I continued reading the article, and looking back I see that it's in the TOC as well, so maybe I just didn't pay enough attention. I think just hyperlinking to that section would do the trick (maybe with a small note like "(covered below)" on the first occurrence, but that might not be necessary).

Thanks for the response and more importantly the article! It covered exactly all the points that were opaque to me about frontend build processes. I've also forwarded it to a couple of other backend devs that I know.


What helps me in writing on complex things is introducing a concept with a simple (perhaps even slightly wrong) explanation when you need it, before explaining it in greater detail and clearing up any prior simplifications when the reader is able to grok it fully.

Not to take away from the excellent writing in your original essay!


Thank you for this. This looks like just what I was looking for!


Want to understand this as well.

I was looking at using Rust for some static analysis of JavaScript files recently. My impression is that despite there being multiple parsers written in Rust -- the ones in swc, rslint and maybe others -- they are not as production ready as something like acorn, in terms of passing test262, features, documentation, and simply the amount of example code out there. There are bugs like https://github.com/swc-project/swc/issues/2038 that haven't been fixed after a few years. (I am aware that swc is "production-ready" in the sense that it is already used by Next.js, but that's not a high bar if you look at all the fallbacks and limitations)

These projects also seem to be very broad but don't have enough developers to support them. They are aiming to provide an "all-in-one" solution with the parser, transpiler, bundler etc all in one place. Which means they have perhaps too much work to do. While in the current JavaScript ecosystem, there are many projects for each part of the toolchain, the unexpected upside is that there are many people working on one thing to make it really good. Just look at webpack. It's not the best thing out there, but the sheer amount of features and the support for almost every use case is amazing. For parsers, acorn is very good, because the author can focus on the parser instead of also working on the bundler.

As an end user, I would much rather see effort concentrated to make the toolchain really good to make it more production ready, rather than seeing another tool popping up.


> There are bugs like https://github.com/swc-project/swc/issues/2038 that haven't been fixed after a few years.

I tried all those examples on the swc playground[0] and all of them pass, so the issue seems to be fixed, but maybe they forgot to close it. By the way, swc, oxc, and biome all pass test262.

[0]: https://swc.rs/playground (version 1.6.5)


Their documentation says "passes almost all tests": https://rustdoc.swc.rs/swc_ecma_parser/

Which could be outdated as well.

But if that's true, that's the problem, isn't it? If some of the most important information about a critical part of the toolchain isn't up to date, how can I have confidence in this?

(Also for my specific use case, I need to set an ecmascript version, something that is common in JS based parsers, but it's not available in swc ecma parser. At least I didn't find it.)

For context, I contributed to acorn to fix an issue I found while using it in one of my projects. But that happened because I extensively used acorn in the first place. I don't quite have the confidence to adopt these Rust based tools yet.


> They are aiming to provide an "all-in-one" solution with the parser, transpiler, bundler etc all in one place. Which means they have perhaps too much work to do.

They seem to re-use quite a lot. I don’t think any of them, besides ESBuild, roll their own transpiler. For example, Farm uses SWC:

https://github.com/farm-fe/farm/blob/main/crates/toolkit/Car...


+1 for this.

I've just watched some videos on this but it's still nebulous to me. It has something to do with hydration, and bundling only components used in the app, but making it compatible with ES modules while also allowing for hot reloading of your app during development, ... or something like that.

I'd like to do more web dev but I find it very difficult to get started in the space as someone with 20 years dev experience in backend and data.


It's mostly a change from JS to Go and Rust. For older tools it's harder to migrate towards Rust/Go because their ecosystems (plugins etc.) are built around JS.

However, Vite currently consists of different build tools for prod and dev (Rollup and esbuild), and they're working on replacing both with the new Rust-based Rolldown.


Rspack supports JS plugins made for Webpack. I even think there is some overlap of the dev team.


There is webpack: it's bloated, tedious to configure, slow, and makes breaking changes to its configuration with every version. The configuration is basically write-only; you don't touch it and hope it still works after a minor upgrade. It does cover the whole development cycle, from a dev server with hot reload (which on a typical FE project is subtly broken half the time). Then there are wrappers for webpack that provide sane defaults to the end user. Then there are library bundlers, which don't solve the same problem as webpack.

There is also Vite, which doesn't bundle for the dev environment but serves individual modules separately, and is fast.

Why is webpack bloated, you ask? Well, because there isn't a single JavaScript, but a mixed bag of ESM, CommonJS, Vue, TS, CSS, SCSS and whatnot, used in combination with one another (you can have a Vue file that contains a TS script and SCSS styles, and a Vue file with JavaScript that imports a TS file and CSS modules). Webpack tries to solve all that, plus caching and the developer webserver, but relies on plugins to do a lot of the work. So yeah, duct tape, a lot of it.
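
A minimal sketch of the kind of config that results (the usual loader names; incomplete, e.g. vue-loader also needs its companion plugin):

    // webpack.config.js
    module.exports = {
      module: {
        rules: [
          { test: /\.vue$/, use: "vue-loader" },
          { test: /\.ts$/, use: "ts-loader" },
          { test: /\.scss$/, use: ["style-loader", "css-loader", "sass-loader"] },
        ],
      },
    };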

Then somebody decides it's enough and tries to rewrite the whole thing in Rust because it's faster, implements 80% of their own needs on day 1, and gets burned by the long tail of annoying issues.

Since it's not a product in itself, somebody has to sponsor it, or make it an in-house thing and open-source it. In-house maintainers could move on to the next bigcorp, and the new shiny thing won't go forward with the original vision and the same enthusiasm.

So yeah, welcome to frontend. It's like linked lists, buffers, or Unicode in C, but with the complexity of a distributed org, while trying to be as cool as Erlang at the same time.


I honestly think it's largely just wanting to be known as the author of the popular framework/bundler/engine/linter, blah. And the JS community attaches a lot more importance/status/recognition to that (or uses social media to talk about the ecosystem more, and that leads to the former?) than happens in other languages.

Rust and Ruby too, to a lesser extent. But you don't see this nearly as much in Python or C++ or Go IMO. (I can name Guido van Rossum and Herb Sutter. But who wrote pytest for example? If it were Javascript and I used it I'm sure I'd know. And fwiw I've worked professionally with python for years and not really JS much at all, but I'd recognise more JS names and hear about people far more for JS than python.)


Basically everyone is rediscovering the build tools and minifiers used by compiled languages, which we lost when everyone decided they had to be written in Node.js instead.

Similar to RSC rediscovering JSP, ASP.NET, PHP,...


Is it copyrighted to a fake company? I'm not seeing a "Farm, Inc" in EDGAR search.


Not every company is American.


The main website is referring to two languages, English and Chinese.

They have a WeChat group, and a QQ address. So I’m going to assume they are based in China.

This Wikipedia article https://en.wikipedia.org/wiki/Incorporation_(business) has a list of abbreviations used for incorporated businesses.

For example in Norway we use “AS” (Aksjeselskap).

In Germany GmbH (“Gesellschaft mit beschränkter Haftung”, limited liability company), and AG ("Aktiengesellschaft", business association with shares), are the most similar to the corporations in the US.

In the UK they have a bunch of different ones.

China uses WFOE (or WOFE) to refer to a Wholly Foreign Owned Enterprise. This is the most popular form of business entity for foreign investors wanting to set up a company in China; it is a limited liability company. The article doesn’t mention what Chinese-owned companies in China use as an abbreviation.

In fact, on the whole list as far as I can see it seems that only three countries use the abbreviation “Inc”: USA, Canada, and the Philippines.

The article as a whole only mentions a few countries though, so I had a look elsewhere as well.

Another website talks specifically about Chinese company structures. https://learn.sayari.com/understanding-chinese-corporate-str...

> In China, the limited liability company (LLC; in Chinese, 有限责任公司 or 有限公司) structure is generally for smaller and less restricted companies. Chinese LLCs may not have more than 50 shareholders

> company limited by shares (股份有限公司 or 股份公司) structure is generally used by larger companies, including publicly traded companies (which must be companies limited by shares)

> One-Person Limited Liability Company (一人有限责任公司) This type of corporation has similar rights and responsibilities to a standard LLC, but may only be established by a natural person.

And quite a few different ones in addition to those, but none using “Inc” as abbreviation.

Back to the topic of countries where Inc is used.

1. United States: The most prominent user of "Inc." for incorporated entities.

2. Canada: "Inc." is commonly used alongside "Ltd." (Limited).

3. Philippines: Companies frequently use "Inc." to indicate incorporation.

4. Australia: Although "Pty Ltd" (Proprietary Limited) is more common, "Inc." can also be used for non-profit organizations.

5. New Zealand: Similar to Australia, "Inc." is used for non-profit entities.

6. Japan: The term "Kabushiki Kaisha" (K.K.) is the standard, but "Inc." is sometimes seen in international contexts.

7. South Korea: The term "주식회사" (Jusikhoesa) is typical, but "Inc." is occasionally used for international recognition.

8. Taiwan: Companies might use "Inc." in English contexts, though the local term is "有限公司" (Youxian Gongsi).

These countries utilize "Inc." to denote an incorporated company, often within international business contexts.

Anyway. If they really are an incorporated company I think it would be helpful to mention what country they are incorporated in, and provide some kind of registration number that you can use for looking up details about the company. And conversely, if they are not an actual incorporated business then don’t pretend to be.


Yeah it doesn't exist under that name in the US or China, so the obvious conclusion is that it's a fake name given to make it look more American.


Their GitHub Org profile, https://github.com/farm-fe, marks them as Chinese. Maybe search via these references to find their legal entity.


As far as I can tell from the gsxt gov search, it does not exist in china either. Weird.


Any tool that wants to replace esbuild for me will have to write as good, or better, release notes. The cost of broken tooling and bad change logs (in other words, bad versions) are many orders of magnitude more expensive than the cost of staying on esbuild.


It looks promising. Just a side note: the pink on white background was really hard to read, especially on phone screens.


idk if it's just me but their website is falling apart on mobile


I had the same experience. The page was jumping around and made it very difficult to read individual sections/bullets. I know it's tangential but it's not a great look for a FE tool.


Yep. Same for me. I was scrolling, trying to read, and it made a big jump on its own for no reason. That’s a bad look in any case, and especially so when it’s the website for an FE tool.

Using Safari on iOS.


Didn't seem too bad. The header and some buttons on the main page are misaligned but the docs seem fine


[flagged]


Hello Battlestar Galactica fan


care to elaborate?


I'm not OP, but they are probably referring to the endless cycle of build tooling produced in the JS/TS ecosystem. Due to a multitude of reasons, the ecosystem produces a lot more tools with a much faster lifecycle than many other languages.


JavaScript be crazy. So much success. So much pain. Tools. Tools. More tools, please.


Pain? Every new tool brought less pain and more performance for me... Lately it's just an "npm install" and I'm migrated, no configuration needed.


I'm talking about the pain at any point in time, p(t), over the long and complicated history of JavaScript. It looks like you are talking about the derivative of pain over time: dp/dt.


Well my pain got very significantly lower when I started to use React instead of jQuery and Ext.js, and then even lower with TypeScript. Since then the pain has been so low I ditched C# and switched to TS full-stack. Nothing I tried since then proved less painful.

It's not as good as it was in the Delphi days, but that's more because of the many different target platforms and device form factors than the technology itself.


I think we have much in common. Yes, I remember ExtJS. But not Delphi.

Yes, I remember the innovation of React and how it was a huge step forward relative to jQuery.

Roughly ten years ago, I used Clojure often, so I got to see the emergence of Om, then Reagent, and more. Somewhere along that time I saw Elm. Now, I see the innovation of Dioxus in the Rust + WebAssembly world. These developments were/are impressive. Part of fully appreciating them means embracing what they do better than JavaScript -- and many of these differences are incompatible with JavaScript's evolution. In a word, these other technologies are better largely because they have found a way to "break free".

If I were to try to "make a point" it would be probably be this: JavaScript, at any point in time, has been both useful and painful. Along the way, it has evolved remarkably well. The pain has lessened. With the birth of TypeScript, the experience is even more improved. But nevertheless, even with all the improvements, in some ways we're still paying down the "tech debt" rooted in some early decisions.

Some of the pain isn't necessarily around fundamental tech debt as much as community splintering. For example, I have a hunch, though I'm not an expert, that the Tower of Babel of JS tooling (such as bundling systems) may be resolved at some point. But I don't think this is likely until something else happens...

I look forward to a future where WebAssembly affords a direct, first-class, interface to the browser and DOM that provides high-quality, zero-cost abstractions. As great as WebAssembly is, there is still a considerable kludge in terms of how data is moved around at the interface between JS and WebAssembly. When this happens, it will arguably be a tremendous achievement -- to decouple web applications from the unnecessary constraints of JavaScript's historical choices.

I have a hope/prediction that stronger "competition" at the level of interfacing between the browser and the choice of language will spur better thinking and tooling. This gives me hope that the complexity of TS compilation + JS bundlers can give way to some elegant tooling, more along the lines of what the Rust community has achieved with Cargo.


Yep, agreed, also excited about Wasm, it's taking so long though... I was so excited I gave a series of talks at local tech conferences about it... In 2017.


I would rather have V8 JIT improvements and code caches, than the whole RIIR.

Maybe instead of using nodejs, use Go, Spring, Quarkus, Micronaut, ASP.NET, Axiom...


[flagged]


Languages have more characteristics than what can be reasonably included in a headline.

"Fast to run, but slow to compile and needs very new compiler, and may have a big-ish executable, but OTOH it won't cause much problems with installation of dependencies - vite compatible build tool"


Not sure what you’re implying. I’m an end user of a product, I couldn’t care less what time it takes to compile the final binary.

Or this thing exists to attract enthusiastic Rust devs to contribute to the project?


"Written in Rust" is often a shorthand for "having performance as a goal" when the tool is new and the target audience is mostly technical and made of early adopters, or people willing to try or contribute to new things.

Perhaps you're not the target audience at the current time.


It's a social filter:

- it brings the attention of people liking the tech

- it associates the tool with the values that come with the language and community

- it's a shortcut for explaining things you know most people will assume about the stack


I am not a huge fan either, but it's at least meaningful in the sense that well-written, fast JavaScript code is still a whole distinct level slower than well-written, fast native code. Other reasons to care include that native code sometimes solves a class of deployment issues (single binary vs. npm package). So while I don't care about Rust per se, the signal that it's compiled/native code is still alluring to me, so I welcome this in the headline.


"Written in X" implies a number of potentially desired characteristics of a program.

> I’m an end user of a product, I couldn’t care less what time it takes to compile the final binary.

Then consider if HN is the right forum for you? ¯\_(ツ)_/¯


That's a bit narrow minded, there are people other than programmers who browse HN.


Of course, we all know that.

But what's your suggestion exactly? And if you have one, did you bring it up to the moderator team?


None; my point is that it's a silly thing to say to someone who's not interested in the compilation time. Why ask someone to reconsider whether they are in the right forum because of such an opinion? That's not very welcoming.


I disagree, it's not silly. I love it when these things are specified, they help give me context and sometimes are a decider whether I start reading at all.

HN is too broad a forum nowadays and apparently you can't make everyone feel welcome. Which is fine but I'll not accept "silly" or "could not care less" labels. They are subjective and attempting to present them as facts is not productive.


Your comment could be rephrased as:

  fn main() {
    println!("I really dislike trend of “written in X” in a headline. What difference does it make if a build tool is written in X? If X entails “fast, takes small amount of resources and reliable” - then just say so.");
  }


Your comment did not contribute anything new; you just wrote what the comment above said in some code.


This thread sums up most reactions to "X, written in Rust" articles in recent times.



