Size of node_modules folder when installing the top 100 packages (size-of-npm.netlify.com)
201 points by vnglst on Jan 9, 2020 | 147 comments



I'm using Mithril on my current project and it includes everything you need in less than 10kB gzipped (vdom, components, router, http client).

Another huge benefit is that it simplifies your application code immensely. Your store and models are simply vanilla JS, since Mithril doesn't have reactivity, observables, etc. Mithril will simply redraw all the vdom whenever a user event happens or when its router/HTTP client does something. You can also call m.redraw() if you need the UI to update when, for example, you receive something from a WebSocket.
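In practice it looks something like this (a minimal sketch; the store shape and WebSocket endpoint are just illustrative):

    // plain JS state, no observables or reactivity wrappers
    const store = { messages: [] };

    const App = {
      view: () => m("ul", store.messages.map(msg => m("li", msg)))
    };

    m.mount(document.body, App); // Mithril redraws automatically after DOM events

    const ws = new WebSocket("wss://example.com/feed"); // hypothetical endpoint
    ws.onmessage = e => {
      store.messages.push(e.data);
      m.redraw(); // manual redraw for events Mithril doesn't know about
    };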

Obviously with this approach the performance cannot be as good as something like Svelte or Inferno, but it's significantly faster than React.

https://krausest.github.io/js-framework-benchmark/current.ht...

Personally I use it with JSX since I can't stand the hyperscript notation.

It's certainly not for everyone and all use cases, but I think it's totally valid for a good portion of front-end work if you can handle a double-edged sword.


>The benchmark was run on a Razer Blade 15 Advanced (i7-8750H, 32 GB RAM, Ubuntu 19.10 (Linux 5.3.0-24, mitigations=off), Chrome 79.0.3945.79 (64-bit))

i.e. one of the fastest consumer machines around. I wonder how much better the web would be if more people targeted a $300 laptop instead of a $3000 one. Or probably a $30 Android instead of a $1000 iPhone.


Yeah but the point is comparing the frameworks among themselves.


And those results might be different on smaller machines than on bigger ones, which have different tradeoffs. Scalability isn't linear.


> Scalability isn't linear

Hmmm would be interesting to do the benchmarks on different machines and see if you are right.


Awesome until you look around and notice you’ve fallen into a pit of despair (tech debt) and have no ecosystem to leverage your way out of it or people you can hire that want anything to do with it :)

There need to be concerns beyond pure speed and package size dictating tool choice. Sometimes real-world practical constraints must be considered.


> you’ve fallen into a pit of despair (tech debt) and have no ecosystem to leverage your way out of it or people you can hire that want anything to do with it

It sounds like you're saying that the way out of tech debt is to install more 3rd-party packages and/or hire more people; I hope you meant something else.

I use and recommend React at work because of the hiring thing, but for my own personal projects I stick with Mithril. With JSX it's not hard to switch back and forth between the two, at least with TypeScript there to tell me when I accidentally use HTML in my React JSX, e.g. the class attribute when React wants the className property. (With Mithril you use attributes that map exactly to HTML attributes)
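The difference is tiny but TypeScript catches it; roughly (an illustrative snippet):

    // Mithril JSX: attributes map to HTML, so `class` is fine
    const MithrilButton = { view: () => <button class="btn primary">Save</button> };

    // React JSX: DOM properties, so it has to be `className`
    const ReactButton = () => <button className="btn primary">Save</button>;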


You don't need much else besides Mithril and maybe a css library.

As for hiring, I agree somewhat, but anyone with a couple of years of front-end experience should be able to pick it up in a matter of hours.


You’re saying people don’t want to work on vanilla JavaScript? Really?


I’m saying check Mithril npm packages vs react/vue packages, contributed by the people who wanted to work with each framework. Some ecosystems have more work to pull from, more people active and experienced, and more established patterns & tools.


I've been very happy recently with using the Vue.js ecosystem with the standalone vue.js and a few others directly <script>'ed from my applications. Usually vue.js, vuex.js, vue-resource.js and maybe vue-router.js are all the dependencies I'll have to add. This avoids the npm circus with the (from my point of view acceptable) downside of, for example, not having single-file components. The advantage is a minimal set of dependencies of ~70kb compressed JS added in total and a community where most of the questions I had are answered somewhere.
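Since everything is loaded as globals, the app code stays plain JS; a minimal sketch of that kind of setup (Vue 2 era, names illustrative):

    // assumes vue.js and vue-router.js were included via plain <script> tags, no npm
    var router = new VueRouter({
      routes: [{ path: "/", component: { template: "<p>Home</p>" } }]
    });

    new Vue({ el: "#app", router: router });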


Eh, I wouldn't equate technical debt with having less mind share. That's like saying code is worse because it's more concise and has fewer moving parts.


And then there is https://svelte.dev/ which does not even have a vdom. And Rich Harris (the author) has pretty strong opinions about it. Check out his talks on YouTube, worth it.


Svelte has many great things going on. I have dabbled with it and will probably use it more seriously on my next project.

My only annoyance is that I've grown accustomed to making micro components with React, Inferno, and Mithril in the same file. Stuff like buttons and such. With Svelte (and Vue when using SFCs) you need to create each component in its own file, which becomes tedious for more complicated views.


Sapper is worth a look for SSR folks

https://sapper.svelte.dev/


Unless there's more to the story than the redraw methods you described, I would suspect that this approach would not be significantly faster than React if your React components correctly use shouldComponentUpdate and React.memo.
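For reference, "correctly use React.memo" amounts to roughly this (illustrative):

    // re-renders only when `item` changes (shallow prop comparison by default)
    const Row = React.memo(function Row({ item }) {
      return <li>{item.label}</li>;
    });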


Right, except that in Mithril and other libs you don't need to do that to get similar or better performance.


This seems odd. If this were the case then react with only functional components should be just as fast, but it isn’t.


For people like me who like the hyperscript syntax, or have to use it, I've found this site very useful for converting existing HTML templates more quickly: https://arthurclemens.github.io/mithril-template-converter/i...


Glad to see someone mentioning Mithril. I love it, and it's starting to get real-world use cases such as Lichess.


> will simply redraw all the vdom whenever a user event happens

That's going to be painful when I want to filter a live search result page with more than just a few results.


Vdom is not dom. Mithril maintains a previous copy of the vnodes (POJOs) and only patches the corresponding divs when the new vnodes differ. Comparing and disposing of POJOs is cheap. Producing vnodes is cheap too, unless you do heavy calculations at render, but that's true for any framework.
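A conceptual sketch of the idea (not Mithril's actual implementation):

    // compare cheap plain objects first, touch the real DOM only on a difference
    function patchText(el, oldVnode, newVnode) {
      if (oldVnode.text !== newVnode.text) {
        el.textContent = newVnode.text;
      }
    }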


The vdom is not the dom.

I'm using it right now with a gallery of about 2000 items with filtering and search. No problem on an i5 desktop.


Have you tried in IE11? I think that’s the true test of speed for any webapp now.


Why would you do that in 2020?


I thought it might be (mildly) interesting to keep track of how the npm node_modules folder grows over time. So I made this website. Source code: https://github.com/vnglst/size-of-npm/
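The measurement itself is conceptually just summing file sizes under node_modules; a simplified sketch of that idea (not the repo's actual script):

    const fs = require("fs");
    const path = require("path");

    // recursively sum file sizes under a directory
    function dirSize(dir) {
      return fs.readdirSync(dir, { withFileTypes: true }).reduce((total, entry) => {
        const full = path.join(dir, entry.name);
        return total + (entry.isDirectory() ? dirSize(full) : fs.statSync(full).size);
      }, 0);
    }

    console.log((dirSize("node_modules") / 1024 / 1024).toFixed(1) + " MB");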


It looks like the top 100 packages are stored in size/package.json? How do you calculate what the top 100 packages are and how do you update the package.json?


I find this post a bit ironic considering you built a 1.1 MB React app to display an image and a bit of text (and the image itself isn't even included in the repository).


Dear lord it even loads a service worker so it can cache the app.


That is included by default in create-react-app.


It is included but disabled by default[0].

[0]: https://create-react-app.dev/docs/making-a-progressive-web-a...
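From memory, the generated src/index.js in a react-scripts 3.x project ends with something like this, so opting in is a one-word change:

    import * as serviceWorker from './serviceWorker';

    // opt-out by default; switch unregister() to register() to enable offline caching
    serviceWorker.unregister();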


Where are you getting 1.1 MB from? I'm getting 223 KB over the wire, which includes a 100 KB image.


I cloned the repository for the site. Over-the-wire, though, I do get around 227 KB.


I don't find it ironic anymore; I find it to be the state of affairs in modern development environments. I'm always surprised how quickly people jump to using massive scaffolding, systems, or systems of systems to accomplish things that need about two orders of magnitude less complexity.

Then again, it seems people that develop these monstrosities are never the ones to maintain them.


You're missing out the bug picture. All my projects started small. But day after day, you need a new feature and a new thing. It will grow. At that point you need to add these tools yourself, manually.

I'd rather have a standard baseline to be up to speed and not get bothered with this on each app.


> You're missing out the bug picture.

Is this a Freudian slip? The amount of bugs in "modern web apps" seems rather huge indeed.

> I'd rather have a standard baseline to be up to speed

To me, the baseline for any kind of website or web app is, you know, HTML. When authoring static HTML or rendering it dynamically on the server, I'm up to speed instantly.


The graybeards called that "creeping featuritis."


Don't underestimate managers who "encourage" unnecessary complexity to boost their own profile. Their direct reports have to "design" and then implement and maintain them. Then the manager gets a higher profile for managing teams that build and design "critical", complex systems.

There is a lot of blame to go around in this industry--it isn't just individual engineers acting out of ignorance or resume building.


Built-in job security: if not for the original dev, then for those who come after.


I mean that's smaller than the average of the top 100 modules.


Yarn folks try hard (it seems) to get rid of local node_modules and link directly to user-wide multi-version repos, but they met resistance from tool maintainers whose tools just walk that dir directly. I can't remember the specific details now, but Yarn PnP didn't work for me, because of browserify and co apparently.

As for the cache size, I simply do not care. E.g. installing gnome, kde or openoffice requires downloading hundreds of megabytes of binaries and everyone is okay with that. If you're not familiar with these packages, they draw large UI controls, copy files and render bad tables. Not much different from a regular node project.


The big difference, though, I'd say, is that software from those respective foundations is generally better tested, better documented, and less likely to have security holes than the average npm-installed module.

I don't think they ever got "left-padded" as they say.

Not that your overall point is wrong (it most definitely is not, in my opinion). I think the reality is that this is what actually drives the concern about the size of the node_modules folder: deep down, all frontend developers know that so many of these packages have never been vetted and probably won't be vetted, there is a lot of duplicate functionality across these dependencies, and the libs themselves often aren't designed to be efficiently consumed. Other communities (like Python) have been having community-level conversations about the size of their respective package folders, the validity of reproducing their builds, security, etc. for a while now, and the governing bodies for those communities (and/or languages) are working hard to reduce redundancy among packages and encourage cooperation (the PSF, for instance, has directly funded or helped facilitate funding for many keystone packages, like Flask, IIRC. It is also my understanding that Microsoft has been doing this, and is expanding its efforts to do so, via https://dotnetfoundation.org/ among other efforts).

Sadly, I have not seen the same effort within the NPM/frontend focused ecosystem. Though, maybe the open JS foundation will take off: https://openjsf.org/projects/


> E.g. installing gnome, kde or openoffice requires downloading hundreds of megabytes of binaries and everyone is okay with that

Come on. Gnome and KDE are not equivalent of a node project, they're an equivalent of a browser (or at least browser chrome). And OpenOffice does quite a lot more than pretty much any Node project out there.


That depends on where you draw the line. If gnome is not gtk/glib, and if kde is not qt, then it is apples to oranges. If they are, then your comparison is correct.

I mean, making the chrome is not a big issue if you've got yourself webkit/blink/presto/etc. It is a regular program, a pretty stupid one, if you think about its UI out of the box.


npm has been doing work in that area as well: https://npm.community/t/tink-faq-a-package-unwinder-for-java...

(That said, I think the people working on that have since been laid off or quit, so not sure what the current status is.)


Maven figured out how to make this work almost 20 years ago.


It’s not as hard to figure out how to make this work, as to negotiate it with everyone once the practice is established. Yarn pnp works, but its side-effect is empty node_modules, which breaks everything else.


Fwiw our PnP implementation is now much better than a year ago (it was still experimental, after all!). Expect to see more in this space in the next few weeks...!


I used create-react-app recently to get up and running with React to learn more about it and try using it. It works as advertised. However I also noticed that I now have ~1500 dependencies in node_modules and it weighs in at about 214MB. I thought that was mildly interesting. Magic ain't cheap.

EDIT: Silly me, I was looking at the audit number. It did seem ridiculously big.


Personally after having been bitten by some scaffolding tool I now configure everything from scratch. Webpack is not really as complicated as it seems and I prefer having a tailored suit instead of some generic pret a porter solution.

Whenever I start working with a framework I create my own starter kit which I reuse, I don't start from scratch on every project.


I have used CRA for so long that I have forgotten how to manually setup Webpack.

Will try to relearn again. Hopefully, the new versions are not that bad.


> Hopefully, the new versions are not that bad.

They are marginally better, but the user experience is still horrendous. I think a large part of it is opting for massive nested configuration objects instead of something more similar to Gulp. You have to keep mentally parsing the object to figure out exactly what webpack is going to do.


It also includes Webpack, which does tree shaking before the final build (aka minification). It will only include the dependencies that your code needs, which should significantly reduce the JS artefact size.


create-react-app includes a lot of stuff you don't need. As a data point, the ejected webpack config is 600+ lines with around 5 helper scripts. My own webpack configs are generally closer to 100 lines with no supporting files needed.
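Something in this spirit covers a typical app (an illustrative sketch, assuming babel-loader and the CSS loaders are installed; obviously not CRA's config):

    // webpack.config.js
    const path = require("path");

    module.exports = {
      mode: "production",
      entry: "./src/index.jsx",
      output: {
        path: path.resolve(__dirname, "dist"),
        filename: "bundle.[contenthash].js"
      },
      resolve: { extensions: [".js", ".jsx"] },
      module: {
        rules: [
          { test: /\.jsx?$/, exclude: /node_modules/, use: "babel-loader" },
          { test: /\.css$/, use: ["style-loader", "css-loader"] }
        ]
      }
    };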


> 906,380

literally? (if not, what's the actual figure?)



create-react-app is just the CLI to bootstrap the project; you want to look at react-scripts instead:

https://npm.anvaka.com/#/view/2d/react-scripts


So... is all this stuff actually Javascript? Or are these, say, .po localization files, CSS+PNG template files for client libs, etc.?

Or, even worse, are these docs, tests, and other stuff that have no place in the checkout of a recursively-resolved dependency?


> Or, even worse, are these docs, tests, and other stuff that have no place in the checkout of a recursively-resolved dependency?

Bingo. I've seen an embarrassing number of what should be dev-dependencies pulled in transitively for some packages, as well.

Nothing like seeing some four-layer-deep dependency pulling in gulp-cli because somebody didn't know what they were doing...


I hope (and expect) that there will be larger libraries that implement the functionality of a lot of very small packages. JavaScript really needs a comprehensive standard library.


On the other hand, it tends to be harder to change things in a standard library (i.e. that is where good code goes to die) for the sake of not breaking dependencies.

If all libraries are tiny then a set of “the best” libraries can evolve over time through the preferences of individual people.

Obviously this doesn’t work so well for finding libraries or having them work great together.


Most languages have standard libraries and things are fine that way. Most NPM modules have a scope that’s too small. Instead of 100 string modules they should get merged into one string library.


Large standard libraries are a major pain for evolving the language and the libraries in question. (E.g. Python's standard library is notorious as "where modules go to die"). Putting each piece into its own library is great as long as your tooling can manage them effectively.

What could be useful is some concept of a "metalibrary" or "platform", a la the "rust platform" and "haskell platform" efforts - a sensible aggregate of libraries with compatible versions that work together and cover most of the common things you need - not as an inflexible monolith that you have to do big-bang upgrades of, but as a starting point that you can then customize as needed. Almost like a Linux distribution. But keeping the ability to upgrade one small piece without touching the others is really important.


Things like string manipulation, array manipulation or similar are pretty well understood and could be put into a comprehensive library. On the few projects I have seen, there were some trivial packages like "isodd" or something like that, which then recursively pulled in dozens of other trivial packages. This just doesn't seem to be a problem in other languages.


Major upgrade problems because of supposedly simple, well-understood library matters like string manipulation or array manipulations are definitely a problem that other languages have - just look at the whole Python 3 saga.


Neither .NET nor C has that problem.


C doesn't have even basic string operations in its standard library. It could do with a lot more libraries but is hampered by not having any kind of dependency management tooling. Where there are monolithic libraries, upgrading is absolutely a problem, but a lot of functionality in C is just not there, which ends up with a lot of duplicated code, snippets copied and pasted or simply reimplemented in every project, and all the associated maintainability and security problems.

.net I'm less familiar with, but I've certainly heard of projects being stuck on older versions of it, and again for a lot of tasks it tends to be a case of there just not being a library available at all.


.NET suffered from the fact that you couldn't move easily between versions. This changed with .NET Core, where you can target (and install multiple versions of) different instances of the runtime/SDK.

That, combined with the ability to target .netstandard and a much clearer line of sight for what is compatible with those standards, has made this, to me, a non-issue.

It was historically though, very hard to deal with for large apps.


Again I'm not really part of that ecosystem, but it sounded to me like .net core/standard involved doing exactly what Ididntdothis was arguing against - moving functionality out of the language standard library and into separate libraries with their own independent versioning.


before I say more, I just want to clarify I meant no ill direction or intent your way. I genuinely think there is a fine balance to be struck between a good stdlib & just moving things into the broader module ecosystem for any language. Some get it more right than others (Go Lang, for instance, I think is cool, in that it has the stdlib, but if I understand it correctly the stdlib is maintained as separate installable libs)

Microsoft hasn't moved much out of the core C#/.NET libraries; rather, they made it easier to interop between the SDKs/runtimes. The actual libraries themselves haven't changed much in respect to this issue.


The reason I don't believe it's a balance is simply that I haven't seen a downside. The world has drifted towards microlibraries largely because of real advantages, brought on by automated package management making them practical. People here seem to be grasping at straws to find a justification for disliking that change.


I made a repository to track what I consider to be the best Node libraries: https://github.com/blewisio/hqm. It's just a readme and directory (for now), but serves as a handy reference when trying to find a good library that does some particular task.


Why would you want the libraries you depend on to change??


They have bugs, and fixing them is good? They have better implementations, eventually, and improving performance is good?


The world changes, my friend.


I mean, we used to all just use jQuery and lodash. Then the cool kids decided that wasn't cool anymore, and every function should be a separate package.

I'm sure the wheel will come around again.


Show us this, "the wheel"


man tough crowd :)... I thought more of you would've gotten the Futurama reference ... Fear of a bot planet.. I think..

edit no, the one with the evolution of robots and such


That's because there's no point including an entire library if you only need a few functions. It means that your deployed bundle is much smaller, which means that users don't need to download unused code and waste time/bandwidth (ie a good thing).

Tree shaking still isn't that great or widely available, so separate packages mean you can do this manually.


> Tree shaking still isn't that great or widely available

This sounds like an opinion based on the state of web development a decade ago.

In 2020 on the other hand the Google Closure Compiler has been a mature project for years, and even projects not explicitly focused on dead code elimination (bundlers like Webpack and Rollup) do a decently thorough job of cutting out unnecessary code with the right plugins.
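The main prerequisite is giving the bundler static, named ESM imports to analyze; roughly (lodash-es used only as an example):

    // tree-shakable: only debounce (and what it needs) should survive into the bundle
    import { debounce } from "lodash-es";

    // versus a whole-library CommonJS require, which is much harder to prune:
    //   const _ = require("lodash");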


If there is a set of standard libraries, wouldn't they be cached in the browser anyways?


Like underscore or jQuery?


Probably lighter than Rails, which feels very heavy trying it again after a while. A fresh new Rails app with devise and the postgresql client nets me a vendor/bundle folder of 183MB. Of course it needs an install of node as well.


Tired of all the bloat in JS, with the distributed app weighing a whopping 2 MB, I said to myself &#@£ this, I'm going to make a native app. What I ended up with was a 300MB statically linked binary - to show "hello world".


I guess you are trolling, because a native app can print "hello world" in around 100 bytes.

That's a difference of more than 6 orders of magnitude :')


It is true that a web app needs a browser. But a native app needs an operating system (OS). Something that I find fascinating, though, is apps that you can boot into, e.g. they do not require an OS. But if you want graphics, mouse support, sound playback, etc., it becomes quite complicated. Compare that to a web app where you get all that functionality for free, with no libraries needed. And in between we have the shared-lib utopia with Debian et al. and DLL hell with Windows. Each stack has its advantages and disadvantages. I however don't see bundle sizes on the web as a huge problem, at least not yet. With service workers you can download the app once, then even use it offline. Everything is just one click away and can be reached via a URL. And if you stick with "vanilla" it can also be fast and lightweight.


Wait, what?

So you think a kernel is large and has to be counted in the total, even if a browser also requires a kernel.

Yet you think web apps don't need libraries, so you are not even counting the browser and its dozens of libraries?


What stack did you use?


Dlang with bindings to a C GUI lib for Linux.


300MB; so probably golang with a whole lot of go packages.

But for real. Terraform is like 300mb for the binary, easily the largest single binary on my system. (And it’s golang)


> statically linked

Why?


I wanted to optimize startup time. And I was thinking it would be easier to distribute with just one "big" executable.


I think about this stuff from time to time... If we think out into the future (50-100 years?)... will there eventually be such a huge footprint for frameworks / packages that there is nobody who remembers how some of the old dependencies were written? Interesting to think of a future bug where nobody would know the solution because it lies so deep in a dependency somewhere :)


You might find a talk[0] from Jon Blow on this very subject from last year interesting.

[0] https://youtu.be/pW-SOdj4Kkk


Jonathan Blow publicly criticizes Twitter engineering for being unproductive because they haven't released many user-facing features. He regularly berates the entire web industry without knowing anything about how it works.

His knowledge is limited to an extremely tiny domain of programming, and he regularly berates people for not following his philosophy. Meanwhile, it took him eight years to make The Witness. What did he build with his eight years time? A first person puzzle game with essentially no interactive physics, no character animations, and no NPC's to interact with. (I actually enjoy The Witness, fwiw.) The vast majority of developers do not have the budget to spend eight years on a single title, and wouldn't want to even if they did.

The most notable thing about Jonathan Blow is how condescending he is.


If that's what you take away from Jonathan Blow then you need to detach from your emotions a bit. He annoys me with how condescending his attitude towards web development is (because I'm a web developer), but he justifies his positions and is open to argument about them. His talks (like the one linked above) are really inspirational, and he's released two highly successful products: two more than the majority of people you'll ever meet.

He's passionate and smart and interesting - and writing him off like that, I think, is not justified.


I thought much more highly of him when I didn't work in games and I was a web developer. Now that I work in games I don't think very highly of him. He's an uncompromising fundamentalist that sends would-be game developers down impossibly unproductive paths of worrying about minutia that will never matter to their projects before they've ever built their first game. He's famous for being a rude guest in both academia and in conferences. He's basically the equivalent of someone that says that you should NEVER use node because node_modules gets big and if you're writing a server it should be in Rust and if it isn't you're bad and you should feel bad. His worldview does not at all take into account the differing problems faced by differing people, having differing access to resources and differing budgets and time constraints. He is _only_ concerned with how you should work if you have no deadlines, an essentially unlimited runway, and your most important concern is resource efficiency. For most games projects, the production bottleneck is not CPU or GPU efficiency: it's the cost of art production. What he has to say is, for the vast, vast majority of people, not applicable. He is, essentially, an apex predator of confirmation bias.

The thing about Jonathan Blow is that for a lot of people, he's the first graphics-focused outspoken programmer that they run into and so they think he's some form of singular genius. He isn't.


That is a plot point in Vernor Vinge's A Deepness in the Sky.


I bet in corporations this is already happening. But in public frameworks and such it will never happen since each generation wants to write their own framework.


Programmer/Archeologist


It would be better if it showed how much each of the top 100 packages consumed when installed alone (along with their dependencies), as well as how much weight they single-handedly contribute to the 100-package-install total size (i.e. dependencies pulled by only them and no other package in the top 100).

Is the average weight high? Or are there a couple behemoths?


Cool! You could even scrape the history of this. If you take a date that you want to scrape, then you resolve the top-level packages with their release date that you'll find next to the version in the package.json. Use the latest found version that is before the date you're trying to scrape for.
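The registry metadata already exposes what you'd need for this: each package document includes a `time` map from version to publish date. A hedged sketch of that lookup (package name and cutoff date are just examples):

    const https = require("https");

    // newest version of `pkg` published on or before `cutoff`
    function versionAt(pkg, cutoff) {
      https.get(`https://registry.npmjs.org/${pkg}`, res => {
        let body = "";
        res.on("data", chunk => (body += chunk));
        res.on("end", () => {
          const times = JSON.parse(body).time;
          const candidates = Object.keys(times)
            .filter(v => v !== "created" && v !== "modified")
            .filter(v => new Date(times[v]) <= cutoff)
            .sort((a, b) => new Date(times[a]) - new Date(times[b]));
          console.log(pkg, "@", cutoff.toISOString(), "=>", candidates.pop());
        });
      });
    }

    versionAt("react", new Date("2019-01-01"));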


I do hate this about JavaScript. One of the main reasons on the web side (apart from JS not having a good STL), however, is that enterprises are still locked into IE11, which requires all these transpilers/bundlers in order to use modern syntax, and those are usually the heaviest of all these dependencies.


Vue-cli at least has a very neat feature: if you build with the --modern flag it will create two versions of your scripts, one with modern syntax, smaller and faster for modern browsers, and another legacy version for the older browsers you have to support. It then uses the nomodule HTML attribute to automatically load the version your browser needs. This also has the nice effect of saving you bandwidth costs.
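The underlying trick is the module/nomodule pair, which in the generated HTML boils down to something like this (simplified; file names illustrative):

    <script type="module" src="/js/app.modern.js"></script>
    <script nomodule src="/js/app.legacy.js"></script>

Browsers that understand ES modules load the first and skip the nomodule script; older browsers ignore type="module" and load the legacy one.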


Been spending the last few days building a new site for a long running project I own.

For the server, I’m using hapijs. It fits my style a whole lot better than Express.

For the client side, it’s entirely just raw css and js. Absolutely no frameworks.

Eventually I’m going to have to use a few third party tools when I get around to adding square or braintree integration but that’s a way off.

It’s an absolute joy to just sit down and get stuff done. Today I was able to move from getting the basic node server running in under an hour to building out a couple pages and writing some content. Added some css styling like back in the old days and without needing less or sass. Still only about as good as you’d expect from one day of work but it was so easy to do.

I didn’t spend hours setting up tooling, researching which extra npm modules were needed, etc. There’s no React garbage, no Everest of overhead just to get to a point where I could work.
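For what it's worth, the hapi side really is only a handful of lines; a minimal sketch of that kind of setup (hapi v18-style API, names and port illustrative):

    const Hapi = require("@hapi/hapi");

    const init = async () => {
      const server = Hapi.server({ port: 3000, host: "localhost" });

      server.route({
        method: "GET",
        path: "/",
        handler: () => "Hello from hapi"
      });

      await server.start();
      console.log("Server running at", server.info.uri);
    };

    init();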


Your frustration with React and the npm ecosystem is common but easily remedied; try NextJs.


NextJS has been an absolute, undocumented nightmare at our company; we've gone back to CRA and "hand-rolled" SSR to retain our sanity.

It's one of those modern magic frameworks that when you need to step outside the simple hello world example app you spend most of your days chasing buggy behaviours mentioned on unresolved month old github issues.

When this happens, usually your only choice and suggestion is to update to the beta version of some core library, which breaks tons of other packages that haven't yet been updated to the new rewritten API; rinse and repeat.

Never again.

To be honest, NextJS wasn't worse than the rest of the JS ecosystem, the problem is systemic.


Yikes. Your experience in no way matches mine, nor that of many others I've worked with over the years. Using NextJs doesn't absolve you of the responsibility to understand the fundamentals of how your webapp works, but IMHO / IME it strikes a great balance when it comes to DX and convenience, with direct extensibility and support to drop a layer down to do things by hand; IOW, yeah there's some magic but it's the good kind. I'd be very interested to hear more about your "nightmare", and what circumstances led to the painted-corner of depending on beta versions of core libraries. Care to elaborate?


If you think this is bad, it's going to get 1000x worse with WASM. A lot more rope there. You will see the Linux kernel being pulled in because someone used system threads.


Does anyone know what that late November spike is all about?


Since this tracks the combined size of the top 100 npm packages, my guess is that those spikes are caused by packages entering and leaving the list.


That would make sense. Probably the number 100ish spot got replaced by a much larger package.

My initial thought was a widespread security patch or something.


Quote from the bottom of the website (easy to miss):

> On my Macbook Pro the folder containing Xcode is larger than 13 GB. And to get the .NET application running at work I had to install over 20 GB of applications and dependencies. So, relativity speaking, JavaScript is still ok.

Personally, I completely agree with that. A single gigabyte of hard drive space on a developer's machine or a modern server is not terribly wasteful.


But also, the point about .NET is complete garbage. The full .NET 3.1 SDK download for Mac OS X is around 100MB. Based on that hyperbolic statement I'd be hesitant to trust anything else on the site.

EDIT: I know he says "the application at work" and it's just an offhand comment, but you shouldn't read too much into it. Unless it means "I had to install Windows 10 on a VM" because it uses classic framework libraries that don't work with Mono or something.


I can't find the code that does the actual package installation. Does it do deduplication / linking of modules, or does it install full tree?


It runs on circle CI (look at the .circleci folder). Half the logic is in there, the other half in ./size. I am on mobile so I cannot check the methodology, but it seems like the CI is committing the stats back to the repo


I think yarn workspaces and lerna can help reduce a lot of the redundancy in your project(s).


Our corporate Cisco security filter blocks the website `size-of-npm.netlify.com` for a "security threat".

NET::ERR_CERT_AUTHORITY_INVALID is the listed error


Note to the author: fewer memes and better information... What's the chart in? Megabytes, I presume?


It says MB on the y-axis label.


Edit: my bad, the dark mode plugin didn't invert the legend text! Black on dark gray...


The unit is on the y axis.


It seems to me that this website could be a completely static site that is automatically regenerated once a day with a new graph. Instead you seem to be succumbing to the exact same problem you are highlighting. This is a good example of how the culture of software engineering is moving in the wrong direction. We need to all make a stand and say no more to this culture.

Of course, that's made difficult by the fact that it's against our own collective interest. At this rate even the worst developers will be able to be employed managing some gargantuan and wholly unnecessary software stack.

Not to knock on you too much, but this culture of bloat reminds me of the classic "Evolution of a Programmer" joke:

https://www.smart-jokes.org/programmer-evolution.html

Let’s all try and be like the Guru Hacker, not the Seasoned Professional!


It took more time for me to read your complaint about "bloat" than it took to load the site, read it all, and come to the comments. This type of elitist posturing is more corrosive to my enjoyment of this profession than any poorly optimized website.


> We need to all make a stand and say no more to this culture.

I don't think the path forward is to bash all websites that use a common npm library.


Unfortunately the culture I’ve experienced in the past few years has been essentially that, but for anything that does not use a common npm library. “Why is this not done with React/Vue/Typescript/Angular/<insert favorite thing here>?” is a question that’s posed less as a question and more as a condescension in many cases. The rebuttal “because not using XYZ works just fine” is not often accepted, true as it may be, followed by bashing the non believer in question into submission.

I’m with you though, I don’t think the bashing is necessary, in either direction. It just breeds contempt.


This is why I don't.

Vanilla JS! 0 Dependencies!

Let the 1995 revolution begin!


Nothing wrong with modularizing code or pulling in complicated things as dependencies. Rewriting the same tools a million times isn't a great world either. The only thing I avoid/take issue with is pulling huge dependencies to use only a tiny part. For example, I wouldn't base anything in my projects on a design like this page; it pulls in 1 MB of React to dynamically display an image that updates once a day.


Have fun building anything remotely complex.


I've had great fun building my canvas element library entirely in Vanilla JS. The work has helped me discover parts of the JS/DOM ecosystem that I never realised even needed to exist.


A lot of the apps being built on frameworks aren't remotely complex.



Unpopular opinion: 1GB is not a lot of data for most people in the first world.


Most people aren't living in the first world, and don't have high bandwidth / low latency connections with which to make pulling down 100s of MB of javascript a non-event.

While, yes, they can choose to just not use such tools or libraries, it also presents a fun barrier for learning.

It's also perfectly fine to not give a damn about either of the above.


It's worth noting that node_modules is for project dependencies and most of it won't be included in the final assets. Many projects make use of build tools, command line utilities, testing suites and other libraries in development which don't get deployed. Even for actual app dependencies, many packages include source code, type definitions and multiple choices of builds which inflate the size of the node_modules folder but don't get used in production. So hundreds of megabytes of dependencies can easily be used to make something only a few megabytes in size.
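The split lives right in package.json: only the "dependencies" section feeds the shipped bundle, while "devDependencies" (often the bulk of node_modules) stay on dev machines and CI. An illustrative example (package names and versions are just placeholders from that era):

    {
      "dependencies": {
        "react": "^16.12.0",
        "react-dom": "^16.12.0"
      },
      "devDependencies": {
        "webpack": "^4.41.0",
        "jest": "^24.9.0",
        "eslint": "^6.8.0"
      }
    }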

Your point still stands of course!


Um. So as someone not living in the first world, let me give some perspective. People here do not use "first world" sites. Even without the JavaScript, sending bits across the ocean just to see an image takes too long.

Everyone uses local nation sites, some of which even use languages spoken by no more than 5 million people world wide.

We aren’t the target market for the “first world” website, and that’s fine as the first world isn’t the target for any of our digital goods either.

It just feels a bit patronizing when this discussion is brought up, as no one is dying to use a random American made website. Facebook and Google are exceptions, not the rule.


It's also not a new problem.

I remember downloading a 150mb JDK overnight on dial-up for Java 2.


Most people are not living in the US. Just my target audience. It will never be a global enterprise, so who cares if people elsewhere have issues. As a developer I am not spending time on documentation translation (or any for that matter)


well usually a framework minimizes and scaffolds itself down to a few js files, css, and html; front end won't ever see the node_modules; only the backend will and that doesn't matter to the user


I wonder what percentage of React/Angular/et c contributors live in places where 1GB isn’t a lot of data.


I think that's a relatively popular opinion and not what people mean when they point out the size of the node_modules folder is huge.


I don't really care about bytes of my disk being used. But inodes of my filesystem are never going to be cheap. If there are so many little files in the node_modules dir that it takes ~3 minutes to run `npm install` or to `rm -r` the folder thus created, that's a bit silly.


We use shared dev VMs at work to help ensure consistency, and have actually run out of inodes several times solely because of node. The only error message you get is about disk space, so it's a really confusing situation for people who haven't seen it before.


I am in Australia, which is "first world" (I think you mean "developed") on a good day.

Over the new year, I was in an area where all power and phone communications were first limited, then cut completely. I was able to send/receive 30 megabytes of data in about 8 hours.

I understand there's always competing constraints and priorities, but shouldn't we be striving to do the best possible job?


I guess what I meant is that we should be doing a better job on bandwidth; a few hundred megabytes of deps isn’t a big deal in the scheme of ram/storage these days.

Hopefully things like Starlink will improve the situation a lot.


So, if gas was 10 cents a gallon, buy a fullsize truck?


Better: If a 64GB SSD is $20, use a 20GB OS.



