Hacker News

If I'm understanding (and agreeing with) your gripe correctly, isn't it two solutions to the same perceived problem?

My experience is that the world of Python dependency management is a mess which sometimes works, and sometimes forces you to spend hours to days searching for obscure error messages and trying maybe-fixes posted in GitHub issues for some other package, just in case one helps. This sometimes extends further - e.g. hours to days spent trying to install just-the-right-version-of-CUDA on Linux...

Anyway, the (somewhat annoying but understandable) solution that some developers take is to make their utility/app/whatever as self-contained as possible with a fresh install of everything from Python downwards inside a venv - which results in (for example) multiple copies of PyTorch spread around your HDD. This is great for less technical users who just need a minimal-difficulty install (as IME it works maybe 80-90% of the time), good for people who don't want to spend their time debugging incompatibilities between different library versions, but frustrating for the more technically-inclined user.
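The per-app venv pattern described above can be sketched in a few lines. This is a minimal illustration, not any particular app's installer: "mytool" and the paths are made up, and the PyTorch install line is left commented out since it pulls gigabytes per app (which is exactly the duplication being discussed).

```python
import subprocess, sys, tempfile
from pathlib import Path

# Hypothetical per-app layout: give each app its own venv so its
# dependencies (e.g. a private copy of PyTorch) can't clash with any
# other app's. "mytool" is illustrative.
app_dir = Path(tempfile.mkdtemp()) / "mytool"
venv_dir = app_dir / ".venv"
subprocess.run([sys.executable, "-m", "venv", str(venv_dir)], check=True)

venv_python = venv_dir / "bin" / "python"  # ".venv\Scripts\python.exe" on Windows
# subprocess.run([venv_python, "-m", "pip", "install", "torch"])  # per-app copy

# The venv's interpreter is isolated: its prefix differs from the base install.
out = subprocess.run(
    [str(venv_python), "-c", "import sys; print(sys.prefix != sys.base_prefix)"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # True
```

The trade-off is exactly as stated: every app works out of the box with its own pinned everything, at the cost of N copies of the heavy dependencies on disk.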

This is just another approach to the same problem - one which presumably also means even less work for the maintainers, since it avoids Python installs and packages altogether?




I get that, my issue is when the model is coupled with the app, or the app just presumes I don't have it downloaded and doesn't ask me otherwise. This is like basic configuration stuff...

What I suspect is happening is that people are cargo-culting zero-click installations. It seems rather fashionable right now.


I don’t think making it easy to install is cargo-culting. In my case it’s an accessibility thing. I wanted a private alternative that I could give to nontechnical people in my life who had started using ChatGPT. Some don’t understand local vs cloud and definitely don’t know about ggufs or LLMs but they all install apps from the App Store.


In the README of the project (the TFA of this whole thread) there is the option to download the app without the model:

"You can also also download just the llamafile software (without any weights included) from our releases page, or directly in your terminal or command prompt"

There is no cargo-culting going on. Some of us do legitimately appreciate it.


Which has been followed. This comment was not a response to this specific app, but rather to a general trend I've noticed, one mentioned at the start of this thread.


I was responding to this complaint: "my issue is when the model is coupled with the app".

In this specific case there is an option for you that addresses this complaint, where the model isn't coupled with the app.


Is this the sentiment around?

Is having everything normalized on your system really worth it? I would say having (some) duplicates is mostly fine - better than having some spooky-action-at-a-distance break things when you don't expect it.

I expect the future is something like Windows' WinSxS, NixOS's /nix/store, or pnpm's .pnpm-store, where the deduping isn't "online" but is still somewhat automated and hidden from you.
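The offline, store-based deduping those tools do can be sketched as a toy content-addressed store. This is an illustration of the idea (identical bytes hash to the same store entry and get hardlinked to one copy), not any of those tools' actual layout:

```python
import hashlib, os, tempfile

def store_file(store_dir, src_path):
    """Hardlink src_path into a content-addressed store, deduping
    identical files: same bytes -> same hash -> one copy on disk."""
    with open(src_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    store_path = os.path.join(store_dir, digest)
    if not os.path.exists(store_path):
        os.link(src_path, store_path)   # first sighting: adopt into the store
    else:
        os.remove(src_path)             # duplicate: replace with a hardlink
        os.link(store_path, src_path)   # to the single stored copy
    return store_path

# Two "apps" install identical files; after storing, both names share one inode.
root = tempfile.mkdtemp()
store = os.path.join(root, "store")
os.mkdir(store)
a = os.path.join(root, "app_a.bin")
b = os.path.join(root, "app_b.bin")
for p in (a, b):
    with open(p, "wb") as f:
        f.write(b"model weights" * 1000)
store_file(store, a)
store_file(store, b)
print(os.stat(a).st_ino == os.stat(b).st_ino)  # True: one copy, two names
```

Each app still sees its file at its own path, so nothing breaks at a distance, but the bytes exist once - which is roughly the bargain pnpm's store strikes.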


> Is this the sentiment around?

Yes? It's right here, at the least.

And if that's the future, then the future sucks. We can teach people to be smarter, but no, instead our software has to bend over backwards to blow smoke up our ass because grandma.



