
> The Filesystem Hierarchy Standard (FHS) defines the directory structure and directory contents in Linux distributions.[1] It is maintained by the Linux Foundation.

https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard


As the other commenter said, most of these patches are caused by NixOS not following the FHS (except for /usr/bin/env), which can be considered a good thing in itself. In an ideal world, it would not be a problem (programs should not assume anything about the filesystem hierarchy, and should always use $PATH or their own environment variables).

The issue of integrating with other language ecosystems is indeed very problematic (and it's a very hard problem), especially with regard to the chain of trust (by the way, it's not completely broken: you can still read the original package hash in the derivation, and evaluate that derivation to get the Nix derivation hash), and with regard to the fact that including every package of every ecosystem in `nixpkgs` is not doable.

There is also the (related) problem that evaluating a configuration implies evaluating `nixpkgs` itself, which requires a good amount of available memory, and this can only get worse as `nixpkgs` grows.


I don't know about Erlang, but typically language-specific packages are handled with a tool that converts from the language-specific package management system into Nix, and that tool will do whatever validity checking you normally expect from the language package manager. For example, for Ruby, the tool uses Bundler under the hood, which reads the Gemfile.lock, and then it converts the results into Nix expressions.

Someone could certainly submit a PR to the nixpkgs repo that purports to do that but actually modifies the generated Nix expressions to point at different sources; however, this would be discovered by anyone who re-runs the package update process.
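To make the shape of that conversion concrete, here's a toy Python sketch of the lockfile-to-Nix idea. It is not the actual tool (for Ruby the real converter is bundix); the rubygems URL pattern is a guess at the usual layout and the hash is a placeholder that the real tool fills in by fetching and hashing each gem.

    # Toy sketch of the "language lockfile -> Nix expression" idea, not bundix itself.
    import re

    def parse_gemfile_lock(path):
        """Yield (name, version) for every gem pinned in a Gemfile.lock."""
        spec_re = re.compile(r"^    (\S+) \(([^)]+)\)$")  # 4-space indented "name (version)" lines
        with open(path) as f:
            for line in f:
                m = spec_re.match(line.rstrip("\n"))
                if m:
                    yield m.group(1), m.group(2)

    def to_nix(gems):
        """Render the pinned gems as a Nix-style attribute set (gemset.nix-like).
        The URL pattern is an assumption, and sha256 is left as a placeholder:
        the real tool computes it by fetching the gem."""
        out = ["{"]
        for name, version in gems:
            out += [
                f'  "{name}" = {{',
                f'    version = "{version}";',
                '    source = {',
                f'      url = "https://rubygems.org/downloads/{name}-{version}.gem";',
                '      sha256 = "<placeholder>";',
                '    };',
                '  };',
            ]
        out.append("}")
        return "\n".join(out)

    print(to_nix(parse_gemfile_lock("Gemfile.lock")))

The point is that the generated expressions pin exact versions and hashes, so tampering with the generated file is caught as soon as anyone regenerates it.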


There are lots of things to be said about Nix and NixOS, and many of them have been mentioned on this page, but there's one thing I really like about Nix: the fact that the configuration is expressed as a function that takes its own result as an argument, and that the final configuration is the fixed point of that function. This is a very powerful concept, as these functions can be composed together.

I think this idea is not specific to Nix, and I wish it were used a lot more in configuration languages. For example, it's something I really missed when using Ansible. The lack of this feature means that writing the inventory is much harder than it needs to be, and I've seen horrible hacks that try to get around it.
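To illustrate the fixed-point idea outside of Nix, here's a toy Python sketch (nothing Nix-specific, and the merge semantics are far simpler than what the NixOS module system actually does): each "module" maps the final configuration to the fragment it contributes, so a module can read values defined by other modules, and the final configuration is found by iterating until nothing changes.

    # Toy illustration of "configuration as a fixed point", not the NixOS module system.
    def merge(fragments):
        result = {}
        for frag in fragments:
            result.update(frag)
        return result

    def fixed_point(modules, max_iterations=10):
        """Naive fixed-point iteration: feed the current result back into the
        modules until the merged output stops changing (Nix gets the same
        effect through laziness rather than explicit iteration)."""
        config = {}
        for _ in range(max_iterations):
            new = merge(m(config) for m in modules)
            if new == config:
                return new
            config = new
        raise RuntimeError("configuration did not converge")

    # Two composable "modules"; the second reads values the first defines.
    base = lambda final: {"hostname": "myhost", "domain": "example.org"}
    mail = lambda final: {"mail_name": final.get("hostname", "") + "." + final.get("domain", "")}

    print(fixed_point([base, mail]))
    # {'hostname': 'myhost', 'domain': 'example.org', 'mail_name': 'myhost.example.org'}

The same trick is what lets a NixOS module reference `config` values that are defined by other modules.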


Steam itself works pretty well, but I've had many games just inexplicably segfault on startup. It's weird because I get the same segfault:

- using steam-run
- in an FHS user environment
- in an LXD container (on this NixOS host) running Debian 10

The same games run fine on a Debian 10 host, all of the libraries they link to are bit-identical to the ones in the LXD container, and I've even tried running the exact same kernel.

I'm lost here, as I feel like I've made everything match (at least in the LXD container): the kernel is bit-identical, the games are bit-identical, the libraries they link to are bit-identical. There must be something I'm missing.


Try strace and look at what is loaded by ld.so?


Is the ballot for Olympic medalists public?

What are the incentives for someone not to give 99 to the candidate they want to win, and 0 to all the others?


There's nothing wrong with that. You can also decide to vote against one candidate by giving everyone else 99, and that's okay too.


The page you linked takes 8s to display in my browser, even on subsequent reloads, just because I don't allow third-party scripts. It also displays no images, for the same reason. I really don't wish more sites were like this.


An interesting - but not surprising - thing about this is that compression algorithms can be more efficient on wider representations of numerically-high code points (e.g., for some Korean corpus, using UTF-32 instead of UTF-8 improves LZMA compression by ~10%).


How well does that corpus compress with LZMA if using a Korean-specific character encoding (such as EUC-KR)? And what about other combinations, with other character encodings and other compression algorithms?


EUC-KR doesn't improve much with LZMA (2% over UTF-16), but is better with gzip-9 (10% over UTF-16). I haven't studied this extensively, just did a few tests while waiting for it to download.
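If anyone wants to reproduce this kind of comparison, a few lines of Python are enough. This is just a rough sketch of the ad-hoc test described above (the corpus filename is a placeholder, and the exact numbers will of course depend on the corpus):

    # Compare how the choice of text encoding affects compressed size.
    import gzip, lzma

    def sizes(text, encoding):
        raw = text.encode(encoding)
        return {
            "raw": len(raw),
            "lzma": len(lzma.compress(raw, preset=9)),
            "gzip-9": len(gzip.compress(raw, compresslevel=9)),
        }

    with open("corpus.txt", encoding="utf-8") as f:  # some Korean corpus (placeholder name)
        text = f.read()

    for enc in ("utf-8", "utf-16", "utf-32", "euc-kr"):
        try:
            print(enc, sizes(text, enc))
        except UnicodeEncodeError:
            # EUC-KR cannot represent every code point a Unicode corpus may contain.
            print(enc, "not representable")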


This site cannot be viewed without third-party JavaScript enabled (it shows a blank page on both Firefox and Chrome), and this is happening more and more on HN links. I think it's pretty sad that so many websites are adding a client-side dependency on third-party code just to view the website. In this case, this doesn't seem intentional, as the code is full of <noscript> tags.

I also cannot begin to understand how a <body> tag's class attribute can take 4400 bytes. In what kind of situation do we need to apply 146 CSS classes to the <body> tag?


It's a squarespace site, and this particular behavior is pretty universal to squarespace sites, near as I can tell.

Since it's due to squarespace, I wouldn't really characterize this as "third-party" javascript per se, but I agree it's annoying the page needs javascript to even render. Boo on squarespace.

If you don't want to enable all third-party scripts, try uMatrix (https://chrome.google.com/webstore/detail/umatrix/ogfcmafjal...). It has very fine-grained control over what assets you allow from where (it's why I knew offhand this was squarespace). A warning though: it's got a bit of a learning curve, and depending on how restrictive you want things, you will probably end up spending a fair amount of time un-breaking the internet.

Needing third-party scripts isn't necessarily evil in my mind, though. Aside from squarespace-like cases where a page loads scripts from the underlying platform (squarespace, or custom domains on top of medium), the other common case I see is loading scripts straight from cdnjs or similar. Is it really evil or insecure to load jquery from cdnjs?


Strange, it works for me on an ancient version of Safari (9.1.3) that's kitted out with ghostery and JS blocker. Most modern sites don't work on it, but this one did.


That's probably because that version of Safari ignores this little bugger:

    .site.page-loading { opacity: 0 }
I used the developer tools in Firefox to disable that one and the page was instantly viewable without JavaScript.

That's right, the site uses CSS to hide all the content and then presumably re-enables it somewhere in the gobs of JS, maybe after it's loaded whatever other analytics/tracking crap there is. Absolutely vile.


It makes me sad that this is (currently) the 2nd-top-voted toplevel comment on this post.

Can we stop whining about how people choose to present their work and instead discuss the (in this case, really cool) work itself?


People are rightly complaining about the accessibility (or lack thereof) of the content: it needs to be nothing more than a static page, and in fact would be perfectly readable as one, if it weren't for one line of CSS that hides the content unless JS is run.


By "third-party", I presume you mean anything originating from other than the site's domain?

I haven't actually checked the site in question (on mobile, kinda tricky), but isn't this OK as long as a subresource integrity hash is used?


My main gripe is that JS is required to view the website. It doesn't work on my mobile phone (which is rather old, I admit), and it takes a huge amount of CPU time to load on my X60.

I really like that I can still use HN on this phone, but it's more and more frequent that I cannot open the links themselves.


As I understand it, the CIP is opt-in, so it is truly voluntary.

EDIT: Well, it's supposed to be opt-in, but apparently Intel has been sneaking it onto users' computers without their approval, which suggests that the checkbox used to install it from the driver assistant might be checked by default... so not really opt-in.


KSM is also not totally free: a good amount of CPU time has to be spent searching for merging opportunities (each candidate page is periodically scanned and looked up in KSM's trees, and that lookup scales logarithmically with the number of pages being tracked).

There is also a simple security implication, since you now have the possibility to over-commit your physical memory, and to suffer great consequences if many merged pages are unshared (written to) at the same time. Merged pages can be swapped out, but since they need to be scanned again after they are swapped in, before being merged again, there is a great potential for memory-pressure spikes in some configurations.
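For reference, that scanning cost is tunable through KSM's sysfs knobs. A minimal sketch (the paths are the standard ones under /sys/kernel/mm/ksm/, the values are purely illustrative, and this needs root):

    # Enable and tune KSM through its sysfs interface; the values here are illustrative only.
    KSM = "/sys/kernel/mm/ksm/"

    def ksm_set(knob, value):
        with open(KSM + knob, "w") as f:
            f.write(str(value))

    def ksm_get(knob):
        with open(KSM + knob) as f:
            return f.read().strip()

    ksm_set("pages_to_scan", 100)    # pages scanned per wake-up: higher means more CPU spent merging
    ksm_set("sleep_millisecs", 200)  # pause between scan batches
    ksm_set("run", 1)                # 1 = start scanning, 0 = stop, 2 = stop and unmerge everything

    print("pages_shared: ", ksm_get("pages_shared"))    # number of unique merged pages
    print("pages_sharing:", ksm_get("pages_sharing"))   # how many more sites share them, i.e. the savings

Note that KSM only considers memory regions an application has marked with madvise(MADV_MERGEABLE), so merely enabling it does not scan everything.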

