phlip9's comments

Maintenance is much more practical when you use the dependency versions that upstream tests in its CI, not whatever mishmash of ancient or silently incompatible deps each distro separately decides to combine.

Upstream CI isn't magic that only they have. Distributions run the same tests upstreams do if they are made available.

If tests falsely pass, then that's a quality problem in the upstream tests.


Can definitely recommend eza (prev. exa). I've used it as an ls replacement for a long time with zero problems. If anyone's using nix home-manager, here's my config for inspiration:

    programs.eza = {
      enable = true;

      # In list view, include a column with each file's git status.
      git = true;
    };

    programs.bash.shellAliases = {
      ks = "eza";
      sl = "eza";
      l = "eza";
      ls = "eza";
      ll = "eza -l";
      la = "eza -a";
      lt = "eza --tree";
      lla = "eza -la";
    };


Agreed. A while back I played around with fuzzcheck [1], which lets you write coverage-guided, structure-aware property tests, but the generation is smarter than just slamming a fuzzer's `&[u8]` input into `Arbitrary`. It also supports shrinking, which is nice. Don't know that I would recommend it though. It seemed difficult to write your own `Mutator`s, and it also looks somewhat unmaintained nowadays, but I think the direction is worth exploring.

[1]: https://github.com/loiclec/fuzzcheck-rs/
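To make the distinction concrete, here's a toy sketch of the two approaches. These names are illustrative stand-ins, not fuzzcheck's actual API:

```rust
// Illustrative only -- these names are not fuzzcheck's actual API.
#[derive(Debug, PartialEq)]
struct Input {
    op: u8,
    len: u16,
}

// `Arbitrary`-style decoding: slam the fuzzer's raw `&[u8]` into
// struct fields mechanically.
fn from_bytes(data: &[u8]) -> Option<Input> {
    Some(Input {
        op: *data.first()?,
        len: u16::from_le_bytes([*data.get(1)?, *data.get(2)?]),
    })
}

// A structure-aware mutator instead edits the typed value directly,
// so every mutant is well-formed by construction.
fn mutate(input: &mut Input, seed: u64) {
    if seed % 2 == 0 {
        input.op = input.op.wrapping_add(1);
    } else {
        input.len = input.len.rotate_left(1);
    }
}
```

The byte-driven path wastes a lot of the fuzzer's coverage feedback on learning the encoding; the typed mutator spends it all on the actual input space.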


Congrats on the release! I love the focus on devex w/ typescript and autocomplete. That's probably one of my biggest pain points with Nix -- writing any non-trivial package always requires a ripgrep adventure through nixpkgs. Finding the right poorly documented, poorly discoverable derivation attributes is such a chore.

What are your plans for cross-compilation or heavy package customization? One of nixpkgs' coolest party tricks imo is that you can just change the stdenv and get a musl static binary or cross-compiled binary.


> What are your plans for cross-compilation or heavy package customization? One of nixpkgs' coolest party tricks imo is that you can just change the stdenv and get a musl static binary or cross-compiled binary.

So in general, I don't think I'm going to have anything quite as powerful as Nix's overrides. But I'm hoping most of the use-cases for it will be covered by some simpler options:

- Since build definitions are functions, package authors can just take arguments for the things they want downstream users to be able to customize (e.g. `configure` flags, optional features and plugins, etc.)

- I haven't built it yet, but I think adding support for dependency overrides would be fairly easy, a la Cargo. Basically, you'd just fork or clone the package you want to tweak, make your tweaks, then set an "overrides" option to use it instead. I know that's not a super satisfying answer, but that should help cover a lot of use cases

- For toolchains specifically, I have an idea in mind for this as well (also not implemented at all). At a high level, the idea is that packages could use "dynamic bindings", which you can then override for downstream recipes (this would require some new runtime features in Brioche itself). The toolchain itself would effectively be a dynamic binding, letting you pick a different recipe (so you could swap glibc for musl, or gcc for clang, etc). Cross-compilation would also be built on this same feature
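For comparison, Cargo's existing version of the second option is the `[patch]` table, which redirects a dependency to a local fork without touching any downstream manifests (the crate and path names below are just examples):

```toml
# Use a local fork of a dependency. Every `serde` in the dependency
# graph now resolves to the fork; no downstream manifest changes.
[dependencies]
serde = "1.0"

[patch.crates-io]
serde = { path = "../my-serde-fork" }
```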


Interesting thought. Maybe an LLM would build deeper insight with only one training language. On the other hand, the model might overfit with just one language -- maybe multilingual models generalize better?


I don't know much about the TKey, but it looks like they have some kind of remote attestation protocol available? (https://github.com/tillitis/tkey-verification/tree/main/cmd/...). That's usually how you avoid TOFU.

(1) the tillitis CA certifies your TKey device platform. You can now trust that it's running a specific firmware version with some platform pubkey.

(2) Your custom software runs and derives a keypair from its derived secret + the program binary hash.

(3) Somehow your custom software's pubkey gets locally certified by the platform's pubkey from (1). (not sure what this looks like w/ the TKey)

You now have a chain of trust from (1) the tillitis CA -> (3) the TKey device platform pubkey @ some specific firmware version -> (2) your custom software pubkey @ some specific version.

Now that we have a trusted pubkey for our service, I would open a secure channel to it via Noise IK or something (https://noiseexplorer.com/patterns/IK/). The TKey platform definitely looks a bit anemic so getting this working might be a challenge...
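Structurally, the chain verification sketches out like this. Everything here is a hypothetical stand-in: real verification checks signatures (e.g. Ed25519), not issuer strings:

```rust
// Toy model of the chain-of-trust check. Field names and the
// string-equality "signature" are illustrative stand-ins, not the
// TKey API; a real verifier checks cryptographic signatures.
struct Cert<'a> {
    subject: &'a str, // pubkey being certified
    issuer: &'a str,  // pubkey that vouches for it
}

// Walk from the trusted CA root down to the app key: each link's
// issuer must be the previous link's subject.
fn chain_is_valid(root: &str, chain: &[Cert]) -> bool {
    let mut expected = root;
    for cert in chain {
        if cert.issuer != expected {
            return false;
        }
        expected = cert.subject;
    }
    true
}
```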


> I don't know much about the TKey, but it looks like they have some kind of remote attestation protocol available? (https://github.com/tillitis/tkey-verification/tree/main/cmd/...). That's usually how you avoid TOFU.

There is a tool to verify that the device is genuine by means of a signature. You're outlining most of the process. The question is whether avoiding TOFU is the goal, right? I'm thinking that, with the physical device in your hands during first use, it's quite reasonable to establish the identity for your 'program' + 'user-secret'.

> You now have a chain of trust from (1) the tillitis CA -> (3) the TKey device platform pubkey @ some specific firmware version -> (2) your custom software pubkey @ some specific version.

This does mean you make this a global + centralized effort, right? (Also, it creates a dependency.)

> (3) Somehow your custom software's pubkey gets locally certified by the platform's pubkey from (1). (not sure what this looks like w/ the TKey)

With the specific firmware version: this requires a (possibly centralized) certification-process if only for a keypair, or qualification effort (if any) for the program?

To conclude: I am not yet convinced that TOFU is necessarily a bad thing. However, I do appreciate some ability to authenticate over many uses / longer stretches of time (hence the key-exchange + authn). There is a trade-off here: TOFU can be eradicated, but that requires other properties and effort. OTOH, the program-specific secret makes for a very strict form of trust. I'll take your comments into consideration, but whichever way one chooses, there is a trade-off to be made.


This looks pretty neat! I especially like how well it composes with other tools.

Wonder how it compares with fastmod [0]? That's what I've been using for large-scale codemods/refactors. ripgrep is ofc insanely fast, so ripgrep+ren would probably fare favorably.

[0]: https://github.com/facebookincubator/fastmod/


sccache only caches builds run from the same absolute path, so indeed, different home dirs won't work


As a bystander, what would the reasoning be for doing this? I would have assumed that they'd hash each file and use that as a key in a lookup table.


In some languages, symbols are provided that evaluate to a file's path or parent directory, so program behavior can vary even for the same content hash. That's just one way paths can bleed in and violate hermeticity/correctness.
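Rust has exactly such a symbol, which shows why a pure content-hash cache key can go wrong:

```rust
// `file!()` expands at compile time to the path of the source file
// it appears in, so identical source bytes compiled from two
// different checkout directories produce different binaries. A cache
// keyed only on the file's content hash would wrongly share the
// artifact between them.
fn source_path() -> &'static str {
    file!()
}
```

Compiling the same crate under `/home/alice` and `/home/bob` bakes different strings into `file!()` expansions and panic messages, so the artifacts aren't actually interchangeable.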


It sounds like somebody is assuming a Docker build, where everybody's build uses the same file path. It's still a very silly restriction, because not every build happens inside Docker.


It’s an unfortunate safety tradeoff to guarantee consistency. Better visibility into program behavior could fix it.


They're listed separately as West Bank and Gaza Strip


A related example out in the wild:

Rust's `cargo bench` "winsorizes" the benchmark samples before computing summary statistics (incl. the mean).

https://github.com/rust-lang/rust/blob/master/library/test/s...
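For reference, winsorizing clamps outliers to a percentile bound instead of discarding them. A minimal sketch (illustrative, not libtest's actual code):

```rust
// Clamp every sample below the p-th percentile up to it, and every
// sample above the (100-p)-th percentile down to it. Outliers are
// pulled in rather than dropped, so the sample count is unchanged.
fn winsorize(samples: &mut [f64], pct: f64) {
    let mut sorted = samples.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = sorted.len() as f64;
    let lo = sorted[(pct / 100.0 * n) as usize];
    let hi = sorted[((1.0 - pct / 100.0) * n).ceil() as usize - 1];
    for s in samples.iter_mut() {
        *s = s.clamp(lo, hi);
    }
}

fn mean(samples: &[f64]) -> f64 {
    samples.iter().sum::<f64>() / samples.len() as f64
}
```

With a sample set like `[1..9, 100]` and pct = 10, the outlier 100.0 gets clamped down to 9.0, so a single slow iteration no longer dominates the mean.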

