Hacker News | cpuguy83's comments

If you import net or os/user then you'll automatically link against libc unless you disable cgo or set the "netgo" and "osusergo" build tags, respectively.


Author here.

You are confusing the `go` toolchain with the binaries generated by it.

The `go` toolchain itself is compiled statically, which is why it works without issues even on NixOS. Binaries built with the toolchain may or may not be linked against libc, depending on whether you enable or disable cgo. But this is unrelated to the toolchain, which will work regardless since it is itself statically compiled.

Edit: or in other words, the `go` toolchain is built with CGO disabled.


... Or pass some other linker flags to statically link the cgo parts.


Somewhat fresh kitty user (a few weeks now). What's your ssh issue?

Also saw Wezterm pop up last week and decided I'll definitely give it a go here soon.


The top two search results:

https://sw.kovidgoyal.net/kitty/faq/#i-get-errors-about-the-...

https://github.com/kovidgoyal/kitty/discussions/3873

Basically it comes down to kitty presenting itself as xterm-kitty and then getting other hosts to recognize that. Installing the terminfo on the remote is one option; lying and claiming to be xterm-256color is another (with caveats).

As with other kitty issues, the author is probably right. But for people who have many different kinds of remotes to log into, which they don't control, working around these issues is tedious.

Making kitty work is possible, and I learnt how to deal with these issues one by one, but comments here on HN pointed me to wezterm, and once I migrated I never looked back. For the simple stuff wezterm gets out of the way. And there's some interesting advanced stuff to pick up, such as multiplexing with ssh: https://wezfurlong.org/wezterm/multiplexing.html?h=ssh#ssh-d...
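For reference, the usual kitty workarounds look like this (user@host is a placeholder):

```shell
# Option 1: let kitty's ssh kitten copy the terminfo to the remote
kitty +kitten ssh user@host

# Option 2: install the xterm-kitty terminfo entry on the remote by hand
infocmp -a xterm-kitty | ssh user@host 'tic -x -o ~/.terminfo /dev/stdin'

# Option 3 (the caveated lie): claim to be a terminal the remote knows
TERM=xterm-256color ssh user@host
```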


You could wrap it in another struct and use a custom MarshalJSON implementation.


Anecdotally, and absolutely not production experience here, but I've had a Synology device running btrfs for 7 or 8 years now. The only issue I ever had was when I shipped it cross-country with the drives in it, and even then I was able to recover just fine.

This includes plenty of random power losses.


They do use btrfs. However, Synology also layers some additional tools on top of it. From what I remember (could be wrong about the precise details), they actually run mdadm on top of btrfs, and use mdadm to get the erasure coding and possibly the NVMe cache disk too. (By erasure coding I mean RAID 5/6, or SHR, which are still generally unstable in btrfs.)


I assume you mean running btrfs on top of md (mdadm) or dm (dmraid), not the other way around?


Whoops, you are correct! And it looks like it is dmraid, not mdadm.

https://daltondur.st/syno_btrfs_1/

Sorry about that!


FS corruption due to power loss happens on ext4 because the default settings only journal metadata, for performance. I guess if everything is on batteries all the time this is fine, but it's intolerable on systems without a battery.


The FS should not be corrupted, only the contents of files that were written around the time of the power loss. Risking only file contents and not the FS itself is a tradeoff between performance and safety where you only get half of each. You can set it to full performance or full safety mode if you prefer.
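The knob is ext4's `data=` mount option; a sketch of the three modes as fstab lines (device and mountpoint are placeholders):

```
# the default tradeoff: journal metadata only, but flush file data
# before committing the metadata that references it
/dev/sda1  /data  ext4  defaults,data=ordered    0 2

# full performance: metadata journalled, data written back whenever
/dev/sda1  /data  ext4  defaults,data=writeback  0 2

# full safety: journal file data as well as metadata (slowest)
/dev/sda1  /data  ext4  defaults,data=journal    0 2
```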


True, this is file corruption.


This is what BuildKit is. Granted, Go has the only SDK I know of, but the API is purely protobuf and highly extensible.


For that matter, Dagger (dagger.io) provides SDKs in multiple languages and gives you the full power of BuildKit (and then some extra on top).


sha256 has been around a long time and is highly compatible.

blake3 support has been proposed both in the OCI spec and in the runtimes, which at least for runtimes I expect to happen soon.

I tend to think gzip is the bigger problem, though.
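Concretely, the sha256 side is simple: an OCI digest is just `<algorithm>:<lowercase hex>` over the blob bytes, e.g. (a sketch):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// ociDigest returns a digest string in the "<algorithm>:<hex>" form
// the OCI image spec uses to reference blobs.
func ociDigest(blob []byte) string {
	sum := sha256.Sum256(blob)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	fmt.Println(ociDigest([]byte("hello")))
}
```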


> sha256 has been around a long time and is highly compatible.

Sure, and one can construct a perfectly nice tree hash from SHA256. (AWS Glacier did this, but their construction should not be emulated.)


But then every single client needs to support this. sha256 support is already ubiquitous.


Every single client already had to implement enough of the OCI distribution spec to be able to parse and download OCI images. Implementing a more appropriate hash, which could be done using SHA-256 as a primitive, would have been a rather small complication. A better compression algorithm (zstd?) is far more complex.


I don't think we can compare reading json to writing a bespoke, secure hashing algorithm across a broad set of languages.


Reading JSON that contains a sort of hash tree already. It’s a simple format that contains a mess of hashes that need verifying over certain files.

Adding a rule that you hash the files in question in, say, 1 MiB chunks and hash the resulting hashes (and maybe that’s it, or maybe you add another level) is maybe 10 lines of code in any high level language.


Note that secure tree hashing requires a distinguisher between the leaves and the parents (to avoid collisions) and ideally another distinguisher between the root and everything else (to avoid extensions). Surprisingly few bespoke tree hashes in the wild get this right.


This is why I said that Glacier’s hash should not be emulated.

FWIW, using a (root hash, data length) pair hides many sins, although I haven’t formally proven this. And I don’t think that extension attacks are very relevant to the OCI use case.


It's complicated. If you are using the containerd-backed image store (still opt-in), OR if you push with `build --push`, then yes.

The default storage backend does not keep compressed layers, so those need to be recreated and digested on push.

With the new store all that stuff is kept and reused.


Given dreams I've had, I'd say I'm the most creative while sleeping.


Especially with the loud noise suppression.


For me it’s the opposite. They have a “transparency” feature that lets through ambient noise.

I often wear one set to transparency when I’m alone and have a podcast going or something. Ideal for something like a grocery store but still leaves me with full awareness. They also detect if I start speaking and automatically pause whatever is playing.


Loud noise suppression works with transparency mode. You might be thinking of the similarly named noise cancellation mode, which does the opposite of transparency.

Loud noise suppression does a temporary switchover when a loud noise happens to try and protect your hearing.

