>For my filesystem I chose UFS over ZFS. Partially because I wasn't going to use any of the features of ZFS as I find them mostly useful only in servers but mostly because the ZFS option wanted to wipe my entire drive and I have 4 other operating systems on here that I'd rather not lose (Arch, NixOS, macOS, and Haiku if you're curious).
Pleasantly surprised this was addressed. My first thought after the mention of doing the install on a primary machine was "so did you just wipe all your data or what?". Seems there's some funky multiboot setup.
>Now this would be the point where I'd install i3 or openbox or something, but I noticed something in the handbook that caught my attention, Chapter 6: Wayland. I knew I had to use Wayland because fuck X11.
I think I would've felt the same here.
Fun read overall. I've never really wanted to run FreeBSD before, but it seems a bit more likely I'd try it (on a secondary machine) after reading this.
The reason Netflix content cache appliances use one UFS file system per drive instead of ZFS (or any other RAID / volume management) is that they run at the edge of what the hardware can do and can tolerate failures up to and including data corruption (to a point). It's a distributed cache and the video container formats have their own checksums.
They are running close to the theoretical peak per-socket memory and I/O bandwidth, and UFS allows them to use non-blocking async sendfile() with hardware-offloaded in-kernel TLS (IIRC also with hardware packet pacing). Their web server gets an HTTPS request, validates it, and starts streaming the data from a file on a UFS file system. sendfile() on UFS can DMA directly from the buffer cache to the NIC. If there is no valid buffer for the file range, the NVMe SSDs can DMA it into main memory and the NIC can then DMA it from main memory, without the bulk data ever going through the CPU. The TLS handshake is done the usual way on TCP sockets before the session keys are registered with the kernel, allowing zero-copy TLS send (and receive). The bulk encryption is also offloaded from the CPU to the ≥100Gbps NICs. The mbuf chains handed to the NIC driver contain the key material, the cipher suite, and references to offsets in pages of the buffer cache. The FreeBSD base system OpenSSL, as well as the default version of the OpenSSL port, are built with support for in-kernel TLS.
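The zero-copy idea is easy to demonstrate with plain sendfile(2), which Python exposes as os.sendfile. This is only a minimal sketch of the data path, without the kTLS, async sendfile, and NIC-offload parts that need FreeBSD-specific setup:

```python
import os
import socket
import tempfile

# Hedged sketch (plain sendfile(2) via Python, not FreeBSD's kTLS path):
# the kernel moves file pages to the socket directly, so the payload
# never passes through a userspace buffer. On the appliances the same
# idea is extended with async sendfile and in-kernel TLS framing.
payload = b"segment-of-video-data" * 1000  # ~21 KiB of fake media data

with tempfile.TemporaryFile() as f:
    f.write(payload)
    f.flush()

    # A connected socket pair stands in for the client's TCP connection.
    server, client = socket.socketpair()
    with server, client:
        offset, remaining = 0, len(payload)
        while remaining:
            sent = os.sendfile(server.fileno(), f.fileno(), offset, remaining)
            offset += sent
            remaining -= sent
        server.shutdown(socket.SHUT_WR)  # signal EOF to the reader

        received = bytearray()
        while chunk := client.recv(65536):
            received += chunk

assert bytes(received) == payload  # file arrived intact via the kernel path
```

The userspace loop only shuffles file descriptors and offsets; the bytes themselves are copied (or, with kTLS, encrypted and sent) inside the kernel.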
ZFS has to perform multiple data copies to implement verifying reads and has its own file system cache (the ZFS ARC), while UFS is tightly integrated with the kernel virtual memory subsystem. This means that ZFS has more overhead, but it can also do useful optimisations like splitting variable-sized file system blocks of up to 1MiB into scatter/gather lists of small allocations that can be decompressed as needed, which is often more useful than faster memory-mapped or DMA access. For example, ZFS compression allows me to fit four to five times larger PostgreSQL databases into main memory, because the databases contain lots of sorted data which compresses really well. Even the fastest disk I/O path can't beat no disk I/O at all. The CPU cycles to LZ4-decompress the data are cheaper than going from 128GiB to >512GiB of RAM per server.
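The "sorted data compresses really well" point is easy to reproduce. A small sketch, using zlib as a stand-in for LZ4 (which isn't in the Python standard library) and synthetic integers rather than real PostgreSQL pages:

```python
import random
import zlib

# Rough illustration, not ZFS itself: ordered data (like sorted index
# and table pages in a database) compresses far better than the same
# values in random order. zlib stands in for LZ4; the direction of the
# effect is the same even though the exact ratios differ.
random.seed(42)
values = [random.randrange(10**9) for _ in range(100_000)]

as_bytes = lambda vs: b"".join(v.to_bytes(8, "little") for v in vs)
raw_random = as_bytes(values)
raw_sorted = as_bytes(sorted(values))

ratio_random = len(raw_random) / len(zlib.compress(raw_random))
ratio_sorted = len(raw_sorted) / len(zlib.compress(raw_sorted))

print(f"shuffled: {ratio_random:.1f}x, sorted: {ratio_sorted:.1f}x")
assert ratio_sorted > ratio_random
```

Sorting makes neighbouring records share most of their bytes, so the compressor finds long matches at short distances; shuffled data only compresses on the always-zero high bytes.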
It's impressive how far they went to push as many bits per second as possible with as little hardware as possible, but Netflix did it by accepting trade-offs that won't work for most others, e.g. if a disk fails they expect to lose its content and slowly redistribute the lost data from another replica, if it was accessed frequently enough to be worth it. Their caching appliances just run degraded until they're either too degraded to be useful or it's convenient to service/replace them, since they're colocated all over the world. The redundancy is implemented at a higher level.
What makes Netflix special is that they run their own FreeBSD version that closely tracks the FreeBSD -current development branch and keep their local patches to a minimum by upstreaming their changes. By upstreaming their changes they don't have to maintain an ever-growing patchset, and the FreeBSD project gets valuable feedback on performance regressions and hard-to-reproduce bugs.
OpenBSD does wifi better without running a Linux VM, and setting up a desktop is easier. No ZFS and no Linux emulation, but you get another set of features, such as pledge/unveil, a much easier and safer upgrade path, more up-to-date Intel GPU drivers, and laptop things like brightness keys working right from kernel boot rather than through a crude desktop-daemon hack.
> This time however has been different. I've been using it basically full time for around 3 days on my laptop and I don't see myself stopping any time soon. So what changed? What's making it easier for me to use? Why didn't I give up? Why am I writing this when I have to be up in 6 hours?
It's not FreeBSD, it's just that the guy has matured. One more year and he'll deserve to be a FreeBSD user. Two more years and he'll bless it. ;)
WiFi is no fault of the BSDs. Vendors have only recently started acknowledging Linux and are finally shipping Linux blobs. But if vendors don't ship FreeBSD binary blobs, what are you supposed to do?
If this was five years ago, WiFi was ghetto on Linux too.
> If this was five years ago, WiFi was ghetto on Linux too.
That's true of many wifi vendors such as Qualcomm, but Intel wifi (which the post's author uses) worked just fine on Linux five years or even almost a decade ago, in my experience.
I once worked on a site that was aimed only at Mac users, and this was back in the pre-Chrome days when Safari was the browser that was really pushing the web forward with stuff like Canvas and CSS animations, so we made maximum use out of every bleeding-edge feature.
One day I figured I'd test it in IE just to see how broken it was (I can't remember if IE even had support for transparent PNG yet) and when I tested it, our site would literally crash IE. As massive Mac fanboys, we decided this was a feature.