DvdGiessen's comments

When I switched to daily driving Linux I created a VFIO Windows VM with my GPU and a USB controller (and thus audio and inputs) passed through to it, which gives near-native performance. It takes maybe 10 seconds to boot into, and I can easily access my files from the Linux host running underneath. I recently added a VirtioFS mount so I can store my games on the Linux filesystem instead of inside the VM disk image. I've started a few games and benchmarks on it to confirm it runs great.

The VM runs under libvirt and virt-manager, with QEMU underneath and a custom hook script that makes passing through the hardware a bit more seamless.
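For the curious, such a hook script usually lives at /etc/libvirt/hooks/qemu, which libvirtd calls with the guest name and operation as arguments. A minimal sketch of the pattern (the guest name and PCI addresses here are made-up examples, not my actual setup):

```sh
#!/bin/sh
# /etc/libvirt/hooks/qemu -- libvirtd invokes this as: <guest> <operation> <sub-op>
# Sketch: detach the GPU from the host before the VM starts, reattach after it stops.

GUEST="$1"
OP="$2"

if [ "$GUEST" = "win10-gaming" ]; then
    case "$OP" in
        prepare)
            # Hand the GPU (and its HDMI audio function) over to vfio-pci
            virsh nodedev-detach pci_0000_01_00_0
            virsh nodedev-detach pci_0000_01_00_1
            ;;
        release)
            # Give the devices back to the host drivers
            virsh nodedev-reattach pci_0000_01_00_0
            virsh nodedev-reattach pci_0000_01_00_1
            ;;
    esac
fi
```

Find your own device addresses with `lspci -D`; a real script typically also handles things like stopping the display manager for single-GPU setups.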

Although with how good Wine/Proton and the surrounding ecosystem are these days, I have so far played almost all my games on Linux. I created the VM setup because I thought I'd need it, but it turns out I didn't really. I think I've played through 20-30 games now with minimal issues on Linux, including big-budget AAA games within a few days of their release, smaller indie games, all kinds of different ones. Most tinkering was for older games that would need similar tweaks on modern Windows as well.


I assume not, based on your comment, but have you ever run into issues where your VFIO setup didn't work with AAA games with intense anti-cheat, like CS:GO/CS2 or Valorant? That's what's always held me back.


Many, though not all, of these can be avoided with edits to the libvirt XML. Some anti-cheats just rely on seeing Hyper-V extensions; others check more indicators.
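As a rough illustration, these are the kinds of domain XML tweaks people commonly use to hide the obvious VM indicators (edit via `virsh edit <vm>`; which ones a given anti-cheat actually checks varies):

```xml
<features>
  <hyperv mode='custom'>
    <!-- Present a non-default hypervisor vendor id to the guest -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM hypervisor signature -->
    <hidden state='on'/>
  </kvm>
</features>
<cpu mode='host-passthrough'>
  <!-- Don't advertise the hypervisor CPUID bit -->
  <feature policy='disable' name='hypervisor'/>
</cpu>
```

This is a sketch of the general approach, not a recipe for any specific game.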

Since you mention CS: I had ESEA working, and they're notably difficult to bypass. Valorant was the only game I tried that I couldn't figure out. Destiny was fine.

Past tense because I've chosen liberty instead: not playing anything with invasive anti-cheat, dropping Windows, and running Linux full time.


I haven't so far. CS:GO and now CS2 run natively on Linux, so never played those online in the VM. I hear Valorant's anticheat is trouble, but I haven't played it (and probably won't for that reason; personally I have so many other games left to play I'd probably just do that instead of having to install Windows on a separate boot drive).


One I remember is [0], mostly because of the excellent accompanying blogpost about how it works [1].

[0]: https://piratehearts.itch.io/supercrt [1]: https://www.gamedeveloper.com/programming/crt-simulation-in-...


In production on SmartOS (illumos) servers running applications and VMs, on TrueNAS and plain FreeBSD for various storage and backups, and on a few Linux-based workstations. Using mirrors and raidz2 depending on the needs of the machines.

We've successfully survived numerous disk failures (a broken batch of HDDs giving all kinds of small read errors, an SSD that completely failed and disappeared, etc.), and were in most cases able to replace them without a second of downtime (it would have been all cases if not for disks placed in hard-to-reach spots; those needed only a few minutes of downtime to physically swap).

Snapshots work perfectly as well. Systems are set up to automatically make snapshots using [1]: on boot, on a timer, and right before potentially dangerous operations such as package manager commands. I've rolled back after botched OS updates without problems; after a reboot the machine was back in its old state. I've also rolled back a live system a few times after a broken package update, restoring the filesystem state without any issues. Easily accessing old versions of a file is an added bonus which has been helpful a few times.

Send/receive is ideal for backups. We are able to send snapshots between machines, even across different OSes, without issues. We've also moved entire pools from one OS to another without problems.
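The basic shape of that workflow, for anyone who hasn't used it (pool, dataset, and host names here are made up):

```sh
# Take a snapshot and ship it whole to another machine;
# the receiving end can be a different OS entirely
zfs snapshot tank/data@backup-2024-01-01
zfs send tank/data@backup-2024-01-01 | ssh backuphost zfs receive backup/data

# Later, incrementally send only what changed since the last snapshot
zfs snapshot tank/data@backup-2024-02-01
zfs send -i tank/data@backup-2024-01-01 tank/data@backup-2024-02-01 \
    | ssh backuphost zfs receive backup/data
```

Tools like zfs_autobackup automate exactly this snapshot-and-incremental-send loop.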

Knowing we have automatic snapshots and external backups configured also allows me to be very liberal with giving root access to inexperienced people on various (non-critical) machines, knowing that if anything breaks it will always be easy to roll back. That encourages them to learn by experimenting a bit, to the point where we can even diff between snapshots to inspect what changed and learn from that.
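That diffing is a one-liner (dataset and snapshot names are examples):

```sh
# What changed on this dataset since the snapshot was taken?
zfs diff tank/home@before-experiment tank/home
# Output marks each path with M (modified), + (created), - (removed), R (renamed)
```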

Biggest gotchas so far have been on my personal Arch Linux setup, where the out-of-tree nature of ZFS has caused some issues like an incompatible kernel being installed, the ZFS module failing to compile, and my workstation subsequently being unable to boot. But even that was solved by my entire system running on ZFS: a single rollback from my bootloader [2] and all was back the way it was before.

Having good tooling set up definitely helped a lot. My monkey brain has the tendency to think "surely I got it right this time, so no need to make a snapshot before trying out X!", especially when experimenting on my own workstation. Automating snapshots using a systemd timer and hooks added to my package manager saved me a number of times.

[1]: https://github.com/psy0rz/zfs_autobackup [2]: https://zfsbootmenu.org/


> Systems are set up to automatically make snapshots

I do that with sqlite to keep a selection of snapshots from the last hours, days etc.

https://github.com/csdvrx/zfs-autosnapshot


A workaround that works locally (and only locally) is to build zfs-dkms (AUR) yourself and modify it so it doesn't break on these GPL-only symbols. While the licenses would forbid distributing it as such, you can do so on your own machine just fine.

And if your root is on ZFS, make a snapshot before updating! I set up a pacman hook[0] which runs zfs_autobackup[1] which automatically manages snapshots, so I can always easily roll back to a non-broken state. The ZFSBootMenu[2] bootloader makes that extremely fast without even needing a bootable USB-drive. :)

[0]: https://wiki.archlinux.org/title/Pacman#Hooks [1]: https://github.com/psy0rz/zfs_autobackup [2]: https://zfsbootmenu.org/
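For reference, a pacman hook is just a small INI file under /etc/pacman.d/hooks/. A minimal sketch of the idea, using a plain `zfs snapshot` for simplicity (the file name and dataset are hypothetical; my own hook runs zfs_autobackup instead so retention is managed automatically):

```ini
# /etc/pacman.d/hooks/00-zfs-snapshot.hook
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Snapshotting root dataset before transaction...
When = PreTransaction
Exec = /usr/bin/sh -c '/usr/bin/zfs snapshot zroot/ROOT/default@pacman-$(date +%s)'
AbortOnFail
```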


See also the recent talk the author gave at FOSDEM last month[1] for an introduction to the tool and its use cases.

[1]: https://fosdem.org/2023/schedule/event/bintools_poke/


IMO one of the most humorous talks at the event! :)


Actually, you can use it on Android, but it does require using Firefox Nightly[1].

That does come with downsides such as Nightly being less stable and installing extensions requiring some extra setup, so it isn't the best choice for everyone. But I myself am using Stylus (which works perfectly fine) and some other not officially supported WebExtensions on mobile this way and can recommend it if you don't mind the downsides.

[1]: https://blog.mozilla.org/addons/2020/09/29/expanded-extensio...


As a long time user of Sandboxie, I'm excited to see this announcement and am looking forward to the open source release and what the community might be able to do with it.

Sandboxie's technology works extremely well for securely isolating all kinds of interactive Windows GUI apps, and might thus be an interesting alternative to Microsoft's own Windows Container technology, which is more focused on servers and can't really do GUIs.

I'd love to see some experiments using Sandboxie sandboxes as Docker-style images/containers. Packaging a complete GUI app including dependencies and making it easy to run on another Windows machine without polluting it, without noticeable overhead, neatly integrating like you'd expect of a Windows app with things like window management or the clipboard, and all that while being securely isolated from the rest of the machine.


In high school I used VMware ThinApp to portably run Windows applications without admin privileges; I think it worked in a similar way.


I miss ThinApp for making portable apps


As it turns out, your post triggered this crappy AI's reply_with_obligatory_xkcd() subroutine.

https://xkcd.com/876/


You could look into streaming parsers such as Oboe.js, which specifically supports the use case of parsing JSON trees larger than the available RAM[0]. Then again, if you're loading such huge JSON files into a 32-bit instance of Chrome, you should probably look for a totally different solution to your problem.

[0]: http://oboejs.com/examples#loading-json-trees-larger-than-th...
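The Oboe.js pattern looks roughly like this (the URL and node path are made-up examples): you react to each matching subtree as it streams in, instead of buffering the whole document.

```javascript
oboe('/api/huge-export.json')
    .node('records.*', function (record) {
        handle(record);    // process one record at a time as it arrives
        return oboe.drop;  // tell Oboe to forget the node, freeing memory
    })
    .done(function () {
        console.log('finished streaming');
    });
```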


I like the idea of sharing clean URLs. I use the Pure URL add-on[0] for Firefox, which also removes garbage such as e-mail campaign tracking values, both for the purpose of sharing and because I prefer seeing and visiting clean URLs myself.

The Chrome implementation using the canonical URL has some edge cases which may be useful to handle, like the fragment, which isn't part of the canonical URL for the resource yet can be useful to share. Given that this change only affects the share dialog and not copying from the URL bar, that doesn't seem like a huge problem though.

[0]: https://addons.mozilla.org/en-US/firefox/addon/pure-url/

