Jokes aside, Zig is still evolving rapidly, which is why it's not 1.0 yet, but that doesn't mean you can't write safe and performant applications right now.
Zig is also a rather simple and straightforward language (like C) and has powerful compile-time code generation (like C macros, but without the awful preprocessor).
I'm more worried about compiler or stdlib bugs. In theory you can do lots of things; in practice there are all sorts of hidden limitations and bugs that only get noticed once a software product is past 1.0 and has been out in the wild for half a decade or more.
Because Windows does not have a good SSH implementation, and PuTTY has always worked extremely well for me as a serial and SSH terminal (also, it starts up instantly and has never crashed on me).
I like having a library of hosts to choose from and maybe multiple tabs in one place, and although there are some slightly less cumbersome PuTTY frontends like KiTTY (please keep your expectations very very low), I'd rather use WinSCP (no quantum leap in usability either). Edit: to those suggesting the W10 command line - yes, it's there and works, but it's just that, a command line, not much help when you have dozens of servers.
I do use it 99% of the time, and only in quite specific cases do I use the host machine's native OpenSSH on Windows - mainly because my environment (dotfiles, command-line prompt, shell history, and so on) lives in WSL.
Genuine question: what do you consider "trusted" code/apps? What difference is there between compiling from source and using the prebuilt official Docker image?
Please note that Docker and LXC (and LXD by extension) are essentially the same technology packaged differently, with different yet overlapping use cases; in fact, early Docker Engine was based on LXC before they switched to containerd.
Therefore I don't think there is anything that can run on LXC/LXD but not on Docker (and vice versa); it's more a matter of preference, whether you want a long-lived persistent virtual system (LXC/LXD) or ephemeral single-application containers with optional persistence (Docker).
Still, nothing stops you from using LXC/LXD for ephemeral containers and Docker for long-lived systems, but you'd be using a suboptimal tool.
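As a sketch of the two workflows (the image names and container name below are arbitrary examples, and the commented commands assume LXD and Docker are already installed):

```shell
# LXD: launch a long-lived system container and manage it like a small VM.
#   lxc launch ubuntu:22.04 mybox
#   lxc exec mybox -- bash
#
# Docker: run an ephemeral, single-application container.
#   docker run --rm -d -p 8080:80 nginx
#
# Quick check of which CLI(s) are actually available on this machine:
for tool in lxc docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not installed"
  fi
done
```

The conceptual split is the same as above: `lxc launch` gives you a persistent system you enter and mutate, while `docker run --rm` gives you a disposable process that is recreated on each change.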
This summarizes it well. Sometimes you run into apps that don't ship Docker containers because they expect more than just sitting behind a reverse proxy. For example, while there are third-party Docker containers for the Unifi controller, the vendor only supports using their apt repos. The controller expects to be able to send low-level discovery packets and not just HTTP. LXD makes that really easy.
Also, from a development standpoint, sometimes a long-lived container environment is easier. I run zigbee2mqtt inside of LXD because if I want to try a PR, it's `git checkout … && npm ci` and not building a whole container each time.
For the home NAS and server case, I've been really happy with Ubuntu Server with zfs + lxd + docker. And an LXD VM for Home Assistant OS. Basically, the right tool for each job and no worries trying to force software into an environment its developers don't expect.
I have yet to find a use case for WSL, when you can't interact with hardware devices and working with the filesystem has its (performance) limitations.
Docker in WSL2? Yeah sure, but I'd rather install a VM or connect to a remote Docker host.
MinGW/MSYS2 have been around far longer and have none of those issues... Then again, MSYS2 does not have a Linux kernel running under it but that shouldn't be a problem unless you want to run containers.
I love WSL because it makes Windows just as nice to use for development as Linux / macOS. No weird janky "this is buggy on Windows" stuff, because as far as the app is concerned it's just a Linux VM.
I've never looked back since WSL came out... I'm not promoting Windows, but growing up in a corp environment, Windows is where my comfort zone was... I tried switching to TextMate on Mac back in 2007 but really hated not having a direct map of the various CTRL-key shortcuts. The closest I got to the Windows experience was on Linux Mint using XFCE... but again, it was a pain any time I was sent MS Office docs. Sublime/WSL has been my go-to since it dropped! Slowly getting used to VSCode as well.
I'm not sure why you're being down voted. This is my exact use case as well. For what I do (front end development) it's incredibly nice to have a Linux command line for most things. I'm stuck on Windows due to legacy .NET Framework apps, so when I have to dip back into Windows I can
>With WSL2 I got rid of VM's for local development.
No you didn't, you just let Windows hide them from you. WSL2 is essentially a Hyper-V VM running Linux with some programs bridging between it and the Windows host -- or two such VMs with some additional bridging (for Wayland and audio) if WSLg is enabled.
If you're a WSL user, you have it running 100% of the time.
On a serious note, even `findstr` would be enough, but the goal was to show interop in action, not to choose the best grep-like solution.
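A minimal example of that interop (this assumes a default WSL distro is installed; `app.log` is a hypothetical file):

```shell
# From PowerShell or cmd.exe, call a Linux tool inside WSL:
#   wsl grep -n "ERROR" app.log
#
# The reverse also works: from a WSL shell, call a Windows binary by name:
#   findstr.exe /N "ERROR" app.log
#
# The grep invocation itself, runnable anywhere grep exists:
printf 'starting up\nERROR: disk full\n' | grep -n "ERROR"
# → 2:ERROR: disk full
```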
On another serious note: I can ensure and enforce standardization on having WSL across teams and departments and be sure such snippets work. Standardizing on ripgrep probably isn't worth it.
For a solo/indie technician that makes no sense, and ripgrep can be better, of course.
I see it in a similar way: Docker extended existing tech and made it _easy_ for _end users_ to use namespaces, overlayfs, port forwarding, and distribution (Docker Hub); WSL/WSL2 on top of Linux solves other problems, one of them being corporate-friendliness (among others).
The same way a JS dev uses Docker without giving a damn that there's a whole Unix-compatible system (Linux) under the hood - he just cares about getting Nginx-something from Docker Hub, effectively making Docker his infrastructure platform - only a small fraction of the crowd cares that WSL2 uses Hyper-V under the hood.
If you ran a poll on what kind of virtualization is needed to run WSL2, I bet the most common answer would be "virtua what?"
It's awkward when you're used to text pipelines, but the object-oriented output is just so much nicer to parse and interact with for me that after spending some serious time with PowerShell I am a huge fan. Being able to use the CLR directly and any .NET packages is killer.
> > the filesystem has its (performance) limitations
> It's a normal Linux filesystem, there's no difference in performance vs. a full-fledged Linux VM. (which it is anyway)
> Are you referring to WSL1?
WSL2 just exchanged the performance problems with the Linux guest environment's native filesystem for performance problems when the Linux guest accesses Windows' filesystem. Those are still there and they're still atrocious.
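The usual mitigation is to keep working trees on the Linux side (under `$HOME`) and touch `/mnt/c` only when you must. A small heuristic sketch (the function name is my own, not any official tool):

```shell
# Classify a path by which side of the WSL2 boundary it lives on.
# Paths under /mnt/<drive> cross the 9p bridge to Windows and are far
# slower for heavy file I/O (npm installs, git status, builds).
wsl_path_side() {
  case "$1" in
    /mnt/[a-z]*) echo "windows-mount" ;;
    *)           echo "linux-native" ;;
  esac
}

wsl_path_side /mnt/c/Users/me/src/app   # → windows-mount
wsl_path_side "$HOME/src/app"           # → linux-native
```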
I've done A LOT of performance tests of running ML tasks (via Docker) on WSL vs raw Ubuntu on the same hardware. The performance of WSL is about the same as native - some tasks are actually faster, some slower -- but net out to be the same. These tasks include: compiling a Docker image (1 hour), deinterlacing video with a NN, Open Whisper, etc.
Filesystem things are faster in WSL+Docker than native Windows for me. At least npm stuff. I'd rather skip the hassle of VM, and why use a remote Docker host? And same with MinGW. I like being able to just do stuff out of the box as if it was an Ubuntu install. Much more ergonomic, and can just follow whatever official guide for the tool I'm trying to use, without depending on someone making it possible to use in MinGW/MSYS.
So I kinda disagree, for all things you mention I prefer WSL.
Same here, Docker for Windows is slow, like VERY slow.
After I moved Docker to WSL2 my app is starting and running at least 10 times faster.
The difference is so huge that I wonder why it is even possible to use Docker natively on Windows file system. It's crazy.
I'm talking 60-90 seconds to bootstrap and render each page, every time, under the Windows filesystem, down to around 5s (yes, the app is very slow) under WSL2.
Yeah, you need Hyper-V to run a basic Docker configuration on Windows; I just simplified this too much, I guess.
The end result is that if you run Docker on Windows with Hyper-V alone, it's using the Windows filesystem and then "translating" this for Docker, making the whole process of accessing the files incredibly slow.
I used to work with Docker using Kitematic (https://github.com/docker/kitematic) in the past, which was basically a VirtualBox VM with Docker installed inside it, and since the VM was some kind of Linux, it worked reasonably well.
The same seems to be the case when using WSL2 - you run a VM and Docker inside this VM, removing the file sync/translation part from the equation, resulting in a speed boost.
> The same seems to be the case when using WSL2 - you run a VM and Docker inside this VM, removing the file sync/translation part from the equation, resulting in a speed boost.
I'm sure that this is true for most people out there, but for some random reason, WSL2 actually ran much slower for me than Hyper-V. It was one of those boring Laravel projects where you mount the application source into a container directory so PHP can execute them and serve you the results through Nginx with PHP-FPM or whatever you want, as opposed to just running something like a database or a statically compiled application.
Except that it was unusable with WSL2 as the Docker back end (regular Docker Desktop install), but much better with Hyper-V for the same goal. It was odd, because the experience of other devs on that exact same codebase was the opposite, WSL2 was faster for them. And then there was someone for whom WSL2 wouldn't work at all because of network connectivity issues.
I don't really have an explanation for this, merely an anecdote about how these abstractions can be odd, leaky or flawed at times, based on your hardware or other configuration details. That said, Windows is still okay as a desktop operating system (in the eyes of many folks out there) and I am still a proponent of containers, even if working with them directly inside of *nix as your development box is a more painless experience.
I was playing with Docker Desktop and Podman Desktop for the first time yesterday. Podman Desktop looks really good. I was impressed with how well it integrated with the docker engine I already had running, and it even had at least one feature I couldn't find in Docker Desktop (pulling an image from the GUI).
It's crazy, I've gone in circles between WSL, dual-booting Windows/Linux, VMs, everything on Windows, remote Linux servers, containers in Docker Desktop. Right now I'm back to just using Windows and ignoring Linux altogether, which seems to work best, with the exception of Python.
Windows desktop with full linux servers via SSH is the sweet spot for me.
I don't EVER find that I need some kind of *nix command on my local machine when developing, but that's also because I'm not afraid of PowerShell / stuck on bash.
> I don't EVER find that I need some kind of *nix command on my local machine when developing, but that's also because I'm not afraid of PowerShell / stuck on bash.
Have you sorted out how to get fuzzy tab completion in PowerShell, short of the buggy, menu-y integration available with fzf?
Do you find that everything you need is already in PowerShell, or do you sometimes run third-party command line executables? If the latter, how do you deal with the lack of man pages?
I use WSL2 for PyTorch and company because the newer (WSL) kernel has CUDA support out of the box if you have the NVIDIA drivers installed on the Windows side. It's actually extremely convenient.
Dug up my login for this site to thank you for this bit of info -- I'd been idly wanting to try some GPU-powered python scripts, and the setup on Windows made it unappealing. Now I can use 'em!
Earlier this year, I started a job where I had to use a Windows desktop for the first time in a little over a decade. For a while, I did as little in WSL as possible, and leaned hard on PowerShell, MSYS2, and Scoop. After a few months, I abandoned the 'native' tools in favor of WSL due to performance and compatibility issues. I wish MSYS2 were the solution, because at least then there'd be one (WSL sucks too).
I am still on WSL1 due to the filesystem performance with WSL2. I recently tried to move more of my workflow towards MSYS2 but various things keep breaking for me without obvious reasons.
The latest issue I encountered was that GNU parallel simply does not work. [1]
What I'm excited about is the prospect of nontechnical people being able to use a GUI like Docker Desktop to install and manage apps in a nicely sandboxed environment. I think Docker is still too targeted towards developers, but Docker Desktop is already a huge step forward in usability for people who aren't comfortable on the command line.
Are you imagining something like Flatpak for Windows? Don't Microsoft Store apps have some sandboxing? What's the end goal with the non-developer use case you have in mind?
I am not a developer but I find it useful to just be able to use rsync and a few other tools I am used to from Mac and Linux. Same reason I install Homebrew on Mac.
I would consider Reddit a bit more than a `tiny internet forum`, even though many topic-specific subreddits are way less useful than an actual forum would be...
The problem with mods is that some of them behave badly enough, going on power trips, banning people for their own enjoyment, and throwing tantrums, thus giving a bad rep to the whole category, including those who are actually good people with sane principles.
For sure, the point I'm making is that there's a multi party transaction here, with systemic complexity. Makes it hard to pin responsibility on just Cloudflare (or just the user or just the ISP, etc).
I'm not ignoring the context, I'm saying that it's irrelevant. Cloudflare made the choice to block real people based on factors outside of their control, and then to market that product as a panacea; they don't get to pass the buck, doubly so when they don't expose enough information to let other people fix the things they broke.
I can totally relate to this, even after working professionally with Angular for 3+ years and generally with JavaScript for way more than that.
I recently set up a quick JS project from scratch that needed a few different npm packages and ended up spending more time understanding the >three< different ways I had to import the different packages (because the returned errors were utterly useless) than actually writing the business logic...
Talking about low power, I am currently running a headless HP laptop which I recovered from a friend. This thing sports an i3-5005U CPU (2GHz 2c/4t) and idles at 4W measured at the wall!
Well, it's no powerhouse by any definition, but it can easily handle file sharing and torrenting (for my library of Linux ISOs, obviously) while staying powered on 24/7. The fan also turns off at idle, so it's totally silent.
What's funny is that this machine is currently resting on top of my decommissioned home server (Xeon E5-2697v2, 64GB RDIMMs, Supermicro X9) which idled around 100W...