Linux 5.6 (kernel.org)
488 points by SomeSnail on March 30, 2020 | 146 comments



I'm just a kernel newbie looking at small issues in my free time, but seeing my name in the shortlog feels very rewarding.


Nice, what did you do and how much time did it take? I've been hesitating to contribute, but it would be great to read a blog post by someone just starting out to see what it takes! It might be enough to get over the fear.


I only did some small fixes for building the kernel with the clang compiler.

Probably the most confusing thing when getting started is finding something to work on. There's no real centralized issue tracker/TODO list/ideas list - discussion happens mostly in mailing lists [0][1] for specific topics. If you follow the discussion on YOUR_FAVORITE_TOPIC long enough, you should be able to find easy work items, people able to help further, reviewers for your changes, etc.

In my case specifically, I found the great people at https://clangbuiltlinux.github.io, which is how I got started.

[0] http://vger.kernel.org/vger-lists.html [1] https://lore.kernel.org/lists.html
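For anyone curious what that looks like in practice, a minimal clang kernel build is roughly this (a sketch, assuming a checked-out kernel tree and a reasonably recent clang):

  # configure with defaults, then build the tree with clang as the C compiler
  make CC=clang defconfig
  make CC=clang -j"$(nproc)"

CC=clang has to be passed to every make invocation; mixing compilers between the configure and build steps is a classic source of confusing errors.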


Finding where to start was the precise issue I set out to solve by setting up that issue tracker.

And by being patient, leaving low-hanging fruit, and taking the time to file issues (explain, demonstrate, guide, enable), we've been able to help many people, including yourself, make their first contributions to Linux and LLVM.

I'm proud of you, Ilie!


Honestly, probably the hardest part of open source at the moment. Contributing to a project is, frankly, super intimidating. Doubly so when it is something as important as the kernel.


I agree. Finding where to start/how to help, and fear of jumping in/criticism, are the biggest barriers to open source contribution. Also, time/family constraints and lack of mentors.

Sink or swim!


Thank you for the help!


As an open source maintainer, I can tell you that we appreciate the small fixes and updates that come from people just getting their feet wet.

Even little things like spelling mistakes, or fixing a bug in an infrequently used command line option go a long way towards a better overall product.


As an open source user and (in the past) a contributor, I wholeheartedly agree with that. The fact that a piece of software has no bugs is what makes it a product I can rely on.

For example, I've been a KDE user for 20 years. And although it has come a long way, there's always this little bug here or there. Because of that, I can't recommend it to a non-IT user who's going to use it alone; I can't trust it 100%.

So OP is right: fixing little bugs makes an incredible difference.

(As far as KDE is concerned, kudos to Nate Graham, who's been looking at small issues for some time now, and that really makes a difference.)


If you cannot recommend KDE because "there's always this little bug here or there", you cannot recommend anything.

When people prefer Windows over Linux, or macOS over Windows, or Linux over macOS, it's often because they don't want to exchange their well-known set of bugs for an exciting and unknown new set of bugs.


It's not as simple as that. For example, on Debian buster, the Discover search bar doesn't work if used several times; the favorites in the start menu sometimes reappear after being deleted; when I quit KMail using the window's close button, KMail complains it exited in an unexpected way; etc. These are not little quirks; they are apparent in normal daily usage. I use Windows at work and it's past those issues.

And yet, I still prefer KDE, because I prefer the whole myth of its creation (I say it's a myth because 1/ there's not much of a story and 2/ even if it existed, I'm 100% sure I didn't read it). I also prefer it over Windows and OSX because it has things which are better: the start menu is very clean, Konsole works great, Okular is super good, K3b just works, ...

So I'm not criticizing the work done by KDE; it's just that the end result is 98% good (which for me, as an experienced dev/user, means 101%), not 99.9%.

And while I'm at it, installing the next stable Debian is always such an amazement to me. Things improve so much in stability and in those little bugs here and there. Having been a GPL/Linux/... follower for almost 30 years, I can measure that. And it's soooooo great now... Can't believe that I've been able to see the whole thing happen. And now the PinePhone thing is coming along...

(that was the emotional minute, thx for your attention :-) )


I had to log in to my Windows once. I had to install iTunes to back up my phone.

The button to install iTunes was not working. It threw a very generic error message. And that seemed to be the only way to install iTunes (Apple's web pages pointed to the Windows Store).

It took me hours to find and install an alternative third-party iTunes installer from random untrusted sources.

Windows, quality-wise, is simply unbearable for me in terms of usability, reliability, performance and UX.

Disclaimer: I have made small contributions to KDE.


Oh yeah, by the way, disclosure: Me too. :)


That's pretty awesome. I actually remember you; you had contributions to KDE Games. I used to maintain some of the KDE websites, including games.kde.org, when I was young.


"For Every Bloke Who Makes His Mark, There's Half A Dozen Waiting To Rub It Out" -- Andy Capp

Congrats on being in the former group. When (not if) apes from the latter come to mess you around, tell them to get stuffed.

(edit: typo)


> in my free time

Thank you! To make it even more rewarding for you :) - I remember there was a report mentioning that most of the work on the Linux kernel nowadays is done by salaried employees.


> most of the work on Linux kernel nowadays is done by salaried employees.

That's been the case for decades, probably since just after Linus released it.


It really depends on how you measure the value of the contributions. Simply looking at lines of code (LOC)? Looking at LOC but excluding drivers?

But even then: is every LOC to be seen as an equal contribution?


Lines of code are of negative value; adding more actively harms the project, all else being equal. Contributions are measured by what features were purchased with that harm (and some features are harmful in and of themselves, but Linus is pretty good at rejecting those contributions, if not the not-worth-their-LOC ones).


Why is that probably? What makes it probable?


Because every company that has libraries, products, stacks, or customers using Linux sooner or later runs into issues in Linux that they can't wait around for someone else to fix.

Big orgs have lots of people just working on Linux full time. They start out fixing one small thing, which then needs to be updated, optimized, etc., and then things snowball.


Because if you're a competent enough programmer to contribute to the Linux kernel, you're probably competent enough to get a job as a programmer.


I believe that by "salaried employees" they meant people who have Linux kernel programming in their job description.


Yes, I think you're right. In that case, it certainly wasn't the case back when Linux started. Intel and Red Hat are the biggest contributors according to the Linux Kernel Development Report from 2017:

  Intel 10,833 13.1%
  none 6,819 8.2%
  Red Hat 5,965 7.2%
So, "none", although it's the second biggest category, only accounts for 8.2% of contributions.


A lot of kernel developers hide who pays their bills.


Unless they also falsely declare their employers, it should not be more than 8.2%.


I see, and so you assume that that job would be the source of the contributions.

When I contributed to the kernel (in a rather minor way, I hasten to add), I was competent enough to get a job as a programmer but my job had nothing to do with the kernel. As far as I can remember that was typical for the people who contributed during the first five years or so. I didn't pay much attention after that.


Didn't he mean people paid primarily to develop for Linux, rather than people employed as developers who also develop for Linux?


I think a straight 'no' suffices. Linux most definitely wasn't a commercial project in the early years.


It should feel rewarding. You've contributed to and improved one of the most important pieces of software in the world!


Small issues are nonetheless important to many users all over the world, thank you for taking the time and using your knowledge.


Thank you for your hard work!


This made me smile for some reason, congrats and thanks!


Kudos, that's really cool! ;-)


Keep up the important work.


Congrats!


Well done!


If anyone else - like me - tried updating the kernel this morning and found that wifi with an Intel card was broken: it seems they broke the iwlwifi driver [1], and here [2] is the patch to fix it.

[1] https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.... [2] https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.g...
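In case it helps anyone who builds from source, applying the fix is roughly this (a sketch; iwlwifi-fix.patch is a placeholder name for wherever you save the patch from [2]):

  cd linux-5.6
  patch -p1 < ../iwlwifi-fix.patch
  make oldconfig && make -j"$(nproc)"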


Being worried about the current trajectory of macOS, I've been thinking more and more about Linux and imagining how far it must have come since I last used it in the early 2000s. Reading things like this really dampens my enthusiasm for trying Linux again.


Glitches like this are very infrequent if you run well chosen hardware. If you run e.g. an all-Intel setup which is a few months old, it's rare.

If you are afraid of things getting broken, simply use NixOS. All upgrades can be easily rolled back, and you can freeze updates of some packages, or get packages through different channels with different stability compromises. I prefer to get everything through rolling release channels, as then bugs come one by one.
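For the unfamiliar, a NixOS rollback is basically a one-liner (a sketch; generation numbers differ per system):

  # list the system generations available to roll back to
  sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
  # switch back to the previous generation
  sudo nixos-rebuild switch --rollback

Older generations also stay selectable from the boot menu, so even a bad kernel update doesn't require a working userland to undo.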

I've been running Linux on a MacBook Air 11 2012 for years, and everything worked out of the box from day 1. For the record, that's a pure Intel machine with the exception of a Broadcom wireless card. Said card works equally badly in Linux and macOS; I don't get why Apple keeps sticking to this brand.

I was hoping to upgrade to one of the new 2020 Airs, which are again providing terrific value, a great screen and keyboard. Sadly, the T2 chip makes things a bit difficult. It's getting support for Linux, but there are still many glitches. My Mac was broken by some stupid inspector in my rental property, and it's sadly not so easy to find a decent replacement with all COVID restrictions.

Another option is the Surface line, which is surprisingly well supported. The Surface Go, for example, works incredibly well as a tablet. All recent Surface machines work quite well in Linux.

Else, your best bet is a ThinkPad or an XPS. Unfortunately, Chromebooks are no longer so easy to reflash.


The downside of NixOS is that developing on it is hell incarnate. And saying "you just have to learn the Nix way and use nix-shell" is not a satisfactory solution; take it from someone who has tried twice.

Moreover, I gave up on NixOS the second time around because an update completely hosed my system, including all my rollbacks. Something with the X server broke across all snapshots.


> Glitches like this are very infrequent if you run well chosen hardware.

A sample of one here, but I never managed to get reliable WiFi on a NUC6i7KYK - hardware that should be included in the "well chosen" category.


Sometimes even this fails, what can I say.

Ubuntu runs a certification program. I wish there were an independent party certifying hardware for Linux, one that looked into all the details, such as battery discharge triggering ACPI events, 802.11 functions supported by the wireless card, and things like that.


> Sometimes even this fails, what can I say.

Not blaming you :D


That's weird, because I have the same HW (Skylake Skull Canyon NUC) and get WiFi good enough for a home media server (Arch).


They’re also infrequent if you hold off on updating immediately after a release. Just wait for distro/package managers to check things and release it to you!


I'm sorry this is the impression I created.

Ultimately we're talking about a .0 version here. That's not going to end up in a Linux distro without further testing.

My post was directed at people manually installing the latest kernels. That's not what the average "I want a Linux desktop" user should do. It's targeted at people who tend to play with their systems. It's what they expect.


It's pretty atypical to upgrade to the latest mainline Linux kernel the day it's released. Most distributions take time to pick up a new kernel and make it part of a major distribution release that is subject to more testing. The OP must be using a bleeding-edge rolling-release distro like Arch Linux or running a custom kernel. I wouldn't anticipate such problems when running a popular release-based distro like Ubuntu, Fedora, Debian, etc.


Arch doesn't even have 5.6 in staging, let alone testing, let alone core. It was released less than 24 hours ago; that's barely enough time for the maintainer to test it on their own machines. No distribution releases a .0 kernel after 12 hours of testing to anybody except people who have explicitly opted into testing. And at that rate, there's a good chance you'll get -rc kernels too, and those are full of bugs (that's why they're -rc).


My sense is that this kind of thing is typical for newer versions of the Linux kernel. Most people don't run the latest kernel release, and for day-to-day applications, running an older, more stable version of the kernel won't affect anything.

I quit MacOS a little over 10 years ago now. I used to be a huge Mac fan, even subscribing to MacAddict magazine and frequenting the Mac Rumors website.

It was a cult and I have absolutely no regrets leaving. Especially now that Bitwig runs on Linux, what would I ever need a Mac for?

I'm using Arch Linux and it's the most stable operating system I've ever used. I've gone more than 4 years with the same installation. Thanks to the Arch wiki, I now have working knowledge of things like systemd and how to compile kernel modules. These things really aren't nearly as scary as they sound. I didn't learn about them by reading the long-winded `man` pages... I learned about them from the Arch wiki!


> I'm using Arch Linux and it's the most stable operating system I've ever used.

lol

(from an arch user)


Okay, the lol is warranted... but in a certain way I'm serious: Ubuntu and MacOS were always changing things for me whenever I upgraded. Arch doesn't make any systemwide configuration changes. If some package has backwards-incompatible changes, I deal with those one at a time, but there's none of this "oh, we rearranged the entire organizational structure of your root directory, surprise!!!" stuff.

And because I'm always getting the latest stable versions of things, bugs often disappear magically, rather than lingering on until the next major upgrade.

And yeah, I mean you gotta follow some key rules, like never sync-installing anything without the `u` flag (i.e., no partial upgrades), etc.
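For non-Arch readers, that rule boils down to this (a sketch; some-package is a placeholder):

  # good: refresh the package databases AND upgrade the whole system together
  sudo pacman -Syu
  # bad: refresh the databases, then install one package against them while the
  # rest of the system stays on older library versions (a "partial upgrade")
  sudo pacman -Sy some-package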


When I started using Linux in the 90s, especially because I was young and enthusiastic, sure, I would grab tarballs from kernel.org and build them on the day of release.

That is simply not how it works anymore. Your distro vets the kernels. They patch them. They ensure that if that well-known Intel wifi bug is still around, the patch is applied (note it already exists!) or, better yet, is already in-tree. A late-breaking bug of that nature is not a huge concern for your everyday Linux user.


As for me personally, for the last few releases I've taken up the vice of waiting at Sunday midnight to build the new kernel release and see what's new and what's broken :)

Judging from the bug reports, there are still quite a few people building the latest kernels - mostly on power-user distros like Arch and Gentoo.

If you use Ubuntu, Debian, or CentOS, then yes, you're going to get an old and boring^W^W^W^W^W^Wstable kernel. But there are still enough of us around on the cutting edge, I guess.


> Reading things like this really dampen my enthusiasm for trying Linux again.

It shouldn't. If you use a major desktop distribution (e.g., Ubuntu), it will be many months before you are actually using kernel 5.6.x.


I recently put Linux on a MacBook Air. It's the same as it's ever been - extremely finicky and time-consuming to get all your stuff working smoothly. It requires scouring random bug reports, Stack Overflow, etc. to find all the little workarounds.

This is for what I consider critical stuff: network drivers, suspend/sleep, power management, display brightness etc.

My impression is that it is not any more or less of a mess than it was in the 2000s, or even in the late 90s.

All that being said, it’s still wonderful and gratifying once you have everything tuned to your liking.


What MacBook Air version?


Old!

It's a mid-2012 model. Core 2 CPU with a Broadcom 43xx.


If you're running Linus' master branch, yes, things are going to break from time to time.

Stick to LTS releases (which is what most distros use) and you'll be fine.


Most (99.9%) Linux users get all their updates via their distro. Not saying you can't get a bad update, but it's much less likely if you're a little patient.


Unless you're doing kernel development why would you be downloading and compiling the bleeding edge kernel on your desktop?



Pretty sure these are different issues.

The iwlwifi issue was introduced very recently with a security fix, as described by Phoronix. I was running kernel 5.6-rc6 for some testing recently and it didn't have this issue.


You may already do this, but bugs like these are the reason I usually recommend keeping two kernels installed. The "oh crap, did the latest release break X?" question is a lot easier to answer if you can just reboot into the LTS kernel.
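On Arch, for instance, that's just one extra package plus a boot menu refresh (a sketch; adjust for your distro and bootloader):

  # keep the LTS kernel installed alongside the mainline one
  sudo pacman -S linux-lts linux-lts-headers
  # regenerate the boot menu so both kernels appear as entries (GRUB shown here)
  sudo grub-mkconfig -o /boot/grub/grub.cfg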


Isn't that basically what every distro does these days by keeping 2+ kernels and a rescue kernel around?


Is there somewhere I can read more about that?


I just got a ThinkPad X1 Carbon and have similar issues with the Intel chipset for wifi and audio with newer kernels as well. It's the only ThinkPad I've ever had issues with.


I have the same notebook (2019 edition) with Arch Linux; everything is fine except audio, which won't work (basically the mic). Since I run into trouble every week or two after upgrading, I've already thought about changing the distribution to something more stable, like Ubuntu/Debian. But I doubt my mic will work there :-(


https://wiki.archlinux.org/index.php/Lenovo_ThinkPad_X1_Carb...

If this is yours (ThinkPad X1 Carbon Gen 7), there's a very recent fix that you can find there. It's a bit of a hassle, but it should work across more than just Arch-based distros.


It was probably just a config issue: for some reason, this morning the wrong mic was selected as the default, and the update may have played a role, since I don't remember changing it myself - but a quick look at pavucontrol solved the problem. PS: I'm using Arch Linux on a Lenovo too, but not a ThinkPad.


Unfortunately, it is not. I already tried various kernel modules to support (i.e. detect) the hardware. Apparently, there is a reason why Arch chose some specific compiler flags for the kernel. I don't really want to start building my own kernel (there's a reason I chose Arch, not Gentoo).


I ended up putting together a gist of all the stuff I had to do to get my gen 7 carbon x1 working: https://gist.github.com/dwaynepryce/5a3141dfcb3cb166cd54766f...


I'm also pretty disappointed with X1C7 Linux compatibility. I'm using Ubuntu 19.10 and it's not a smooth ride. I wish Lenovo had Dell Sputnik-like Linux support. Ubuntu 20.04 is said to have better support.


I've been trying to convert people to various flavors of Linux for 20 years now and the usual showstopper is wifi not working at all, or at least without a ton more fiddling. Interesting to see that some things haven't changed.

Trying to be constructive here: might it be because automated testing can't easily test things that touch the "real world"? Or is it due to the sheer number of possible wifi hardware implementations? Or both? (I really wish the vast majority of wifi antennas were hooked up directly to a generic SDR at this point.)


If you are pushing your new converts to install/update to bleeding edge kernel releases, you're really doing them a disservice and setting them up for failure.


Over the course of 20 years? I've seen this issue everywhere, from attempting an old mainstream Red Hat distro on an old ThinkPad years ago to any number of Ubuntu and other distro install attempts over the years.


> Trying to be constructive here, might it be because automated testing can't easily test things that touch the "real world"?

The Linux kernel still has embarrassingly little automated testing, and what is done is largely done by third parties.


That’s... actively stupid at this point


5.5: broken i915
5.6: broken iwlwifi
5.7: ...


Is i915 fixed?


It is for me on 5.6-rc7. Tried as both a module and built in.


Really hope that's the case for me as well. I've been experiencing unrecoverable gpu hangs on my i915 Skylake chip with recent kernels (5.2-5.5).


Yeap, same here. 5.5 was especially buggy for me, with frequent hangs while under heavy load. So far so good on 5.6.


Can anyone confirm this?


v5.5 is missing multiple urgent patches from v5.6-rc1:

https://gitlab.freedesktop.org/drm/intel/issues/1201


For what it's worth, I've been using 5.6 since the early RCs in the Fedora 32 pre-release for many weeks, and haven't had a problem with an Intel Dual Band Wireless-AC 8260. So not all Intel wireless is affected.

Fedora 32 looks to be on schedule for an April 21 release, pending release-blocking bugs getting fixed up by then, and it will be shipping with Linux 5.6.


Good to know. I've been using the rc releases for a bit without issue but will likely run into that immediately when upgrading.


I definitely appreciate how Linus mentioned his daughter identified him as a "social distancing champ" - many of us developers have been practising social distancing long before anyone heard of COVID-19 ;-)


Reminds me of this Penny Arcade comic with a similar sentiment about video game players: https://www.penny-arcade.com/comic/2020/03/25/bells


>many of us developers have been practising social distancing long before anyone heard of COVID-19 ;-)

I would even argue that we invented the internet to solve exactly that problem, keeping our distance from other meatbags :-)



Linux 5.6 is getting Wireguard, Linux 5.7 will be getting exfat. It's a great year for the kernel.


What is special about exfat? I see it's a variant of FAT32 that allows files larger than 4GB and is mainly used for USB drives? Does this mean it will improve interop with Windows?


No, what's special about exfat is that it will improve interoperability with embedded devices like digital cameras. Many embedded devices which use SD cards or similar understand only FAT32 (for smaller cards) or exFAT (for larger cards).


ExFAT is the standard file system on SDXC cards. And it is patented. So it is both very relevant and encumbered.


Linux could already interop with Windows via ntfs-3g. EXFAT is great for interop with both Windows and macOS on the same filesystem, as well as interop with digital camera storage.

You should prefer a different filesystem if possible, though. EXFAT lacks features like journaling that protect against corruption. But EXFAT support is great for when you don't have a choice.
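For completeness, creating and mounting an exFAT stick looks like any other filesystem (a sketch; mkfs.exfat comes from the exfat-utils userspace package, and /dev/sdX1 is a placeholder for your partition):

  # create an exFAT filesystem with a volume label, then mount it
  sudo mkfs.exfat -n MYUSB /dev/sdX1
  sudo mount /dev/sdX1 /mnt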


One special thing about exfat is that when it gets corrupted you are screwed.


In contrast to ... ?


And adding to what guerrilla said, it seems to go bad more easily than FAT32, at least in my experience. I only have anecdata for this, but old files appearing as 0 bytes, filesystems that decide they won't mount again - things that I've only had, or seen on other people's systems, with exFAT.


Filesystems with redundant superblocks and journals? That's most modern ones, I think.


Most file systems use journaling[1], which allows you to recover from corruption. Some use copy-on-write[2]--most prominently zfs, but also btrfs and, more recently, bcachefs--which almost entirely prevents corruption in the first place.

1: https://en.wikipedia.org/wiki/Journaling_file_system

2: https://en.wikipedia.org/wiki/Copy-on-write#In_computer_stor...


I use it when I need to move large files on USB between Linux, Windows, and macOS, since NTFS support on macOS is such a nightmare.


Yes, especially since it was contributed by Microsoft and Samsung.


It's the most advanced file system that's supported on all the other major desktop operating systems (Windows, macOS). Linux is only catching up here. So it's becoming the ideal file system for USB sticks. Even if you just care about Windows and Linux, NTFS is not a safe option for sharing data. I've had too many issues with the available drivers for Linux.

Even if you are just sharing between Linux computers, ext2 is not a good option, because uid/gid values are inconsistent between computers. ext2 wasn't made with this use case in mind.


Lots of consumer electronics already support it.


While announcing Linux 5.6 is relevant to HN, the particular mailing list message concerns differences from the previous release candidate.

Can someone please explain, in layperson's terms, what has changed from Linux 5.5 (or 5.4) to 5.6?


One big thing is that from now on, Wireguard is included in the official Linux kernel.

https://lists.zx2c4.com/pipermail/wireguard/2020-March/00520...


https://kernelnewbies.org/LinuxChanges - this wiki page currently covers 5.5, but it should be updated to 5.6 in the coming days.


The version-specific pages are available in draft form before the main one gets updated: https://kernelnewbies.org/Linux_5.6


OK, this is the 2-page feature overview:

https://www.phoronix.com/scan.php?page=article&item=linux-56...

Enjoy.



The LWN.net announcement post (https://lwn.net/Articles/816213/) has an overview and links to a few more detailed summaries.


I'm feeling a noticeable speed improvement on my ThinkPad running Focal Fossa, and hearing variable fan sounds that I'm not used to on Linux as well. I feel like the fan has been much more active when running Ubuntu as opposed to Windows. Hopefully these new fan sounds hint at less fan usage long-term.

Once Fossa gets out of beta and some of the weird GNOME bugs clear up for that release, this will probably be the best Linux/Ubuntu performance in years.


"So we'll play it by ear and see what happens. It's not like the merge window is more important than your health, or the health of people around you."

Very nice.


Upgraded to a system with an X570 motherboard and an RX 5700 XT graphics card... for the most part, since kernel 5.3, every release has been more stable and mostly better performing... and though having to manually download drivers a couple of times from the kernel git repo has been a pain, I really appreciate the effort.

-----

Rant ahead...

If only the front end could button up more... GNOME's blank screen + password prompt seems to have a bug that I didn't have the time/ability to overcome, so I switched to KDE/Plasma, which IMHO is really disjointed as an experience in so many ways. I spent about 4 months mostly in Linux until, a few weeks ago, I had to switch back to Windows.

As to why the switch back: I couldn't use a VM, as most of the VM software didn't yet support the ahead-of-LTS kernel I was using, and I needed that kernel for hardware stability.

I will say the efforts of the kernel team consistently impress me, and they are a big part of why I target Linux and/or Docker for most of the work that I do. Docker+WSL2 has been nearly the best of both worlds for me. I still have Linux on another drive, so I will probably give Elementary's UI a try next.


> As to why the switch back, I couldn't use a VM, as most of the stuff didn't yet support the ahead of LTS kernel I was using, and I needed that for hardware stability.

Did you know that Windows VMs work quite well in KVM, and GNOME Boxes is a nice GUI to just install and run a Windows VM? Just asking, because for me it's the first time I'm not using VMware or VirtualBox but still have a Windows VM for the occasional Windows-only software (though honestly, it's not like I do lots of stuff within Windows).


Unfortunately, AMD video cards don't really support PCI passthrough - I'm in the same boat as OP with an RX 5700. The issue has been present for several years and is somewhat fixed by community patches, which makes me suspect that this is being intentionally overlooked by AMD so as to not allow their consumer video cards to be used in the data center.


I used the RX 5700 XT with the community patch for months in a PCI passthrough setup and had zero issues with the reset bug. It's definitely an option for consumers if you ask me.


And when I tried to install KVM, it errored out because the kernel I was using was unsupported, and it didn't install - which is specifically what I pointed out. Same for VirtualBox. I didn't go with VMware, as I didn't want to outlay the cost of VMware Workstation.


VMware Player is free for non-commercial use. https://my.vmware.com/en/web/vmware/free#desktop_end_user_co...


You cannot create a VM in VMware Player. And since this would be for work projects, it wouldn't be non-commercial use.


Unless you mean something different... yes you can. I'm using it right now.


I didn't know that you have to "install" KVM... isn't it Linux kernel based, and shouldn't it therefore not have that problem?


You should perhaps try other desktop environments/window managers. The Linux desktop world definitely does not simply end at GNOME and KDE. And very few people seem to be enjoying GNOME much at all; it just seems like a poorly designed environment.

Nowadays I feel like throwing up at the thought of having to switch back to Windows, even just in terms of desktop environment and interface. There are just so many more things one can do with Linux, so many ways to set up everything the way I want, and so many weird Windows quirks I'd have to deal with.

The amount of extensibility and the possible combinations are especially amazing. If I like a certain program or widget from one desktop environment, the chances are that I can just run that program while running another desktop environment, thus combining the total experience into the one that I want. Not every single program can be freely combined, of course, but many of them can.


I understand this... however, I have limited hours in the day and actual work that pays me to get done. As a result, I have limited time to experiment and try any number of desktop environments. I also have a limited number of projects that require Windows, and because I needed an ahead-of-mainline kernel for my hardware, the VM software options wouldn't install. Again, limited time.

I'd much rather not be using Windows... this time around, I tried WSL2, and Docker now runs in it as an option. I am surprised it works as well as it does. This means, practically speaking, I can use VS Code against my Linux kernel/environment running under Windows to do my actual work... working in Linux, running in Windows.

I also specifically stated I would likely return and give other Linux desktop environments a chance. I have two NVMe drives in my system; one has Windows, the other Linux. I didn't even run the Windows environment after install for 4+ months, until I needed Windows and couldn't get a VM working.

----

I have run ElementaryOS in the past and really liked it, and I will probably give Pantheon a try next, without a full reinstall... It's not as good as macOS's UI/UX, but IIRC it was pretty nice to use in practice.


> And very few people seem to be enjoying Gnome much at all,

Happy people don't comment as often as people who have complaints.

I personally have used GNOME for a decade and a half, and it continues to Just Work for me, and get better with each release.

I'm not suggesting that nobody has issues with it, but the idea that nobody likes it is certainly untrue. There's a reason it's the most popular default environment installed by distributions.


Not GNOME 3, right? It is the old GNOME that has "just worked", and I was talking about the new one, which everyone hates despite Ubuntu trying to push people into using it.


(You might consider approaching things with less flames. Also, no, that's not historically accurate: Ubuntu didn't push GNOME until quite recently; they pushed their own Unity environment for a long time, and only recently switched to GNOME to better align with other distributions. And as mentioned in the comment you're replying to, no, "everyone" doesn't hate GNOME 3. Some people do. Loudness does not indicate quantity or proportion.)

Yes, I mean GNOME 3, which Just Works for me, far better than GNOME 2 ever did. Before that, I ran GNOME 2, which was far better than GNOME 1. And before that I ran GNOME 1.

The parallels between GNOME 2 and GNOME 3 are remarkable: GNOME 2 was also a major new version, much simpler with less configuration options and more focus on good out-of-the-box behavior, people hated it at first for daring to change at all, in addition to legitimate criticisms that they'd removed too much, and after the first couple of versions they found the right balance and the result turned out much better.


I always disliked Linux on the desktop until I discovered i3 (and now sway). Simple, productive, easy to understand and thus easy to extend. Gnome and KDE feel like bad attempts to recreate macOS or Windows.


It's been a decade of despair in desktops, and even more maddeningly so for Linux.

Windows tried to commit business suicide with their desktop, and Linux could have made such massive strides in that time.

Instead the desktop balkanization is even worse, Ubuntu disrupted things with Unity, GNOME and KDE went through full version rewrites.

OSX is also stagnant (the Finder is terrible; basic apps like media playing and image viewing are either missing basic features or saddled with forced ecosystem tie-ins; the mouse cursor and keyboard cursor disappear; basic shareware utilities are routinely made incompatible between releases, and that's apparently a business shakedown strategy rather than for technical reasons).


There are many Linux desktop environments that work great; I've been using them and am very happy with them. I am not sure what you mean by "decade of despair".


Didn't they "work great" 10 years ago?

Then why did Ubuntu drop GNOME for Unity, then drop Unity for GNOME? Why the big rewrite of GNOME 2? Why is Wayland rewriting everything?

Multiple-monitor support sucks. Graphics driver integration sucks.

Number of users is still incredibly small.

A decade of despair.


I also have a particular dislike for disjointed UI experiences, and I like sane and customizable desktop environments. From that perspective, having tried a whole bunch of them (except tiling ones, as I'm too lazy to memorize shortcuts), I have found XFCE to be the right balance (for me). Lightweight, sensible defaults, snappy, and sufficiently customizable. I personally like Papirus icons + Numix style + Tahoma as the default font.


I had to disable the lock screen on GNOME/Ubuntu to get it to work, so I feel you there. I am on Focal Fossa, and after a full update the lock screen bug was actually fixed, although I have it disabled anyway. Just FYI.


I may give GNOME a try again after the update releases... I had it configured the way I wanted, but the lock screen not going away, and sitting on top of everything, was a huge fail - and all the more annoying now that I'm using my desktop more (since WFH due to the human malware).


Does anyone have any idea/info about the status of upcoming copy-on-write Linux filesystems like bcachefs or btrfs?


I hope WireGuard improves its tooling. It feels a bit crude to me.


The tooling right now pretty much directly exposes the underlying netlink API, so it can seem a little clunky. wg-quick does go some way toward making it more user-friendly.

WireGuard has been very proactive about adopting useful abstractions. Personally, I've used https://github.com/WireGuard/wgctrl-go to build https://github.com/jamescun/wg-api, a JSON-RPC server for WireGuard, so expect the tooling to improve over time.


I think that people are looking for Wireguard to be something that it isn't. It's a fancy Ethernet cable, that's all. You can build a lot of neat things on top of Ethernet cables, like full networks. But it's not the cable that builds the network, it's just a low-level tool for facilitating that construction.

If you want to do something like make a point-to-point VPN, then wg-quick is that layer on top. If you want to build a mesh network from endpoints authenticated with a third-party identity provider, you want something like Tailscale. Wireguard is just a low-level building block that handles the crypto in those use cases; making something high-level is left as an exercise to the reader.
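To make that concrete, the whole wg-quick layer is driven by a small INI file. A hypothetical point-to-point client config (every key, address, and endpoint below is a placeholder):

  # /etc/wireguard/wg0.conf
  [Interface]
  Address = 10.0.0.2/24
  PrivateKey = <client-private-key>

  [Peer]
  PublicKey = <server-public-key>
  Endpoint = vpn.example.com:51820
  AllowedIPs = 10.0.0.0/24
  PersistentKeepalive = 25

`wg-quick up wg0` then creates the interface, assigns the address, and installs routes for the AllowedIPs; `wg-quick down wg0` tears it all down.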


I felt the same, so I made https://github.com/naggie/dsnet -- just a simple CLI tool to add/remove peers and generate config.


Depends on what you are doing. For my use cases, I strongly preferred the WireGuard tooling to the OpenVPN tooling, but the WireGuard ecosystem is still missing many features.


I guess so. But at the same time, after setting it up to connect into my home network, I'm just surprised at how well it works.


You can use networkd to make it easier - if you like using systemd, that is.


Care to elaborate?


You can declaratively manage WireGuard using networkd:

https://wiki.archlinux.org/index.php/WireGuard#Using_systemd...
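Roughly, that means splitting the tunnel into a .netdev file (the WireGuard device and its peers) and a .network file (addressing), which systemd-networkd brings up at boot. A sketch with placeholder keys and addresses:

  # /etc/systemd/network/99-wg0.netdev
  [NetDev]
  Name=wg0
  Kind=wireguard

  [WireGuard]
  PrivateKeyFile=/etc/systemd/network/wg0.key

  [WireGuardPeer]
  PublicKey=<server-public-key>
  Endpoint=vpn.example.com:51820
  AllowedIPs=10.0.0.0/24

  # /etc/systemd/network/99-wg0.network
  [Match]
  Name=wg0

  [Network]
  Address=10.0.0.2/24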


That’s nice.




