Hacker News
Linux 5.10 (kernel.org)
354 points by jrepinc on Dec 14, 2020 | 143 comments



I use a USB host-side switch [0] to quickly move my keyboard and mouse between my desktop and my laptop's docking station, since my 4k monitor already supports multiple inputs, and 4k capable KVMs are expensive and buggy.

In order to make the switch, I push one button on the USB switch, and then select the next input on my monitor (four button presses using the monitor's OSD). There's a great little tool called ddcutil [1] that allows you to send commands to your monitor via the i2c bus embedded in your HDMI/DisplayPort connection, so I tried to write a script to do the monitor input swap on either a key binding or after detecting the appropriate USB event.
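For the curious, the input-switching half of such a script can be sketched in a few lines of Python. This is a hypothetical sketch, not the commenter's actual script: VCP feature 0x60 is the standard DDC/CI "Input Source" control, but the value codes below (0x0f, 0x11) are placeholders that vary per monitor, so check yours with `ddcutil capabilities` first.

```python
import subprocess

# Hypothetical input codes for VCP feature 0x60 ("Input Source").
# These vary per monitor; list the ones yours accepts with
# `ddcutil capabilities`.
DP1, HDMI1 = "0x0f", "0x11"

def current_input():
    """Read the active input; terse output looks like 'VCP 60 SNC x0f'."""
    out = subprocess.run(["ddcutil", "getvcp", "60", "--terse"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()[3]

def toggle_input():
    """Flip the monitor between two inputs over DDC/CI."""
    target = HDMI1 if current_input() == "x0f" else DP1
    subprocess.run(["ddcutil", "setvcp", "60", target], check=True)
```

If you have more than one display, `ddcutil detect` lists them and `--display N` targets a specific one.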

It almost worked. My monitor only listens to the i2c bus of the selected input, so I need to run this script on both my desktop and laptop rather than just on the desktop. But apparently there was a bug in the way my (TB3) dock handled forwarding of the DP i2c data to the docking host. And by bug, I mean write forwarding was just not implemented in the kernel driver. So ddcutil would work on my laptop with a direct HDMI connection, but not when it was connected via the dock.

Anyway, long story short:

5.10 fixes this [2], so I'm particularly excited about this release.

[0] https://www.amazon.com/Cable-Matters-Sharing-Computers-Perip...

[1] https://github.com/rockowitz/ddcutil

[2] https://lkml.org/lkml/2020/9/1/300



Ha! This is more or less how my shell script works, but it looks way less kludgey. Thanks!


This is exactly what I've been wanting, but I didn't do any research into the topic. I love hacker news, thanks so much!


I've been using this on Windows for a few months now, and it works great!

Only issue I have with it is that it doesn't run as a Windows service - so you have to manually switch the monitor the first time so you can log in and start display-switch. I see someone else has opened an issue in the GitHub repo, so hopefully it gets resolved.



Been using this for a while, same situation as above between a MBP and a Windows PC - has been working fairly well!


I use a similar workflow, except I send the i2c stuff straight to the monitor without a docking hub in the middle. I can verify that this works fine on 5.5.11. I use a udev rule to detect a USB 3 hub being plugged in and plugged out, and ddccontrol sends the relevant i2c signals to the monitors.
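A rule like that is only a couple of lines. This is a sketch with placeholder IDs (substitute your hub's vendor/product pair from `lsusb`) and hypothetical script paths:

```
# /etc/udev/rules.d/99-usb-switch.rules
# "2109/2817/*" is a placeholder vendor/product match -- use your hub's IDs.
ACTION=="add",    SUBSYSTEM=="usb", ENV{PRODUCT}=="2109/2817/*", RUN+="/usr/local/bin/monitor-here.sh"
ACTION=="remove", SUBSYSTEM=="usb", ENV{PRODUCT}=="2109/2817/*", RUN+="/usr/local/bin/monitor-away.sh"
```

Matching on `ENV{PRODUCT}` rather than sysfs attributes is deliberate: the kernel includes it in both add and remove uevents, whereas the device's sysfs attributes are already gone by the time a remove event fires.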

I'm happy with the ddccontrol based switching with 2 computers (linux desktop + mbp), but I want to add an additional computer (mbp) to the mix, so my workflow won't work anymore.

Does anyone know of dual-head DP KVM switches that work reliably? Googling shows a few options, but they are old posts and things may have changed. Single head (1 DP monitor + MST) works for the Linux desktop, but it seems Apple does not support MST on MacBooks [1][2] (I can't even find an official page on apple.com with relevant details).

I have tried [3] but it is quite buggy with displays blanking out once every few minutes even with higher quality, DP 1.2 compliant cables (cable length < 1.2m).

[1] https://www.dell.com/community/Monitors/P2720DC-use-MST-func...

[2] https://old.reddit.com/r/MacOS/comments/gliki5/in_2020_why_d...

[3] https://www.amazon.com/gp/product/B089LY4S63/ref=ppx_yo_dt_b... (select the 2 monitor x 4 computers DP model)


> I use a udev rule to detect a USB 3 hub being plugged in and plugged out, and ddccontrol sends the relevant i2c signals to the monitors.

I always wondered if such things are unique features of Linux or available on Windows and Mac as well.


I wouldn’t say it’s unique to Linux but it won’t be (e)udev of course. I use a tiny shell script daemon to listen to console.log events to detect my webcam being turned on to trigger additional Philips Hue based lighting for example.

Windows should have something similar.


You can check the level 1 tech KVM switches that were designed to be more reliable than others:

https://store.level1techs.com/


Thanks. I will check it out. $750 (holiday discount, $850ish without) is pretty steep though, but I guess if nobody else can produce a high quality product they can ask for it.


I too tried KVM switches and saw they have not changed much in 15 years. No support for higher video modes. Buggy.

So I got a USB switch, and that worked well for a while, combined with multiple monitors (Dell) where you could assign inputs to shortcut buttons and switch with 2 presses.

Now I have a curved Dell with a built-in USB switch that you can assign to inputs, and it works really well. It's a bit slow (switching USB takes as long as switching video inputs), but it appears to be rock solid.

The other features, side-by-side and PiP are kind of a joke.


> switching USB takes as long as it does to switch video inputs

High-end KVMs get around this with virtual USB devices. Instead of switching the actual USB devices between computers A and B, they create a separate virtual USB device for each computer that always stays connected and is never switched. When you push the button on the KVM, you are not actually switching between computers; the KVM reads the input from the real USB device and then recreates it on the appropriate virtual USB output device.

These KVMs are pretty costly. There are some complicated DIY solutions involving Arduinos, but a software "KVM" (really just the KM, no V) like https://github.com/debauchee/barrier works just as well in most situations, and if the computers are connected via wired Ethernet, it is rock-solid and adds no perceptible latency.


I have a script to change the brightness of my external monitor using ddcutil. It worked until I installed a usb-c thunderbolt dock. Signal didn't seem to make it through. This must be the same bug.

To work around it I have to use a second hdmi cable directly connected. It defeats the purpose of having the single dock cable that I was hoping for when I bought it. :-/

Good news though. Maybe when 21.04 (Hirsute Hippo :) comes out I can finally get rid of the second cable.


Yeah, this is almost certainly the same bug. Some more info here [1] [2] [3]. In my case, using the second direct HDMI cable like you did was a non-starter because my Thinkpad only supports 4k at 30 Hz via the built-in HDMI port (!), vs. 4k at 60 Hz if you use the Thunderbolt port or a dock / adapter connected to that port. 30 Hz gave me eye strain after less than 5 minutes.

[1] https://gitlab.freedesktop.org/drm/intel/-/issues/37

[2] https://github.com/rockowitz/ddcutil/issues/11

[3] https://github.com/rockowitz/ddcutil/issues/146


Yes, I participated in that ddcutil issue.

You might try anyway. I believe I use the Thunderbolt connection for my monitor; the second direct HDMI connection is used only for sending DDC commands. In other words, my Dell monitor seems to listen on both inputs.


That is one chonky serial cable... :-P

Your monitor must listen to DDC commands from inactive inputs. I think mine doesn't, because this whole problem came about when I realized that my desktop could ask the monitor to switch to the other input, but couldn't ask it to switch back.

Apparently a lot of computer monitors only listen to DDC commands coming from the selected input, while TVs almost always listen to them from all inputs, because they often need to do things like "switch to Roku input when the Roku asks, even if some other set-top box is driving the display."

If my monitor could do this, I could just run the input switching script entirely on my desktop (based on whether the keyboard has just appeared or just disappeared), and I wouldn't have to send DDC commands via the dock at all.


You can try it early with the mainline kernel: copy the config from /boot (the kernel wants it named .config), and then run make deb-pkg to build it. That'll make a .deb you can install directly, and it should work without issue with apt. It won't be identical to Hippo's, since they'll change some of the new options.


Same here, I'm also excited about this because I'll be able to use my script when docked!

If anyone's interested, my script is here: https://github.com/liskin/dotfiles/blob/home/bin/liskin-brig... Lets me interactively adjust the brightness using a very simple gui: https://twitter.com/Liskni_si/status/1320698852970778627


My laptop and desktop are configured the same way. The only difference is that I use a Logitech gaming mouse that supports both wired and wireless connections, so the USB transfer is just my audio source. It's fairly easy to swap the USB cable, but I'm lazy and had dreamed of a device like this. Thanks for sharing.


I've been using this one [0] for quite some time. Works like a charm. Ideal for switching between gaming and work stations. Quite expensive though.

[0] https://kvm-switch.de/de/AP-552PSK.html


Is this for sharing a display with the desktop / laptop? Or just the keyboard/mouse?

For keyboard/mouse I use Barrier ( https://github.com/debauchee/barrier ) which works great for sharing keyboard/mouse between my laptop and desktop.


I'm sharing the display as well.

The USB switch electrically disconnects and reconnects any attached USB devices, moving them between a USB port on my desktop and a USB port on my docking station. I have a keyboard and a Logitech receiver connected to this switch. The advantage of doing it this way is that it works on any operating system without any additional software. Things that might not make it through a HID emulation layer (e.g. scroll sensitivity, firmware updates, RGB LED control(?), etc.) just work the way they would if the peripherals were connected directly (because they are connected directly). You could also add other USB devices to the switch if you wanted, like an audio interface, drawing tablet, thumbdrive, etc. (although you'd want to be careful about unmounting any drives).

The video source is set by the monitor. Doing this via the monitor's OSD menu works, and also doesn't require any software, but is annoying. So the small bit of optional software that I've added is a script that waits for the keyboard to disappear (which happens after I press the button on the USB switch), and then tells the monitor to change input. It saves me four button presses when it works, but I can still perform the switch manually if it doesn't.

The no-software approach allows this setup to work even if a friend shows up with a TB3 laptop without any special software and wants to work at my desk.


Did you think about having both the laptop and desktop connected via the dock and then using the USB switcher to switch it that way? Then there wouldn't be any messing around with monitor source.

Is that even doable? It's the type of setup I want.


Unless I've misunderstood what you're suggesting, I think you would need a (20? 40? Gbps) Thunderbolt 3 switch for this to work, because the dock only has one host port (the dock's topology looks more like a USB hub than a network switch). I suppose there are chips that can do this, but I've never seen a standalone Thunderbolt switch as a product, and if one existed it would probably be expensive.

In any case, my desktop is old and doesn't speak Thunderbolt. I'm not sure whether I can bolt it on with a PCIe card, but even if I could, configuring an nvidia card to reroute output from its own ports to a Thunderbolt add-on card sounds... not fun. And then you'd have to deal with the fact that the longest passive cable that the TB3 spec allows for is like, 2 feet long, and I have a standing desk...

It's certainly an interesting idea, and pulling it off would be an impressive technical feat.


Ahhh ok cool. I wasn't quite sure when I read your post but thanks for explaining. It's really interesting.

I wonder if it will work with my desktop and Raspberry Pi.


Barrier is good, but it breaks if you use more than one input language. I'm still using it for the mouse but have to use two keyboards.


5.10 is the first time I've ever used a release candidate kernel to get some of my code to work. Specifically, non-blocking pidfds.

Traditionally, processes have reaped dead children by waiting on them; the problem is that this is a synchronous operation that blocks a thread per child.

The only historical non-blocking solution was to listen for signals. Signals are great and all, except that the kernel will sometimes (unavoidably) merge multiple signals together, preventing you from getting a list of dead children. This is problematic if you want to know exactly which children died.

Linux 5.3 introduced a third system to listen for dead children, pidfd. It's a file descriptor which you can pass to epoll to know when a specific child dies. This is great, but it didn't yet support O_NONBLOCK, which meant that to use it in asynchronous (Rust) code we needed to create an extra thread that just sat around listening for it.

Linux 5.10 brings the ability to make this fd non-blocking, so we can integrate it seamlessly into async event loops. Here's some code of mine that does that: https://github.com/gmorenz/async-transpiled-xv6-shell/blob/m...

Thanks to Christian Brauner, Josh Triplett, and Oleg Nesterov: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


If you're just looking for detection of child (owned) process death, I don't think you should need pidfds? Have you tried monitoring their stdio pipes (or a UNIX socket or similar that you dupe into it) for closure? To my understanding pidfds were meant to take care of the more general case of awaiting the termination of an arbitrary process that you don't control.


It's a shell; I don't necessarily have any pipes from the child. Moreover, a child playing games with its file descriptors shouldn't fool me into thinking it's dead, and either cause me to block waiting for it to actually die or move on and never reap it.


Well you could create a pipe from the shell if you're spawning it (just create a pipe() when you fork() it), assuming it's inheriting your stdio so you can't already just wait on those. This is what I've done in the past.

That said, if your child process has genuine potential to act adversarially, I don't believe even pidfds would be enough; you'd need to make sure it doesn't break away by spawning, e.g., a child process of its own.

However, I would guess that's not your actual use case? Rather, I'd assume you're trying to protect against accidents. If you're in that boat like I'm assuming, I'd suggest thinking over whether the scenario you proposed is actually plausible. The only real-world instances I've seen of a process playing "games" with file descriptors it did not previously open itself are (a) closing FDs after a fork, and (b) user code in a shell, doing something like exec 2>&3 or whatever. I assume you're worried about the latter, given the former isn't relevant. In that case, you could work around it via dup2() after you fork(), duping into a random high FD that nobody would be likely to mess with (like a random high number plus its own PID). This should solve it for practical purposes.

Not that you shouldn't use pidfds necessarily, but I'm just throwing these out there because I've been in this situation before and these might help you solve your problem on other systems that don't support that.
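A rough sketch of the pipe trick under discussion (fd 941 below is an arbitrary stand-in for the "random high FD"):

```python
import os, select

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    os.close(r)
    os.dup2(w, 941)       # park the write end on a high fd that the
    os.close(w)           # child's own code is unlikely to touch
    os._exit(0)

os.close(w)               # parent keeps only the read end

# EOF on the read end means every copy of the write end is gone,
# i.e. the child (and anything that inherited fd 941) has exited.
select.select([r], [], [])
os.waitpid(pid, 0)        # safe to reap now; the child is already dead
os.close(r)
print("child reaped")
```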


Yes, I suppose random high numbered fd as a pipe would be fine from a practical perspective.


> Once you get over the fact that the code makes your eyes bleed, it has a sort of beauty.

as a c2rust contributor, this made me smile :) thx friend!


I have an async-pidfd crate that might help you, to avoid having to open-code your pidfd handling: https://github.com/joshtriplett/async-pidfd


> [wait] the problem with this is it's a synchronous operation that blocks a thread per child.

Wouldn't the wait syscall wait for any child?


I just do a non-blocking waitpid in a loop (until nothing is returned) after I get the signal via signalfd.

I'm not sure why this wouldn't work for the OP, or why it would miss some process terminations due to merged SIGCHLD signals. I guess using pidfd may be more straightforward, but I don't think this was some unsolved problem previously.
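For reference, that loop is roughly the following (a generic sketch of the standard pattern, not the commenter's code):

```python
import os

def reap_all():
    """After a SIGCHLD (or signalfd report of one), loop waitpid(WNOHANG)
    until it comes up empty: one delivered signal may stand in for several
    exited children, so a single waitpid call is not enough."""
    reaped = []
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break                  # no children at all
        if pid == 0:
            break                  # children remain, but none exited yet
        reaped.append((pid, status))
    return reaped
```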


I mention that in a comment in the source: it wouldn't quite work, because I intend this to work as a library, and reaping children I didn't spawn would be problematic. Also mentioned in the source: I could do a non-blocking waitpid on every child I have, but that strikes me as slow and not clean code.

Of course, if I want to deploy code like this in any serious way, I'll have to implement another solution besides non-blocking pidfds...


Ah, library. New API looks great for this purpose. :)


Can't you keep a list of your children?


Sure... That's the "iterate over my children" solution mentioned in the post you are replying to.


I do the same thing, call waitpid with WNOHANG after getting the signal.


I enjoy perusing the Kernel Newbies post, well organized and occasionally explained in simpler terms: https://kernelnewbies.org/Linux_5.10


What is the state of "kernel devs getting older, and foundation having difficulty finding fresh talent" as of December 2020?

and what is the state of embracing more and more modern methods in the spirit of computer-aided software engineering? (not the BS 1990s hype of CASE, but genuine, meaningful, modern, and research-backed CASE).

(Credit where credit is due: git is one such example, but we need more such tools, shouldn't just stop at modernizing just the version control).


Everyone gets older, the alternative isn't as attractive :)

Seriously, the kernel averages about 200-250 new contributors every release (i.e. every 2 1/2 months). We are not starved for new contributors at the moment at all, do you think we are somehow not attracting new developers compared to other open source projects?


Hi Greg! :)

I was mainly referring to something I read many years ago (regarding new kernel devs), e.g., this from 2013 [0]. However, from your response, looks like that's not a problem.

[0] https://www.zdnet.com/article/graying-linux-developers-look-...


The media has talked about how the kernel development community has been slow to get new blood: https://forums.theregister.com/forum/all/2020/08/25/linux_ke...

I’m not sure that _this_ is the (a?) problem, but if someone were purely sourcing CNCF / Linux foundation / press releases they might think the project is heading for a day when the old guard keels over and we’re left high and dry.



FYI: the last kernel release of a year will normally become a new LTS kernel, see the "Choosing the LTS kernel" section from: http://www.kroah.com/log/blog/2018/02/05/linux-kernel-releas...


From [1]:

Linux 5.10 LTS is likely the kernel to be used by Debian 11, Mageia 8, and others. For the likes of Fedora 34 and Ubuntu 21.04 we are more likely to see Linux 5.11 at play.

So maybe another half a year or so for Debian. (And hopefully soon after that for Synology NAS, if you're interested in Btrfs.)

[1] https://www.phoronix.com/scan.php?page=news_item&px=Linux-5....


>[...] and others.

It's never easy to know ahead of time, but my hope is that Linux 5.10 will give us Slackware 15.


The one thing I'm hoping for in every Linux kernel release is better support for Bluetooth and associated drivers. Besides some GPU issues, Bluetooth is the big thing preventing me from having a good consumer desktop experience on Linux.


The first thing I've done for the last few releases was immediately search the release log for "bluetooth".

I agree with you. Bluetooth works, but just barely, and it feels like it's getting worse.

Somewhere between 5.8 and 5.9 reconnecting became incredibly flaky for me (XPS 9300). Headphones will pair and connect fine the first time, but then completely fail to reconnect properly if I turn them off or walk out of range.

Sometimes I can connect to them again, but they don't get registered in pulseaudio, even after killing and restarting it.

Sometimes they will connect, then immediately disconnect again 5 seconds later (rinse and repeat until removed and repaired)

Sometimes they fail to connect entirely.

I end up consistently removing the device, restarting the bluetooth service, and then repairing fresh. In which case they work until the next disconnect.

---

It's hardly the end of the world, and it's a little amazing that my biggest complaint about linux desktop these days is flaky bluetooth, but it's certainly annoying.


For what it's worth, I have the exact same issues on Windows 10. I've tried installing different drivers for the motherboard; nothing really helps.

It used to work fine, and then at some point I stopped being able to disconnect and reconnect the same device without power cycling the computer.


I had a similar problem -- the "pairs and connects fine but then fails later" sounds very much like https://bugs.archlinux.org/task/68346, which is now fixed!


The bluetooth support certainly could be improved in many respects, although I have to say that it does seem like things have been getting considerably better as compared to ~4 years ago when I first started using bluetooth on Linux.

I've also wondered if a lot of the bluetooth problems on Linux are actually caused by the desktop manager.

Most of the problems I've personally experienced have seemingly been tied to using old versions of Gnome on Ubuntu. In fact, Gnome in general has always been rough for me on bluetooth; the bluetooth connection app still needs some love, but I've noticed it being more reliable with recent versions. I particularly noticed this when I was running a newer version of Gnome on Fedora 31, but then running an older version of Gnome on Ubuntu 18.04 (for work). The older version of Gnome's bluetooth was considerably worse.

For about the past year, however, I've been using Fedora / KDE on desktop and OpenSUSE / KDE on my laptop. This has been by far the best bluetooth experience I've had on Linux. Thank you to whoever wrote the KDE bluetooth code, because it is consistent and reliable.

So I don't think it is totally a kernel problem. YMMV depending on what cards/adapters you're using (I'm using a mix of Intel and some USB sticks from a fly-by-night vendor).


Yes, another thanks to the KDE Bluetooth applet authors, works great.


What doesn't work with Bluetooth today on a distro like Debian 10?

I have Bluetooth 4.1 built into my WiFi card; over the last year the only issue I've had was having to manually switch to HSP mode on my headphones to use the mic. Switching from A2DP to HSP was easy: just click Audio in the settings menu in the upper-right corner in GNOME.

Besides that, Bluetooth mice, headphones & keyboards just work.


On my Dell XPS 13 laptop from 2018 (9370), Bluetooth used to almost work, but I had to manually pair my headset every time. Now, with the latest Ubuntu, it often connects automatically, but then loses the connection immediately, or fails to see that the device can play audio. After some fiddling and turning the device on and off, it will work if I start playing audio right away. It will then work for a long time. The amount of fiddling and rebooting of headphones seems to grow with every new Ubuntu release. It may also be a userspace error, but in any case, Bluetooth is far from "just working", unfortunately.


Are you using an Intel wireless chipset or the terrible Broadcom one that Dell used to use? Not sure what Dell was using in 2018, but I replaced the garbage Broadcom chip in my 2015-era XPS 13 with a much more reliable Intel model, and most of my bluetooth problems were immediately fixed.

That said, the replacement process is pretty fiddly...Dell likes using lots of very small screws made of very soft metal.

I've also found that Gnome's bluetooth handling varies from barely acceptable to confusingly horrible. KDE's bluetooth handling has been way more reliable for me across multiple machines and distros. So the problem may very well be in userspace.


I have the XPS13 9380 (also 2018), and it came with an Atheros WiFi chipset (Killer, I believe). Looks like the Bluetooth is on separate hardware here; lsusb reports it as a "Foxconn / Hon Hai" chipset.

I don't really use BT for anything, so I can't comment on its quality. WiFi has been completely fine, though. Either way, I believe both the WiFi and BT hardware are soldered in on this model.


If you want that to happen automatically, "you can append auto_switch=2 to load-module module-bluetooth-policy in /etc/pulse/default.pa"

(from https://wiki.archlinux.org/index.php/Bluetooth_headset#Switc...)
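That is, assuming a stock PulseAudio configuration, the relevant stanza in /etc/pulse/default.pa ends up as:

```
.ifexists module-bluetooth-policy.so
load-module module-bluetooth-policy auto_switch=2
.endif
```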


Many bluetooth headphones support microphones with A2DP, though. And if yours do, you likely want to use that rather than HSP, because HSP has much worse audio quality.


Do you have some reference I can follow about that? Because I thought that A2DP not supporting microphone was a Bluetooth protocol limitation.


https://en.m.wikipedia.org/wiki/List_of_Bluetooth_profiles#A...

That discusses audio flowing in both directions.


From the article:

> Each A2DP service, of possibly many, is designed to uni-directionally transfer an audio stream in up to 2 channel stereo, either to or from the Bluetooth host.

This matches what I have heard: Bluetooth devices with the A2DP profile can either receive or send high-quality audio, but not both at the same time.

I don't know if there are Bluetooth headsets that have separate A2DP devices for input and output sound, I have never heard about them, but I would be very interested in knowing of their existence.

Also from the article:

> These systems often also implement Headset (HSP) or Hands-Free (HFP) profiles for telephone calls, which may be used separately.

This is the common case for Bluetooth headsets, the Bluetooth manager switches profile when the microphone is needed, but that greatly reduces audio quality.


What this setting does is that when an application requests the microphone, it automatically switches to HSP, and once the source is destroyed, it switches back to A2DP.


I'm aware, but you still don't want to switch to HSP and downgrade your audio output if your microphone doesn't require it.


Hmmm, maybe I don't get it.

When I am using the headphones, and IF an application requests microphone, THEN I do want this switch to happen automatically. After the Microphone is not needed anymore, I want it to switch back to the HQ A2DP.

Am I missing something here?


HSP makes the headphone quality much worse, and mono. Even while using the mic, that's not ideal. You might not notice if you're just having a conversation, but if you're sharing a video or watching/playing a game, you want the audio output quality.


I understand the quality issue, of course I also notice it.

But how else would my mic work? If it stays on A2DP, then there is no Mic input, so I used to have to manually switch it to HSP. With the solution I linked above, this is not manual.


Your headphone would have to support A2DP input. Some headphones do.


Oh alright, that would make a difference then!

So... I'm not sure it does support it, TBH. I have a Bose QuietComfort 35 (II); on Linux I definitely get the low-quality HSP mic (which, quality aside, works well), and for listening only, A2DP works too.

On Android, though, calls are _MUCH_ higher quality with the mic on (still somewhat degraded compared to output-only, but far less so than under my desktop Linux).


I'm on latest Fedora and sometime this year headphone volume synchronization stopped working for headphones that never had this issue for me since release (Sony WH-1000XM2). Volume synchronization continues to work as expected on Android.


I have always had trouble connecting bluetooth headphones into my Linux Mint installation.


I had a bad experience on the linux-bluetooth mailing list when I tried to add battery reporting to BlueZ.

Saying that it's hard to get through is an understatement.

I could let this pass if the maintainer were really an overworked volunteer squeezing Linux work in around a day job, but not when the guy works for the biggest electronics company in the world.

I believe the Linux core devs need to put stricter criteria on subsystem maintainers who are working on a company's payroll.



Also their top 10 most interesting features overview: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5....

Nintendo switch controller support sounds interesting!


Due to the telemetry I haven't bothered playing any official switch games because I'm worried about getting banned.

At this point it makes more sense to just put linux on the switch and use it as a tablet and play the games on my desktop since that's already hooked up to the living room display anyway.


You're not going to be banned for connecting your joy cons to your Linux PC. They only ban people for modding their Switch to play pirated games.


Or for running homebrew, which is what I did. There's no openttd on the switch store.


You can use a DNS server that will not resolve any Nintendo domains:

https://nh-server.github.io/switch-guide/extras/blocking_upd...


> Nintendo switch controller support sounds interesting!

I have been successfully using the DKMS-based driver with my Pro Controller for some time already. But it's always nice to get rid of a moving part in the stack.


Does anyone know whether Nvidia CUDA works with this latest kernel? I know for general use cases the Nvidia driver appeared to work again somewhere mid-November, but there still appeared to be issues with the nvidia-uvm kernel module.

I’ve seen hints that this has been fixed in 5.10, but wasn’t able to find any official source. Anyone that knows more about this?


Supposedly this was 'fixed' in 5.9 with the experimental nvidia drivers (455.45.01).


OK guess I'll have to wait until those land in Debian bullseye and upgrade to that once it's there.


Does that kernel make wifi work on the XPS 13 9310?


I've been working actively on making it work (see ath11k mailing list). We're pretty close now as the connection is stable, but the changes aren't going to make it into 5.10 I'd guess ;). There was a new test branch released overnight that should work for you.


Depends on which wireless card Dell shipped in your 9310. The Intel card should already be supported. The Killer WiFi AX500 2x2 does not have all the supporting patches merged in, as of this morning I'm carrying 27 patches to make it work.


It depends: are you looking for WiFi 6? If so, it's supposed to. If you're still on Dell's older Intel chip and it keeps crashing every 15 minutes, forcing you to restart your network, I've found that pinning linux-firmware to the version from August has given me the best results. Anything else maxes out around 20-40 Mbps and crashes constantly.


I've been having the issue you are describing on my XPS 9300 with Killer AX 1650 ever since my BIOS was updated in November and I just upgraded my BIOS on Dec 9th to 1.4.1 without realizing I can't downgrade. Do you have any suggestions on how to fix this problem? Thanks!


Are you certain you cannot downgrade? gnome-firmware seems to allow it. I have to admit, I'm just on 1.4.0 though, and haven't tried to upgrade yet. Does it somehow block downgrading?


Bummer that futex2 didn't make it in yet.


Has there been another version of futex2 published to the mailing list recently? I’m super excited about it.


This being an LTS release made me wonder: why do distros with long support cycle like RHEL not come with LTS kernel?


I don’t know the reasons of RHEL, but I do know that the current Debian stable uses kernel 4.19, which is an LTS kernel.


This gets us closer to 5.11 which is gonna be a major deal for gaming.


As a Linux gamer, the most exciting part of 5.10 for me is open-source AMDGPU Vulkan support for the old AMD HD 7000 series cards abandoned by AMD (i.e., the HD 7950). Because Valve went back on their promise of OpenGL support for Vive VR and only supported Vulkan, my Vive HMD has been a paperweight on Linux until now.


It's a pity that amdgpu DC still doesn't have support for analog outputs.

I have an old VGA monitor hooked up as an additional display just because I can, but I have to set `amdgpu.dc=0` to make it work.


What's going to be in 5.11?


Apparently some features that will allow WINE to capture a lot of the Windows syscalls used for a lot of DRM and anti-cheat, meaning games with those might work through WINE/Proton now:

https://www.gamingonlinux.com/2020/10/collabora-expect-their...


Ah, they're emulating the Windows syscall interface. That's not so bad; I was worried they were making it easier to install rootkits, which seemed pretty weird.

Eventually Linux and NT will converge until the language runtimes will run on both and apps will run on both. Which works better will be largely a question of linker flags.


It's not emulating syscalls; it's adding the ability to trap the syscall instruction and provide a handler in user space.


As if NT and Linux will become implementation details. Odd.


Hey, it seems like this was clarified to not be work towards anti-cheat after all, and seems to have been a misunderstanding.

https://www.reddit.com/r/linux_gaming/comments/jtz08q/collab...


I would assume syscall user dispatch. "Syscall User Dispatch allows for efficiently redirecting system calls and to be done so for just a portion of the binary. The system calls can be redirected back to user-space so they can be handled by the likes of Wine."

https://www.phoronix.com/scan.php?page=news_item&px=Syscall-...


Can't wait, and can't say enough good things and thanks to the folks behind Proton!


Not just Valve and others; much of the heavy lifting on Wine has been done by CodeWeavers. If you're in a position to, one of the best ways to support them is to get your business to pay for their products (CrossOver, consulting, etc.) if you're using Wine in a business setting.


Maybe by 2030 we’ll have decent trackpad and HiDPI support


The whole usermode side of input handling depends on a project (libinput) that's been developed by a single guy over the last decade. He asked for help numerous times, reminding everyone that libinput has a bus factor of one, and received none. So either go ahead and help, or stop complaining.

https://www.youtube.com/watch?v=HllUoT_WE7Y


I'm pretty sure HiDPI is solved from the kernel's perspective; pixels get to the display, it's just that the (userspace) toolkits have lingering issues deciding what pixels to draw.


Been using both without issue for five years, trackpad for longer.


HiDPI seems to work without issue for me now when using Wayland and Gnome.

Trackpad support on my XPS 13 is acceptable now, but not up to macOS standards IMHO. Crucially, though, I notice it improving over time.

Both of these are userland rather than kernel concerns though aren’t they?


I've never had any issues with trackpads.


Could the title be more descriptive than “Linux 5.10”?


What exactly are you looking for?


The email text was just 7 paragraphs of Linus explaining how he doesn't like late pull requests, so I think the title matched it perfectly.


"Linux 5.10 released" maybe


Not trying to be rude here, but how many people actually care about kernel releases?

I've used a bunch of distributions, even rolled my own when I was a teenager. Kernel version back then was around 2.6, and a new release didn't change anything for me as a mostly desktop user.

Is it different for many professionals? Are there people waiting for a new release, are there big problems to be solved or "Linux is ready" and it's mostly maintenance nowadays?


I enjoy reading about new kernel features, and 5.10 is somewhat more newsworthy than others as it's a Long Term Support (LTS) release.


One of the great things about using linux is the kernel really does matter to both users and developers, and every release tends to bring relevant goodies and fixes.

Due to the unparalleled transparency of not just a GPL source but a completely open development model, there's a whole lot more than just a version number to appreciate and investigate if so inclined.

Also having the bulk of drivers in-tree, kernel releases affect a lot more than just core stuff like the scheduler, memory management, and interrupt controllers.


New devices need new drivers or fixes for existing drivers. An anecdotal example: on my new T14, lid close wasn't detected correctly, and it was fixed in a new kernel release, so I was waiting on one. I'd figure that's the most common reason. Linux is ready, but exciting new features are still being actively developed.


I am looking forward to it because it is an LTS release that has all the latest io_uring enhancements.

io_uring looks like it will be a big deal. It will probably be a while before it really hits mainstream, but I was waiting for 5.10 before trying it out.


In this case I am excited because this is the first LTS with some new drivers I've been waiting for (the Adafruit MCP2221A USB GPIO board; the new driver exposes its GPIO pins over USB).


An eye on the Linux kernel is a good way to keep up with what's generally going on in computing. If you haven't built a kernel in a while, go do it and marvel at the new options. When you see something you are interested in, read up and try it, and you'll be ahead of the proverbial curve.


It matters to kernel API users (see io_uring), gamers and heavy virtualization users to name a few.


I'm waiting for hardware support for my particular laptop model, added in this release.


In 2019 I was compiling my own kernel to include a fix for my Logitech racing wheel until the release was out. That also made me find out that it's impossible to use the Nvidia proprietary drivers on a prerelease kernel, so I had to buy an AMD GPU as well.


> Its impossible to use the nvidia proprietary drivers on a prerelease kernel

I managed to on 5.10-rc6? Had to upgrade to the "shortlived" version of the driver, but that went smoothly.


I think it depends on how different the kernel is. At the time I did it, nvidia had no published drivers that were compatible with the linux RC


That seems likely and would make sense.


Same here. It's really exciting to see something you're waiting for go into the kernel. Are you waiting for the Lenovo Legion patches, by chance?


I do, but then I like to port them to bespoke ARM systems built using complex FPGAs.


I do care. uring mentioned by others, btrfs changes, bcachefs foundation work, anything tracing related, etc are interesting to me.

The whole tracing story in Linux has changed in the recent years. It's really useful in practice.


I do; that's how I learn about how some new hardware works, or how a problem I had to work around was solved.


Changes from one release to the next probably won't matter very much to most individual users, unless something specific you're using is getting support added, but you'll usually find at least one or two major features between LTS releases, including stuff that impacts ecosystems beyond just GNU systems.

WireGuard and exFAT are a couple of notable ones that have been mainlined since 5.4, with the latter impacting basically anything that uses SD cards (as exFAT is a required part of the SDXC spec).


Depends on your hardware. For new AMD cards, releases often bring critical bug fixes. The same applies in a lot of other cases.


The last "big deal" for me was when Wireguard got merged. Oh baby, that was a good one, I think I was excited about that release for like a week - cause I use the shit out of WG - WG all the things. And that was this year (or last?). So, I'm just a +1 to someone who has cared about a recent release.


The current top post on this thread details a fix for an issue I've been facing for years. So yes, interested.


Even if you're running a stable distribution, kernels get backported and so do fixes. Is every kernel release interesting to everyone? No, not any more. Is it worth looking at to see what is happening? I think so.


I care about new APIs, especially io_uring: https://boats.gitlab.io/blog/post/iou/


User extended attributes (xattrs) for NFS (landed in kernel 5.9) were a feature some people, including me, were waiting years for.


Well, I care because this is the first kernel release that supports my laptop's WiFi chip (the Killer AX500-DBS, which is a QCA6390-based chip).



