Linux 3.17 (kernelnewbies.org)
102 points by diegocg on Oct 13, 2014 | 33 comments



Thunderbolt hotplug is supposed to be handled by the firmware. But Apple decided to implement Thunderbolt at the operating system level. The firmware only initializes Thunderbolt devices that are present at boot time. This driver enables hotplug of non-chained Thunderbolt devices on Apple systems with a Cactus Ridge controller. This first patch adds the Kconfig file as well as the parts of the driver which talk directly to the hardware (that is, PCI device setup, interrupt handling and RX/TX ring management).

Finally. Been waiting a while for this. The fact that Apple designed the hotplugging at the OS-level infuriated me for the longest time.


It seems like a legitimate tradeoff; implementing hotplugging at the OS-level gives you DR capability and far more control over the process. I'm not saying they made the right decision, just saying that it appears to be a legitimate one. Obviously the disadvantage is that hot-plugging doesn't work in other operating systems when run on Apple hardware, but obviously that's not Apple's focus or interest.

Intel was willing to certify Apple's devices, so clearly the specification doesn't actually require hot-plugging to be implemented in firmware. So I wonder what the author bases that assertion on.


> Obviously the disadvantage is that hot-plugging doesn't work in other operating systems when run on Apple hardware, but obviously that's not Apple's focus or interest.

Ah dang! I have an official Thunderbolt to Ethernet adapter on my MacBook Pro that won't be detected in Windows 8.1 running in Boot Camp* unless I reboot. And now I know why.

* The next Visual Studio looks NICE in high dpi.


Maybe Intel just fudged the certification requirements for Apple?


> The fact that Apple designed the hotplugging at the OS-level infuriated me for the longest time.

On the flip side, if there's an issue with the protocol, it can be easily fixed with an OS update. Consider that in contrast with the recent USB security hoopla.


You do realize that Thunderbolt has direct memory access right? It doesn't even need drivers to do malicious actions. It can dump and modify memory at will.

In fact, all the USB security hoopla also affects Thunderbolt devices, which can also be reflashed without physical modification.

Thunderbolt is even less secure than firewire, which at least required some support from the OS to dump memory.

USB is an iron fortress compared to those kinds of attacks.


Linux systems usually get their random numbers from /dev/[u]random. This interface, however, is vulnerable to file descriptor exhaustion attacks, where the attacker consumes all available file descriptors, and is inconvenient for containers. The getrandom(2) syscall, analogous to OpenBSD's getentropy(2), solves those problems.

This is the big one for me. No more opening /dev/urandom directly on Linux, and it works in a chroot. When will Linux distros begin shipping this kernel?


You're not going to be able to adequately rely on this feature by itself for years, probably. I'd guess a 5-6 year window before it's widespread.

Considering there isn't even an LTS kernel/system with it coming out anytime soon, I'd suggest you keep your fallback code that checks `/dev/urandom`, even if FD exhaustion/chroots are a problem.
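Something like this is the pattern I mean - a minimal sketch, not drop-in code: call the raw syscall (glibc has no wrapper for it yet), and fall back to /dev/urandom when the kernel is older than 3.17. The fill_random name and the simplified error handling are mine for illustration; a real implementation would also loop on short reads and EINTR.

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Fill buf with len random bytes: getrandom(2) first, /dev/urandom as fallback. */
    static int fill_random(void *buf, size_t len)
    {
    #ifdef SYS_getrandom
        long n = syscall(SYS_getrandom, buf, len, 0);
        if (n == (long)len)
            return 0;
        if (n >= 0 || errno != ENOSYS)   /* short read or a real error: give up */
            return -1;
        /* ENOSYS: pre-3.17 kernel, fall through to the device node */
    #endif
        int fd = open("/dev/urandom", O_RDONLY | O_CLOEXEC);
        if (fd < 0)
            return -1;                   /* e.g. FD exhaustion, or an empty chroot */
        ssize_t r = read(fd, buf, len);
        close(fd);
        return r == (ssize_t)len ? 0 : -1;
    }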


I had thought Ubuntu 14.04 backports would be running this kernel soon...

However, this isn't really for you. This is for your library; this is what arc4random() is meant to use. (Of course, the name is legacy; you should be using ChaCha20 in it.)


Ubuntu is the least of my worries; Ubuntu backports is an even smaller blip on the radar. What about all the software running on CentOS and RHEL 6.x systems? Those aren't going away soon.

And yes, I understand the implications of what this exact system call is meant to do; explanation much appreciated nonetheless (clearly it could have misled some people).

My only point is this system call is a step forward, and mitigates some known problematic attacks or edge cases - but nonetheless, we'll need adequate fallbacks for unsupported systems for many, many years to come in our libraries.

Thinking of it this way, I'm somewhat amazed Linux has gone this far /without/ this feature. If I had to hazard a guess, I'd say it was either A) never suggested, which I find unlikely, or B) your typical NIH/"this is nonsense and not needed" response you sometimes see to sane things in the Linux world (despite them implementing a thousand new things every two months, that only 5 people in the world could ever use, and inevitably will result in a handful of CVEs - their priorities are Serious Business, after all.) But maybe I'm just a cynic at this point.


Well hopefully Linux arc4random will abort() if reading /dev/urandom fails...


USB/IP is a project that provides a general USB device sharing system over an IP network. To share USB devices between computers with their full functionality, USB/IP encapsulates "USB I/O messages" into TCP/IP payloads and transmits them between computers. Original USB device drivers and applications can also be used for remote USB devices without any modification. A computer can use remote USB devices as if they were directly attached.

Considering the recent BadUSB exploits that have come to light, is this really something we want? It just seems like the risk could outweigh the benefit.


It would normally be implemented as kernel modules (usbip.ko, usbip-host.ko) and those would not be autoloaded on device recognition, because who knows whether a given device wants them? So they would be loaded when a userspace tool calls for them.

The protocol uses port 3240 by default, and so you can disable it with a firewall at either end or in between. Though I think this really calls for an encrypted path and some sort of identification and authentication.


Since this needs to be enabled in the kernel at compile time, it really depends on what your favorite distribution chooses to include.

If it does end up in the Big Two (Debian-flavored, RH-flavored), it really doesn't make sense for servers. If it were enabled by default, this seems like something that would need some sort of 'handshake' between the two ends to prevent someone from just mounting a USB device willy-nilly as they see fit. Complete speculation on my part though.


It absolutely does make sense for servers. One frequent pain point with virtualized systems is that (at least in a VMware vCenter DRS-enabled cluster) you never know which physical host a specific VM will be on at any moment. This is a problem for software that requires a USB license dongle (many FlexLM-managed software packages have this requirement).

Anyway, in this situation, you would have one physical, bare-metal Linux "USB dongle server", which then shares out its USB devices to one or more other Linux VMs. After doing this, VMs can migrate between physical hosts without losing access to their license dongle.

There are purpose-built physical USB-to-IP devices out there now, but they're quite expensive. This new functionality would allow admins to replicate that for a fraction of the cost of a purpose-built device.


Ah, that's a use case I didn't even think of. Makes sense in that situation then. I rarely deal with things that require USB license dongles, but I can see that as being a pain this may alleviate.


Why were you downvoted?


> Graphic "render nodes" feature enabled by default

This had me excited, until I read the linked description and found that

> It’s also important to know that render-nodes are not bound to a specific card. While internally it’s created by the same driver as the legacy node, user-space should never assume any connection between a render-node and a legacy/mode-setting node. Instead, if user-space requires hardware-acceleration, it should open any node and use it

So my program opens a render node and it might run on the discrete GPU or it might run on some shitty integrated GPU. That's kind of useless.
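For what it's worth, you can still pick, you just have to do it yourself. A rough sketch with libdrm of what "open any node" looks like in practice: walk the renderD* minors (the 128+ range is the DRM render-node convention) and check which driver backs each one. The helper name and the "prefer this driver" policy are mine, not part of any official API.

    /* Sketch: open the first render node backed by the driver we want
     * (e.g. "i915", "radeon", "nouveau").  Build with -ldrm. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int open_render_node(const char *wanted_driver)
    {
        char path[64];
        for (int minor = 128; minor < 192; minor++) {  /* render minors: 128..191 */
            snprintf(path, sizeof(path), "/dev/dri/renderD%d", minor);
            int fd = open(path, O_RDWR | O_CLOEXEC);
            if (fd < 0)
                continue;
            drmVersionPtr v = drmGetVersion(fd);
            if (v) {
                int match = strcmp(v->name, wanted_driver) == 0;
                drmFreeVersion(v);
                if (match)
                    return fd;                         /* caller owns the fd */
            }
            close(fd);
        }
        return -1;                                     /* no node with that driver */
    }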


> This release adds support for Xbox One controllers.

Why is support for such a thing needed in the kernel?

Also, why are there some things that my CPU can do (virtualization) that require kernel modules?

Isn't virtualization more important than an xbox controller?


One upside of kernel modules: you can build a very small kernel without having a ton of features built in. If you actually need Xbox One controller support, you can enable the module. Really useful on small-footprint devices.


The only downside is that the kernel doesn't have a stable API, so the open source code of out-of-tree kernel modules gradually becomes less useful if not actively maintained, as it won't compile against newer kernel headers anymore, requiring more and more work to fix.


The Xbox One driver is also in a kernel module: xpad.ko.


Got it. So some kernel modules are "official" and part of the release notes?


90% of all modules in existence are part of the official kernel release. So, as they ship in the same bundle, they also share the same release notes.


Some kernel modules may not be distributed as part of the kernel for license compatibility reasons, or just by preference of the authors. In some cases they may only be distributed in binary form, with an open-source "wrapper" that is compiled to match your local kernel, like nvidia video drivers.


Also worth mentioning is that large parts of the Linux kernel can be either built in or built as modules - I think most distros prefer modules where possible.

Run the shell command 'lsmod' to see all the modules you currently have loaded, and check out the .ko files under '/lib/modules' to see the modules provided.


> BTRFS - Adjust statfs() space utilization calculations according to RAID profiles

Does this mean df will finally show the correct values for btrfs raid volumes? Does anyone know if the statfs() syscall is what df uses? There is no call to statfs() in the df source I found here [1].

[1] https://searchcode.com/codesearch/view/17947198/


> Does anyone know if the statfs() syscall is what df uses? There is no call to statfs() in the df source I found here [1].

Yes, it looks like df uses that:

    # strace -e statfs df /tmp
    statfs("/tmp", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=4128490, f_bfree=4084457, f_bavail=3874742, f_files=1048576, f_ffree=1048564, f_fsid={1972973117, 2058115910}, f_namelen=255, f_frsize=4096}) = 0
    ...
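(The reason you don't see a literal statfs() call in df.c: coreutils df goes through gnulib's fsusage code rather than calling statfs() by name, and on Linux that bottoms out in the statfs syscall - which is what strace is showing.) If you want to poke at the same fields yourself, here's a minimal sketch in C - not exactly what df computes, which also scales by f_frsize and rounds, but the same data:

    /* Sketch: read the numbers df reports straight from statfs(2). */
    #include <stdio.h>
    #include <sys/vfs.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/tmp";
        struct statfs s;

        if (statfs(path, &s) != 0) {
            perror("statfs");
            return 1;
        }
        unsigned long long bs = s.f_bsize;
        printf("size:  %llu KiB\n", (unsigned long long)s.f_blocks * bs / 1024);
        printf("used:  %llu KiB\n", (unsigned long long)(s.f_blocks - s.f_bfree) * bs / 1024);
        printf("avail: %llu KiB\n", (unsigned long long)s.f_bavail * bs / 1024);
        return 0;
    }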


https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

===

This has been discussed in thread: http://thread.gmane.org/gmane.comp.file-systems.btrfs/32528

and this patch implements this proposal: http://thread.gmane.org/gmane.comp.file-systems.btrfs/32536

Works fine for "clean" raid profiles where the raid factor correction does the right job. Otherwise it's pessimistic and may show low space although there's still some left.

The df numbers are slightly wrong in case of mixed block groups, but this is not a major use case and can be addressed later.

The RAID56 numbers are wrong almost the same way as before and will be addressed separately.

===

I'm most interested in the deadlock fix which finally made it in. https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....


Yes, the deadlock fix really is appreciated. I had to turn off compression on all my machines :-(.


Does anyone know if DisplayLink compatibility has been re-introduced? It seemed like it was in there for a while and then taken out... Or my foo in getting DisplayLink devices to work out of the box is weak.


2.0 should maybe work, 3 probably never :(

http://displaylink.org/forum/showthread.php?t=1748


So...another version of linux..great



