The libkern C++ Runtime (developer.apple.com)
80 points by ingve on July 11, 2018 | 47 comments



Compare with the NeXT "Driver Kit" library:

http://www.nextop.de/NeXTstep_3.3_Developer_Documentation/Op...

Very similar layout, but it used Objective-C instead of C++.


    #define super    IOService
    #define fBlastIForgot [..]
    OSMetaClassDeclareReservedUnused
    OSMetaClassDeclareReservedUsed
This is so much nicer than just function pointers, base members and container_of!
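For contrast, here's a hedged sketch of the hand-rolled C-style alternative alluded to above: manual function-pointer "vtables" plus container_of-style downcasting. The names (base_service, my_device) are made up purely for illustration.

    #include <cstddef>

    struct base_ops;

    struct base_service {
        const struct base_ops *ops;     // hand-rolled "vtable"
    };

    struct base_ops {
        bool (*start)(struct base_service *svc);
        void (*stop)(struct base_service *svc);
    };

    struct my_device {
        struct base_service base;       // embedded base member
        int bar0;
    };

    // Recover the derived object from a pointer to its embedded base member.
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    bool my_device_start(struct base_service *svc) {
        struct my_device *dev = container_of(svc, struct my_device, base);
        return dev->bar0 != 0;
    }

    int main() {
        struct my_device dev = { { nullptr }, 1 };
        return my_device_start(&dev.base) ? 0 : 1;
    }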


I wonder why they deemed templates to be "insufficient or not efficient enough for a high-performance, multithreaded kernel".


I suspect (though I've never written anything substantial for it) that at least part of it relates to the age of a lot of xnu.

Basically, think about the state of C++ and C++ compilers two decades ago. I wouldn't be surprised if that factored into it.

As others have said, code size can also be an issue: it's super easy to make templates produce insane amounts of code, especially with older compilers.


Exactly. OS X 10.0 shipped with GCC 2.95; it wasn't uncommon at all to avoid templates and exceptions back then.


Probably code size. Any two template instantiations, even if they produce identical machine code or in-memory representations, will generally be emitted separately due to language requirements: functions, static data members, static locals, RTTI, vtables, etc. each need their own distinct addresses and linkage.

Eliminating code bloat with templates requires careful judgment about where to put type erasure and how to factor your code.
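A hedged illustration of the duplication being described: the two instantiations below are behaviorally identical, yet each one gets its own copy of every instantiated member function, with its own symbols and addresses.

    #include <vector>

    struct Meters  { int value; };
    struct Seconds { int value; };

    int main() {
        std::vector<Meters>  m;   // instantiates vector<Meters>::push_back, etc.
        std::vector<Seconds> s;   // an identical but entirely separate set of symbols
        m.push_back({1});
        s.push_back({2});
        return static_cast<int>(m.size() + s.size());
    }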


C++11 added extern templates, which can easily solve the bloat problem. If a template is consistently instantiated with the same type (say, std::vector<int>), you can just extern it and, tada, every translation unit shares the same generated implementation ( https://isocpp.org/wiki/faq/cpp11-language-templates#extern-... )

So even if the linker fails to de-dupe, you're still able to manually fix it pretty easily without giving up on templates entirely.
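A minimal sketch of that mechanism, following the std::vector<int> example above and split across a hypothetical header and one implementation file:

    // widget.h (hypothetical): every includer sees the declaration and
    // skips emitting its own copy of the vector<int> member functions.
    #include <vector>
    extern template class std::vector<int>;

    // widget.cpp (one translation unit): the single explicit instantiation
    // that all the other translation units link against.
    template class std::vector<int>;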

But libkern predates C++11, so decisions made by that team at that time are largely obsolete and should be heavily re-evaluated rather than blindly followed.


A lot of the xnu C++ dates back almost two decades (maybe more?), and ABI compatibility kind of screws you when it comes to changing decisions like this.

You can’t really just say “update to a newer version of the language” when you have both API and ABI compatibility constraints.

For source compatibility you can deprecate APIs, etc., so developers get early warning that source changes will be necessary in future versions.

But that doesn't help already-shipping kexts; for those you need ABI stability, which really puts the hammer on changing or updating the language features you use. Many C++ features cause exciting binary compatibility problems and make it super easy to accidentally change the ABI :-/
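A hedged sketch of one such hazard, and of the reserved-slot idea behind the OSMetaClassDeclareReservedUnused macros quoted earlier in the thread (class and method names here are illustrative, not IOKit APIs):

    class DriverBase {
    public:
        virtual ~DriverBase() {}
        virtual bool start() { return true; }   // vtable slot N
        virtual void stop()  {}                 // vtable slot N+1
        // virtual bool suspend();              // inserting this later renumbers
                                                // stop() for every already-built
                                                // subclass: a silent ABI break
    private:
        virtual void _reserved0() {}            // padding slots reserved up front,
        virtual void _reserved1() {}            // the same idea as the libkern
                                                // "ReservedUnused" macros
    };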


My point was just that anybody using C++ now shouldn't treat language-feature advice from 20+ years ago as anything particularly relevant. It's outdated and obsolete and should be treated as such. Sure, if you're working on an obsolete codebase in bare-bones maintenance mode then you're kinda stuck, but most of us aren't.

That aside: yes, you totally can just update to a newer version of the language. If you are trying to maintain C++ ABI stability then your life is harder, yes, but it's no harder than it already is when you upgrade compilers or deal with people building with other compilers (and most everyone ships C ABIs anyway to avoid this entire category of problems; extern "C" still works great in C++17). But you are still completely free to use newer features in the implementation itself, which doesn't impact API or ABI stability in the slightest.
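A hedged sketch of that pattern: a C ABI surface fronting an implementation that is free to use newer C++ internally. The mylib_* names are made up for illustration.

    // mylib.h: the public, ABI-stable surface
    #ifdef __cplusplus
    extern "C" {
    #endif
    typedef struct mylib_engine mylib_engine;   // opaque handle
    mylib_engine *mylib_create(void);
    int           mylib_run(mylib_engine *e);
    void          mylib_destroy(mylib_engine *e);
    #ifdef __cplusplus
    }
    #endif

    // mylib.cpp: the implementation can use C++17 freely; none of its
    // layout or name mangling ever crosses the ABI boundary.
    #include <optional>

    struct mylib_engine {
        std::optional<int> cached;
    };

    extern "C" mylib_engine *mylib_create(void)    { return new mylib_engine{}; }
    extern "C" int  mylib_run(mylib_engine *e)     { return e->cached.value_or(42); }
    extern "C" void mylib_destroy(mylib_engine *e) { delete e; }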


Hand-coding multiple versions of a function or class to accommodate various types would be just as bad in terms of code size, no? Plus it'd be tedious and error-prone to do this.


In practice, most of the time you would just write a generic version using one of the other mechanisms: virtual functions, void* and casting, type IDs, or whatever.

Just like everything else, it's a time/space tradeoff because it generates less optimal code in exchange for a smaller binary size.


There's actually a pretty well-established design pattern of wrapping one of those mechanisms in a template class, giving you type-safety checks and such whilst still minimizing bloat.
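A hedged sketch of that pattern (sometimes called the thin-template idiom): a single untyped core does the real work, and a small template wrapper restores type safety without instantiating the heavy code once per type. Names are made up for illustration; this version assumes trivially copyable element types.

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>

    // Untyped core: compiled exactly once, no matter how many element
    // types the wrapper is used with.
    class UntypedStack {
    public:
        explicit UntypedStack(size_t elem_size) : elem_size_(elem_size) {}
        ~UntypedStack() { std::free(data_); }
        void push(const void *elem) {
            data_ = static_cast<char *>(std::realloc(data_, (count_ + 1) * elem_size_));
            std::memcpy(data_ + count_ * elem_size_, elem, elem_size_);
            ++count_;
        }
        void pop(void *out) {
            --count_;
            std::memcpy(out, data_ + count_ * elem_size_, elem_size_);
        }
    private:
        char  *data_  = nullptr;
        size_t count_ = 0;
        size_t elem_size_;
    };

    // Thin typed wrapper: each instantiation is just a handful of trivially
    // inlinable forwarding calls, so the per-type code cost is close to zero.
    template <typename T>
    class Stack {
    public:
        Stack() : impl_(sizeof(T)) {}
        void push(const T &v) { impl_.push(&v); }
        T pop() { T v; impl_.pop(&v); return v; }
    private:
        UntypedStack impl_;
    };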


It would probably expose too much of the implementation. Instantiated methods would also live within the downstream libraries, making shared-library linking fragile and un-upgradeable. And you could create your own specializations, potentially circumventing safety/security features.


I was pretty sure it was because getting people to write drivers in Objective-C for DriverKit was never going to have mass appeal, so better to make it C++.

http://unix.superglobalmegacorp.com/cgi-bin/cvsweb.cgi/objc/...


Yep. Everyone writing drivers for Windows/classic Mac OS/UNIX was doing it in C. And Apple wasn't particularly confident that ObjC would appeal even to application programmers (hence the Cocoa-Java push, and the never-finished ObjC "modern" syntax).


They took out templates, inheritance, exceptions? What’s left to make it not just C?


Plenty: classes (including constructors/destructors, virtual methods, etc.), operator overloading, patterns like RAII, ...
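For instance, a hedged sketch of the kind of thing that still pays off without templates or exceptions: a RAII lock guard built from nothing but a class with a constructor, a destructor, and deleted copies. The spinlock type here is a made-up stand-in, not a libkern API.

    struct spinlock_t {
        void lock()   { /* acquire, e.g. spin on an atomic flag */ }
        void unlock() { /* release */ }
    };

    class SpinlockGuard {
    public:
        explicit SpinlockGuard(spinlock_t &l) : lock_(l) { lock_.lock(); }
        ~SpinlockGuard() { lock_.unlock(); }   // released on every exit path
        SpinlockGuard(const SpinlockGuard &) = delete;
        SpinlockGuard &operator=(const SpinlockGuard &) = delete;
    private:
        spinlock_t &lock_;
    };

    spinlock_t g_lock;

    int do_work() {
        SpinlockGuard guard(g_lock);   // unlocked automatically on return
        return 0;
    }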


Stronger type safety, fewer implicit conversions.


> fewer implicit conversions.

Surely C++ has way more implicit conversions than C, what with having all of C's plus constructors defaulting to converting?


Only if developers are too lazy to use explicit.

I was referring to C implicit conversions that aren't valid in C++ code, like void* to other pointer types.
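A hedged illustration of both points: explicit suppresses converting constructors, and the void* conversion that is legal C gets rejected by C++.

    #include <cstddef>
    #include <cstdlib>

    struct Buffer {
        explicit Buffer(std::size_t n) : size(n) {}
        std::size_t size;
    };

    int main() {
        // Buffer b = 128;            // error: the constructor is explicit
        Buffer b(128);

        void *raw = std::malloc(sizeof(int));
        // int *p = raw;              // valid C, ill-formed C++
        int *p = static_cast<int *>(raw);
        *p = static_cast<int>(b.size);
        std::free(raw);
        return *p == 128 ? 0 : 1;
    }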


The good old times when Apple let devs fiddle with their OS.

These APIs are already deemphasized, so I wouldn't be surprised if they were to deprecate/remove them altogether when they release the ARM version of macOS. They'll probably do it with the update that introduces UIKit on macOS (as Craig Federighi said at this year's WWDC) to divert attention. Sneaky bastards, but their stuff still sucks the least ¯\_(ツ)_/¯


It's not sneaky just because you're not looking. I have found the CoreOS group to be guarded in their responses about future directions, but never sneaky. There have been multiple times where an API/design/etc. change in a WWDC session didn't make sense. I would follow up in the labs and be told the reason is that X is happening in the future. Granted, I'm not always told. In that case a diff of kernel sources between releases, along with some years working in the XNU kernel, is enough to figure it out. Sometimes you're just lucky and catch a hint of it in the actual session. For example, the 2013 WWDC session titled "What's New in Kext Development" (https://asciiwwdc.com/2013/sessions/707) leaked SIP well before it was officially announced. The key line was:

> So in the future, we are going to tighten down access to the system hierarchy, the whole hierarchy down from /System and everything in there.

Another example of them sharing future plans was user-space networking. I forget what year it was, but in the session they noted something about network kernel extensions (NKEs) going away and to use Network Extensions instead. NKEs weren't the best, but for Apple to spend all that effort to recreate the 'same' thing in a new framework was odd. A visit to the labs and you were instantly told of the move to user-space networking.

One last example. Apple ships in the default OS a number of third-party mass storage kernel drivers. Take a look at /Library/Extensions on a new install. This ensures that when you try to install or boot that new OS, you can see your drives. Apple likely needs to work with those third parties to make that happen.

I understand why it might appear sneaky but I don't think that's the case.


Yep, there are a couple of interviews where Chris Lattner states that Objective-C 2.0 and later improvements were already part of a long-term roadmap to what would eventually become Swift.


You can still fiddle with macOS. Though it's getting harder, with SIP and whatnot.

Still, disturbing that the URL includes "archive".

OTOH, they recently open sourced the iOS flip side of what's open on macOS. So who knows.


You can load kernel extensions with SIP enabled: the extension just needs to be “soft-approved” (i.e. you need to be in the developer program and explain to Apple why you need a kernel extension; no stringent requirements like the App Store review process, obviously). With SIP disabled you can load extensions that have not gone through this process.


Certain devices are still going to need loadable kernel modules to be supported.

For example, USB-to-serial devices, custom media devices, and more. I really don't expect kernel modules to go away.


Exactly right. Often for devices, but also for software (usually enterprise). Here is a list of kernel extensions compiled by the macadmins community: https://docs.google.com/spreadsheets/d/1IWrbE8xiau4rU2mtXYji...


I think kernel modules will go away at some point. Having no third-party kexts would increase the security of the OS for systems in use. That's a nice way of saying not all third-party kexts are created equal.

I could see an argument where moving existing hardware kexts to user space is easier because IOKit uses the libkern C++ runtime. The OO design of IOKit may lend itself very nicely to the driver approach BarrelFish takes (http://www.barrelfish.org). The really hard one to move to user space would be third-party filesystems, mainly because of the dated VFS architecture used in *NIX systems. I could see Apple completely moving away from that at some future point too.


I found Serial[1] a while ago: a decent terminal application that does not require any drivers / kernel modules to support USB-to-serial devices.

[1] https://www.decisivetactics.com/products/serial/


I wonder how that works?


macOS comes with very good USB support in user-space that works without any drivers.

https://developer.apple.com/documentation/iokit/iousbinterfa...


libusb or a similar userspace USB construct? You don't need anything privileged to write a USB device driver.
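For example, a hedged sketch using libusb-1.0 from user space; the include path, vendor/product IDs, and endpoint are placeholders that depend on your install and device.

    #include <libusb-1.0/libusb.h>   // may be <libusb.h> depending on install
    #include <cstdio>

    int main() {
        libusb_context *ctx = nullptr;
        if (libusb_init(&ctx) != 0) return 1;

        // Placeholder VID/PID for some hypothetical USB-to-serial adapter.
        libusb_device_handle *dev = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (dev && libusb_claim_interface(dev, 0) == 0) {
            unsigned char buf[64];
            int transferred = 0;
            // Bulk read from endpoint 0x81 (IN), 1-second timeout.
            if (libusb_bulk_transfer(dev, 0x81, buf, sizeof buf, &transferred, 1000) == 0)
                std::printf("read %d bytes\n", transferred);
            libusb_release_interface(dev, 0);
        }
        if (dev) libusb_close(dev);
        libusb_exit(ctx);
        return 0;
    }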


How many USB<->serial adaptors are there out there? Since 10.9 (IIRC), they've included their own FTDI driver.

I sure hope you're right though. It hasn't happened yet thankfully!


It's what they did with the release of OS X 10.0, when they dumped the Objective-C runtime/DriverKit for the C++ runtime/IOKit.

I mean, who needs old existing drivers anyways on a new platform?


The UIKit on macOS API is supposed to be coming to developers next year. I don’t see ARM Macs or the deprecation of an API like this with no warning happening by then.


Ideology disguised as logic. Basically NIH. Apple engineers could have made C++ RTTI, and other native features, suit their requirements. Instead they created a bastardized dialect of C++ which requires new domain knowledge to use effectively.


I believe the dialect of C++ that Apple uses for IOKit is based on a dialect called Embedded C++, which was created as an industry standard: https://en.wikipedia.org/wiki/Embedded_C%2B%2B


Banning the use of namespaces and templates for embedded programming seems arbitrary at best, since they are both purely compile-time features.


IOKit predates anything resembling wide adoption of those C++ features. Exceptions are a particularly thorny subject when mixed with multithreading and a (then non-existent) memory model. Not to mention RTTI, which was always just a patch job.


I think choosing to exclude exceptions is a totally fine decision, and I actually think it's the right thing to do in embedded systems, where you _can't_ afford to let an uncaught exception bubble up.


My guess would be: banning namespaces and templates greatly simplifies name resolution and so on. For instance, look at https://en.cppreference.com/w/cpp/language/lookup and then remove namespaces and templates from the picture.

Or maybe they just wanted developers to be "less creative".

-ss


I don't know what their reasoning was, but it might not have been a technical one.

Embedded development positions are not your typical developer positions; they require more intimate knowledge of the system, of possible states and transitions, and of hardware internals. This results in less focus on actual coding skills.

Even if you are highly skilled, you won't be doing your company a favor if you're writing code that only 5% of people can understand and contribute to. My understanding is that every team that writes C++ will restrict it to some subset to make the codebase more manageable.

OTOH I think namespaces can only lead to better and more modular architecture; I'm not sure in which cases they should be avoided.


That would make sense if it weren't for the fact that many embedded systems are programmed in assembly language, which is not the easiest thing to follow for people not acquainted with code.


I haven't heard of any professional project using assembly in the last couple of years. Maybe bits and pieces, like startup code to set up the C/C++ environment: things that a single person can write and maintain. It doesn't work at all for large teams.


If you're a hardware guy, assembly is a lot more intelligible than many of the things you can do in C++...


A reason for banning namespaces is that they didn't want to add namespace support to their stripped-down RTTI solution, and a reason for banning templates would likely be code-size concerns.

I mean - I'm just guessing here, but I can at least see some technical reasons for these decisions.


I was talking about their custom runtime system that provides them type information, as well as generic container types and other features.



