
That's not true.

Unikernels are a way to do the same thing you do with your "process abstraction", but better: using fewer resources, minimizing the number of failure points, improving deployment speed, and minimizing dependencies, which also allows for faster iteration when developing software.

The reason they didn't catch on is that the overwhelming majority of programmers aren't inventors and aren't knowledgeable enough about the systems side of their environment -- they need tools, frameworks, bundles of documentation, and support forums to get anywhere.

Now, all of the aids listed above can be created either by a dedicated group of enthusiasts or by large corporations investing in such a process. Large corporations have little interest in the software qualities mentioned above because "good enough" containers already solve their business problems. So... we are left with enthusiasts, who are few and whose work isn't being integrated into the "mainstream" development process.

Will these enthusiasts eventually gain enough momentum to make it really attractive for the whole industry to switch, or will it be another IPv4 vs IPv6 story -- time will tell.




In what way do a hypervisor and unikernels use fewer resources, reduce the number of failure points, improve deployment speed, or reduce dependencies compared to an operating system (no hypervisor) and processes?


If you are comparing against a bare-metal system, indeed you might not see a lot of benefits.

However, there is a basic assumption that these workloads are running in the cloud, which means they are already virtualized; when you compare against an Ubuntu/Debian/etc. VM, that's where the benefits show up.


There are cloud systems that run your app in a Docker container on a Linux host with no hypervisor.


If there is no hypervisor involved, we would consider that bare metal regardless of how automated the provisioning process is (e.g. packet.net, hetzner).

However, if we use your example, then one is still managing the install process of the base system: updating it, patching it, securing it, managing application deployment on top, networking, etc.

At the end of the day, though, that is still a layer below.


And unikernels also have a layer below, because they are always run on hypervisors. Nobody is talking about true bare metal here, and most of the time you don't want to write your applications on true bare metal anyway. That's reserved for situations where you really need to squeeze 110% out of hardware that will never be upgraded. A lot of older console games were written on true bare metal for this reason. I remember that on the Nintendo DS you could set up a DMA controller to copy a command buffer from main memory into the graphics processor's command FIFO, and another one to transfer sound samples, and there was a whole set of registers to select which VRAM bank was allocated to which purpose, and whether the main graphics processor would drive the top or bottom screen, with the other screen getting the secondary graphics processor. But web app servers are not in this situation.


Let's make it more concrete and talk about Linux, instead of talking about hypervisors and OSes in the abstract.

So, Linux comes with a lot of legacy. Just take the bootloader, then the whole multi-stage boot with an initramfs... you don't need any of that in a unikernel designed to run on a known hypervisor. QEMU can boot a Linux VM skipping the bootloader, but not skipping the initramfs part. But, really, you don't need any of that. Even worse, having to debug problems that happen before the pivot to the real root filesystem is a huge pain.
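To make that concrete, here's a minimal sketch of QEMU's direct kernel boot (the kernel/initrd paths are placeholders, and this assumes qemu-system-x86_64 is installed) -- the bootloader is gone, but the initramfs is still along for the ride:

    import subprocess

    # QEMU loads the kernel image itself, so no bootloader (GRUB etc.)
    # runs inside the guest -- but an initramfs is still handed over,
    # which is the part you can't easily skip.
    subprocess.run([
        "qemu-system-x86_64",
        "-m", "512M",
        "-kernel", "bzImage",        # placeholder path
        "-initrd", "initrd.img",     # placeholder path
        "-append", "console=ttyS0 root=/dev/vda",
        "-nographic",
    ], check=True)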

Second, Linux keeps adding more modules directly into the kernel (rather than leaving them dynamically loaded). Not so long ago the kernel adopted RAID modules, for example. Similarly, iirc, the bond, VLAN and bridge interface modules are built into some/all (?) kernels today. In other words, the kernel keeps growing, and most of what gets added is not relevant to you. It becomes more Microsoft-Word-like, where no single feature works really well, but the sum total of all features beats every individual better-quality editor, especially if you don't really know what you are going to use it for.
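If you're curious which of these are compiled in versus built as modules on your own machine, the distribution kernel config makes it visible. A rough sketch (the config option names are the standard ones, but whether each is =y or =m varies by distro, and the /boot/config-* file is a distro convention):

    from pathlib import Path
    import platform

    # Kernel config most distributions ship alongside the kernel image.
    config = Path(f"/boot/config-{platform.release()}").read_text()

    # =y means compiled into the kernel image, =m means loadable module.
    for opt in ("CONFIG_BLK_DEV_MD",   # MD/RAID
                "CONFIG_BONDING",      # bonding
                "CONFIG_VLAN_8021Q",   # VLAN
                "CONFIG_BRIDGE"):      # bridging
        line = next((ln for ln in config.splitlines()
                     if ln.startswith(opt + "=")), opt + " is not set")
        print(line)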

Besides modules, Linux keeps adding configuration for its internal functionality: did you know you can configure the I/O scheduler to work in different modes? Do you even know what they are? I'm pretty sure the answer is "no" and "no", and, in all likelihood, for your application it would make no difference. Do you need multiple memory overcommit modes in your application? -- of course not. Do you need interfaces into userspace like udev, procfs, sysfs, etc. in your application? -- of course not, but these are an integral part of Linux.
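Both of those knobs are plainly visible in sysfs/procfs if you want to check; a quick sketch:

    from pathlib import Path

    # Available I/O schedulers per block device; the active one is
    # bracketed, e.g. "[mq-deadline] kyber bfq none".
    for sched in Path("/sys/block").glob("*/queue/scheduler"):
        print(sched.parent.parent.name, sched.read_text().strip())

    # Memory overcommit mode: 0 = heuristic, 1 = always, 2 = never.
    print("overcommit:",
          Path("/proc/sys/vm/overcommit_memory").read_text().strip())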

These and many other things not mentioned here are all potential points of failure. You get a disk with incompletely wiped RAID metadata, and while you may not even know you had RAID drivers, depending on a bootloader configuration you've never looked at, suddenly, during the initramfs stage, the RAID module kicks in and starts a RAID rebuild... now imagine the fun of dealing with that.

---

Resources: well, the drivers Linux loads take space, both on disk and in your memory. Most of the work Linux does to keep its own pieces running is completely irrelevant to you. If you ever catch a glimpse of your Linux booting, you can probably see it mention something about rfkill... no matter if you haven't -- I promise you, it's there. So, your Linux initialized some mechanism for... drumroll... Bluetooth? How cool is that? Right? Your server sitting in a datacenter still tries to set up Bluetooth for who knows what reason. Until not so long ago Linux had floppy drivers built into the kernel.
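Don't take my word for it -- the rfkill machinery shows up in sysfs on a typical install; a small sketch (the directory is simply empty if no radio hardware was detected, but the subsystem gets initialized regardless):

    from pathlib import Path

    # Each rfkill-capable radio (Bluetooth, Wi-Fi, ...) gets an entry here.
    for dev in sorted(Path("/sys/class/rfkill").glob("rfkill*")):
        name = (dev / "name").read_text().strip()
        rtype = (dev / "type").read_text().strip()
        print(dev.name, rtype, name)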

This stuff adds up. It takes memory. It makes it necessary for VMs to be created with some kind of input device. Did you know that the pointer device your VM typically gets on QEMU is a... tablet? The hacky reason: a tablet reports absolute coordinates, so the host doesn't have to grab your mouse. So, you also need tablet drivers. And, because you don't know what keyboard is going to be connected, you also need a bunch of keyboard layouts and so on.

Another thing... Linux is in constant flux. It's always between removing something old and adding something new, so you always have at least two ways of doing something, often more. This uses more space and more time to boot. E.g. /etc/fstab is, generally, obsolete, but its support isn't going anywhere. So, you can specify your mounts as systemd mount units, or you can rely on systemd to parse /etc/fstab and generate those units for you. And, no, you cannot prevent systemd from trying to read /etc/fstab, and you cannot delete the code responsible for reading and transforming it.
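You can actually watch systemd do this translation: at boot, systemd-fstab-generator converts each /etc/fstab line into a .mount unit under /run/systemd/generator. A sketch for inspecting the result:

    from pathlib import Path

    # Mount units synthesized from /etc/fstab by systemd-fstab-generator.
    for unit in sorted(Path("/run/systemd/generator").glob("*.mount")):
        print("==>", unit.name)
        print(unit.read_text())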

---

Development speed: Linux is a big project made up of many smaller ones, with a lot of internal dependencies. Suppose the kernel gains everything LUKS2 (that's disk encryption) needs, but cryptsetup, the userspace side of dm-crypt, falls behind schedule in providing an interface to it (this was actually the case in SLES). Well, all the cool kids can now use LUKS2, but you got a package deal, and you are stuck with LUKS1 because a package you probably don't even care about failed to make the release deadline.

In practice, today, if you use one of the most popular Linux distributions, you are many versions behind the latest stable kernel release -- and many versions behind the latest stable everything. This is because integration takes time, and the more there is to integrate, the longer it takes. Just as an example: the latest Ubuntu LTS ships kernel 5.15, while the latest stable kernel is 6.3.8 as of this writing. That's years of development.


But you're not going to implement all of this in your unikernel, are you? If you want to run your unikernel on RAID you're going to need a hypervisor with a RAID driver built in or loaded at runtime. If you want it encrypted you're going to ask the hypervisor to encrypt it. And how do you want to schedule I/O between different unikernels? And do you think the interfaces will remain stable for all time, or will there be transitional periods where two interfaces are supported at the same time? You see, all of this stuff is still needed.

This attempt to escape the fundamental requirements of operating systems by calling them something else reminds me of... well, nothing in particular, but a lot of projects started by wide-eyed visionaries. They say "we're going to solve the problems of X by making Y, which isn't X" and then end up re-creating a really shitty version of X, ignoring all the lessons learned from making X. Consider NoSQL, blockchain, the inner-platform effect, or Joel Spolsky's essay on rewrites.


Tying this back to your other comment: the notion is that all of this is done for you by the cloud, so you don't really need or want to do it yourself.

If you do wish to deal with this yourself, then yes, you need something, but that is not the goal here. The goal is to deploy something under the assumption that a provider (e.g. the cloud) is already doing this for you.


Setting up a hypervisor with all the operating facilities that you didn't build into your unikernel isn't easier than setting up an operating system.



