On the x86 side, Xen has two unique advantages over KVM in terms of security.
The first is the mature XenProject Security Response process [1]. All known Xen-related security issues, even DoSes, are announced and documented, so it's easy to find out when you need to patch your software. If you're a cloud provider, or make software with Xen as a component, you can be notified under embargo before the public announcement, so you can have your cloud patched / have a software patch tested and ready to download on the day the announcement goes public.
KVM doesn't have an equivalent process. Many high-profile KVM issues are disclosed under embargo on a mailing list, but 1) many are not, and 2) only distros are allowed to be on the list.
The second advantage on the x86 side is a set of additional defense-in-depth security measures, including driver domains and device model stub domains. Driver domains allow you to run device drivers in a completely separate VM; so (for instance) a privilege escalation in iptables would allow an attacker only to control the bridge and the network device, as opposed to being able to take over the whole system. Similarly, device model stubdomains run QEMU (or the emulator of your choice) in a separate VM; which means if there's a privilege escalation bug in QEMU, you've just broken into Yet Another VM; whereas in KVM you're now inside a Linux host process. KVM processes inside Linux can be restricted with things like sVirt, but it's fundamentally more difficult to isolate a process than a VM.
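To make that a bit more concrete, here's a rough sketch of what both of those look like in an xl guest config; the driver-domain name and the PCI address are placeholders I made up, not defaults:

    # Guest config (xl.cfg) fragment -- illustrative only.
    # Network backend served by a separate driver-domain VM ("net-drvdom",
    # which would have the physical NIC passed through in its own config,
    # e.g. pci = ['0000:03:00.0']) instead of by dom0:
    vif = [ 'bridge=xenbr0, backend=net-drvdom' ]

    # Run this HVM guest's QEMU in its own stub domain rather than as a
    # process in dom0 (requires the stubdom images to be built/installed):
    type = 'hvm'
    device_model_stubdomain_override = 1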
These are some of the reasons why QubesOS [2] and OpenXT [3] both rely on Xen.
On the embedded side, the distinguishing feature Xen has over KVM is that it's a microkernel-style hypervisor. This leads to a couple of advantages.
First, Xen itself boots in less than a second, and using the "dom0less" feature, can direct-boot any number of other domains from the same initrd [4]. This means that if none of your VMs are Linux, you don't need to run Linux at all -- you can boot up all of your VMs and have them up and running in hundreds of milliseconds; or, if you need a single VM up and running quickly, you can start that one along with dom0, and start your other ones from dom0.
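To give a rough idea of what that looks like in practice: dom0less domains are described to Xen as device tree nodes under /chosen, with their kernels loaded as boot modules. A minimal sketch (the node name, addresses and sizes here are made up):

    /* Fragment of the host device tree's /chosen node */
    domU1 {
        compatible = "xen,domain";
        #address-cells = <1>;
        #size-cells = <1>;
        memory = <0 0x40000>;   /* in KB, so 256 MB */
        cpus = <1>;

        /* Kernel image loaded by the bootloader at this address */
        module@42000000 {
            compatible = "multiboot,kernel", "multiboot,module";
            reg = <0x42000000 0x2000000>;
            bootargs = "console=ttyAMA0 root=/dev/ram0";
        };
        /* an optional second module node ("multiboot,ramdisk") carries the initrd */
    };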
Secondly, Xen is small enough to be safety certified. This is possible for a microkernel-style hypervisor like Xen, particularly with the "dom0less" direct-boot feature, in a way that would be impossible for KVM, since you'd have to certify not only enough of the Linux kernel, but also all of the userspace that has to run to start your other VMs.
This is why Xen has been making significant inroads into the embedded space. It's been put on rockets [5], and was chosen by ARM to be part of their Automotive Reference Platform [6].
If you just want to run the occasional VM on your x86 desktop, then KVM is likely to be a better bet: There won't be a significant performance difference, and it's less effort to set up.
But if you're making a product in which you want to embed virtualization, Xen has a lot of advantages. This, to me, is why the Xen port to the RPi is so interesting: 44% of RPi sales are for industrial use cases, and this port expands the market both for Xen and RPi.
Obviously there are lots of other strengths and weaknesses, but that should give you an idea.
While you make good points about the potential defense-in-depth aspects of Xen, the reality is that features like stub domains and dom0less have been very experimental and aren't enabled by default or used in production. They were first introduced about 10 years ago and are still tricky to set up. Porting and testing device drivers in stubdomains is also hard (from my own experience).
sVirt for KVM, on the other hand, is enabled by default on Red Hat-like distros, providing a reasonable MAC policy and a locked-down device model without any requirement for the admin to do anything. I like Xen's design, but I can't help but feel that safer defaults trump theoretical configurations.
Did you mean "stub domains and driver domains"? I'm pretty sure dom0less was used in production before it even made it into the main Xen tree. :-)
As I said, using Xen from a plain vanilla distro is more difficult to set up. There are a couple of reasons for this; one of them being simply that Red Hat's main product is itself a distro, so the work they do to set up their own product translates directly into making it easy to set up within other distros.
The companies with products shipping Xen, on the other hand, primarily ship fully-integrated products. Citrix Hypervisor (formerly XenServer) and XCP-ng are "virtualization appliances" in which everything is integrated. OpenXT and QubesOS are the same way. Citrix Hypervisor / XCP-ng don't use driver domains, but they do have custom QEMU deprivileging for device models. OpenXT and QubesOS do use driver domains (as I understand it). If you install one of these, you get a secure setup by default.
Fundamentally, none of these organizations would benefit directly from making it easier to set up driver domains on plain vanilla distros; and so making it easier to use driver domains on a plain vanilla distro never gets to the top of their engineers' priority queue. If you're interested in using Xen on a server fleet, I would definitely recommend going with XCP-ng or Citrix Hypervisor; if you want a secure desktop, definitely go with QubesOS or OpenXT. On the other hand, if having your fleet / desktop based on a vanilla Linux distro is a priority for you, then KVM might be a better bet at the moment.
Thanks for the interesting reply. These are fair points, and I hadn't considered the Citrix products. If you read this, then I do have another question: is there a way to disable PV guest support in Xen? IIRC this code represented a good portion of Xen's CVE list. It is an attack surface that we no longer need on modern CPUs. In theory you can just never run a PV guest, but the code would still be present?
TLDR: Yes, you can disable PV entirely on x86 systems; on ARM systems, there never was PV.
x86, as you say, has "classic" Xen PV, which doesn't require virtualization extensions. It also has "HVM", which includes full system emulation (motherboard, etc.); but also "PVH", which is basically what PV would be if it were designed today: it takes advantage of hardware support when that makes sense, and paravirtualizes when that makes sense. It doesn't require a devicemodel to be running at all, but also isn't susceptible to the PV XSAs. There's also a mode called "shim" mode, which allows you to run "classic PV" kernels in PVH mode.
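In xl config terms, the mode is just a per-guest setting (fragment only; the pvshim lines are from memory, so double-check the names against xl.cfg(5)):

    # Pick the guest mode per domain:
    type = 'pvh'     # or 'hvm', or (if still built in) 'pv'

    # Roughly how you'd run an existing classic-PV kernel under the shim:
    # type = 'pv'
    # pvshim = 1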
Xen now has Kconfig support, and you can disable PV mode entirely when building Xen. You can run dom0 in PVH mode, and then run all of your guests in either PVH or HVM mode.
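A sketch of the two knobs involved, assuming a reasonably recent Xen (option names are from memory; check your tree, as some of these have at times been hidden behind EXPERT):

    # Build-time: in xen/.config (e.g. via `make -C xen menuconfig`),
    # PV support compiled out looks like this:
    # CONFIG_PV is not set

    # Boot-time: run dom0 itself as a PVH guest via the Xen command line:
    #   dom0=pvh dom0_mem=2048M,max:2048M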
The ARM port was made after ARM had virtualization extensions; so it never had a "classic PV"; nor does it require any devicemodel whatsoever. There's only one guest mode, which corresponds roughly to the "PVH" mode above.
[1] https://xenproject.org/developers/security-policy/
[2] https://www.qubes-os.org/
[3] https://openxt.org/
[4] https://xenproject.org/2019/12/16/true-static-partitioning-w...
[5] https://www.embedded-computing.com/guest-blogs/the-final-fro...
[6] https://www.youtube.com/watch?v=boh4nqPAk50