I'm not quite sure what you mean, but if you target the cloud as the platform (as is stated in the article), then you always have a virtualization layer, even if you provision on something like AWS Bare Metal Instances.
(The reason bare metal instances still use virtualization is not obvious: it's so a tenant can't reflash the firmware on devices to plant a persistent attack.)
I'm thinking of Amdahl's law: any effort to disaggregate the OS and kernel for performance is capped by the performance of the virtualization layer (about which I know too little to have any intuition).
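To put rough numbers on that worry, here's a back-of-envelope sketch (my own, not from the article; I'm treating the ~5% figure mentioned below as if it were a flat share of total runtime, which it isn't quite, but it gives the flavor):

    # Amdahl's law: if a fraction `fixed` of runtime sits in a layer we
    # can't improve (say the hypervisor), and everything else is sped up
    # by `speedup`, the overall gain is capped at 1 / fixed.
    def amdahl(fixed, speedup):
        return 1.0 / (fixed + (1.0 - fixed) / speedup)

    print(amdahl(0.05, 1e9))  # ~20x even with infinite speedup elsewhere
    print(amdahl(0.05, 10))   # ~6.9x with a more realistic 10x elsewhere

So the cap is real but quite far away unless the hypervisor's share of runtime is much larger than a few percent.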
Virtualization adds a constant overhead to various I/O operations, usually reckoned at 2-5%, and nothing to CPU-bound processes, since the CPU just executes userspace instructions as normal. For AWS the overhead is lower still, since they use a specialized partitioning hypervisor and a lot of custom hardware assistance, including paravirt I/O devices implemented directly in hardware. That small overhead is almost always a good trade-off for the convenience of virt / cloud: easy provisioning, live migration, hardware independence and so on. The DBOS decision to target only the cloud makes a lot of sense.
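Turning the same arithmetic around (again just a rough sketch with assumed numbers: a 50% I/O share and a 5% per-I/O overhead), dropping virtualization entirely would only buy you a few percent overall, which is why the trade-off looks so good:

    # Runtime model: CPU part unaffected by virt, I/O part inflated by the
    # per-I/O virtualization overhead. Normalized so bare metal = 1.0.
    def speedup_from_dropping_virt(io_fraction, virt_overhead):
        with_virt = (1.0 - io_fraction) + io_fraction * (1.0 + virt_overhead)
        return with_virt / 1.0

    print(speedup_from_dropping_virt(0.5, 0.05))  # ~1.025, i.e. about 2.5%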