
Xen is a great technology, and the community has accomplished much in terms of features and performance improvements. But I doubt the long-term future is containers/Docker inside Xen guests, instead of just bare metal.

Reasons to pick bare-metal:

1. Price/performance. For many CPU-bound workloads the Xen overheads are negligible, but for I/O-bound workloads they add up: the network code path, especially at 10 GbE speeds, is sensitive to every wasted cycle. For large sites with thousands of instances, these overheads mean lower efficiency and more instances to run the same workload, which costs the business more.

Can a future Xen eliminate (or nearly eliminate) I/O overheads, so that their resource cost is close to bare metal? Put differently: will there be a future where all I/O paths in AWS EC2 access PCI passthrough, or some similar technology to bypass synchronous overheads? That would be great, but is it realistic? Bare metal provides this level of performance today.

2. Debuggability. Many environments are sensitive to latency outliers, and measure the 95th and higher latency percentiles. Root causing these is difficult in hardware-virtualized environments, where traces end at the hypercall boundary, but easier on bare metal. Even with root access on both the host and guests, tracing I/O between them can be painstaking work, as the host can't see and trace inside the guest. (Is Xen going to solve this problem? E.g., could I have a SystemTap script trace guest application-level I/O requests and associate them with host device-level I/O? I suppose it can be done, but will it?) On bare metal with containers, the host can see everything, and can trace I/O seamlessly from guest processes to physical devices. Currently.
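To make the tail-latency point concrete, here is a minimal sketch of the kind of percentile calculation those environments run over latency samples. This uses the nearest-rank method; the sample data is hypothetical, not from the article.

```python
def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample that is >= pct% of all samples."""
    ordered = sorted(samples)
    # nearest-rank index: ceil(pct/100 * n) - 1, via negated floor division
    k = max(0, -(-pct * len(ordered) // 100) - 1)
    return ordered[k]

# Hypothetical per-request latencies in milliseconds; a couple of outliers
# dominate the tail even though the median looks healthy.
latencies_ms = [1.2, 0.9, 1.1, 14.0, 1.0, 1.3, 0.8, 1.1, 25.0, 1.2]
p95 = percentile(latencies_ms, 95)
print(p95)  # 25.0 -- the outlier is exactly what p95 monitoring surfaces
```

The gap between the median (around 1 ms here) and the p95 is what makes root-causing outliers matter, and why trace visibility across the guest/host boundary is worth arguing about.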

3. Density/efficiency. Resources can be shared at a finer granularity with cgroups than is practical with virtualized resources such as vCPUs. Unless you are only running one big Xen guest per host.
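As a sketch of that finer granularity: the cgroup v2 `cpu.max` knob takes a "&lt;quota&gt; &lt;period&gt;" pair in microseconds, so a group can be capped at any fraction of a CPU rather than whole vCPUs. (Assumptions: cgroup v2 mounted at /sys/fs/cgroup, and a hypothetical group name "demo"; the helper function is mine, not a standard API.)

```python
def cpu_max_line(cpus, period_us=100_000):
    """Format a cgroup v2 cpu.max value granting `cpus` CPUs of bandwidth.

    quota/period is the fraction of one CPU the group may consume per period.
    """
    return f"{int(cpus * period_us)} {period_us}"

# Grant half a CPU -- far finer sharing than handing a guest whole vCPUs:
print(cpu_max_line(0.5))  # 50000 100000
# To apply (needs root):
#   echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max
```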

Xen (and KVM) are here to stay: there will always be a need to run different kernels for legacy applications. But for very large, performance-sensitive sites with thousands of instances, the future (though it may take a while to arrive) is likely bare metal + containers/Docker.

If you're a small site, or you care less about performance and debugging, then there may well be merit in the work this article covers.




> will there be a future where all I/O paths in AWS EC2 access PCI passthrough

That's a business question, not a technology question. It's possible today if I/O is priced for exclusive use by one workload, the same as a dedicated server; most AWS customers want shared-workload (low) pricing.

> Debuggability

In general, hypervisors provide more debug options than a single OS, since the host OS can introspect the guest OS, even to the point of recording and replaying guest OS execution, e.g. http://www.slideshare.net/mobile/xen_com_mgr/xentt-determini...

Real-time hypervisor scheduling is an ongoing research area, e.g. here is an automotive use case, http://blog.xen.org/index.php/2013/11/27/rt-xen-real-time-vi... & http://www.cse.wustl.edu/~lu/papers/emsoft11.pdf

> for very large sites with thousands of instances

Yes, containers work well for Google, which doesn't need to worry about isolation. Most businesses are not Google, nor are they running a real-time workload. With containers, SELinux provides a necessary layer of defense: http://opensource.com/business/14/7/docker-security-selinux & http://blog.docker.com/2014/07/new-dockercon-video-docker-se...


Have you considered using Ubuntu MAAS?


He worked for Joyent, so I'm sure he's considered automated bare-metal deployment tools quite a bit. ;-)

BTW, I tried MAAS recently and it seems to work fine, but I could not successfully customize the installed OS, which made it impossible to integrate into my environment.


I might not have been clear: I'd want bare metal as the basis for running containers/Docker. The Xen article described the value of using a Xen guest as that basis.

Ubuntu MAAS could be part of that, but I'd want to be managing containers, not metal.




