Zerg demo – Xen instance spawned for each web request (erlangonxen.org)
113 points by k33l0r on Feb 19, 2013 | 36 comments



This is really neat stuff. I don't understand, though, why LING is not open source already. If you are sincere in your stated intention of making it so, why wait until it is "mature"? Wouldn't it mature a lot faster if more people picked it up, tried to use it, and submitted defects and even patches?


I'm afraid only a few people are familiar with the internals of an Erlang VM, especially one that is totally different from BEAM. Thus, there's no reason to open-source it, at least not yet. That said, build.erlangonxen.org is open, contains the most recent stable version, and is free to use.


You know, the idea sure is impressive, but getting over-capacity errors doesn't show it well :)


On the contrary, I like it. They have finite resources and it answers immediately with the correct response.


This. With typical web applications, your request would wait until something times out and cascades back to the client (webserver, app server, db, etc.). This can mean waiting for 30 seconds or more depending on your configuration.

It's not an easy problem to solve elegantly, either.


It actually is quite easy if you set a finite limit on connections and show an appropriate message to those connections you're actively refusing. If you allow as many connections as your server/network/database can handle, it might take a bit longer to determine a failure status.
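A minimal sketch of that idea in Erlang, using a plain gen_tcp acceptor (the module and helper names are made up for illustration): connections over a fixed limit get an immediate "over capacity" reply instead of queuing until they time out.

    -module(cap_listener).
    -export([start/2]).

    %% Listen on Port and serve at most MaxConns requests concurrently.
    start(Port, MaxConns) ->
        {ok, LSock} = gen_tcp:listen(Port, [binary,
                                            {active, false},
                                            {reuseaddr, true}]),
        accept_loop(LSock, MaxConns, 0).

    accept_loop(LSock, Max, Active0) ->
        Active = drain_done(Active0),
        {ok, Sock} = gen_tcp:accept(LSock),
        case Active < Max of
            true ->
                Self = self(),
                spawn(fun() -> handle(Sock), Self ! done end),
                accept_loop(LSock, Max, Active + 1);
            false ->
                %% Refuse immediately with a clear message rather than
                %% letting the client wait for a timeout.
                gen_tcp:send(Sock, <<"HTTP/1.1 503 Service Unavailable\r\n",
                                     "Content-Length: 13\r\n\r\n",
                                     "Over capacity">>),
                gen_tcp:close(Sock),
                accept_loop(LSock, Max, Active)
        end.

    %% Collect 'done' notifications from finished handlers.
    drain_done(Active) ->
        receive
            done -> drain_done(Active - 1)
        after 0 ->
            Active
        end.

    %% Placeholder handler: a real one would read and answer the request.
    handle(Sock) ->
        gen_tcp:send(Sock, <<"HTTP/1.1 200 OK\r\n",
                             "Content-Length: 2\r\n\r\n",
                             "ok">>),
        gen_tcp:close(Sock).

The point is that the refusal path does almost no work, so rejected clients hear back right away instead of hanging.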


The demo of the VM+server startup time is interesting, but I'd be glad to know how they use Nginx to proxy the request twice to provision the VM+server.

Edit: I wonder if the numbers would be similar if the example was done with HaLVM (https://github.com/GaloisInc/HaLVM).


The application which handles the original request (called the 'spawner') asks Xen to start a new server, and when Xen reports back, it returns an 'X-Accel-Redirect' header to nginx, which then serves the redirected request.
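A minimal sketch of that flow in Erlang (the module name, the zerg_xen helper and the /vm/ path are made up for illustration; this is not the actual Zerg code):

    -module(spawner_sketch).
    -export([handle_request/1]).

    %% On each request: ask Xen to boot a fresh backend VM, then hand the
    %% request back to nginx with an X-Accel-Redirect header. nginx matches
    %% the returned path against an internal location and proxies the
    %% original request to the new VM.
    handle_request(_Req) ->
        %% Hypothetical helper that boots a LING image (e.g. through libvirt)
        %% and returns the new instance's address as a string.
        {ok, VmAddr} = zerg_xen:start_instance(),
        Headers = [{"X-Accel-Redirect", "/vm/" ++ VmAddr ++ "/"}],
        {200, Headers, <<>>}.

On the nginx side, the /vm/ prefix would map to an internal location that proxies to the freshly booted instance, so the client only ever sees the original URL.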


For people who are interested but unable to get a connection, here's a saved page: http://f.cl.ly/items/07293W450C3F0r472u1O/Zerg%20demo%20-%20...

I tweaked the background image URLs so they'd be absolute instead of relative, but otherwise it's unmodified.


Thanks. We used libvirt for this demo; unfortunately, it sets its own limits.


I'm seeing 1.5 to 1.6 secs. Is that due to my crappy internet connection or is that simply what it is?

Because 1.5 secs per request, and more for somewhat more complex operations, seems long to me. As an instance startup time it is impressive, though! But they are designed to die after each request, right? Or is that just this example?


I got 3 seconds.

But that isn't 3 seconds to serve the page. That's 3 seconds to basically build and boot the "server" on-demand, start all necessary daemons and serve the page.

Now think about what happened when you needed to add a Web server 10 years ago.


It is just an example to show how quickly they can provision a new VM+server. They are not advocating it as a good way to serve requests.


libvirt is the limit in this case...


Right, it's just an example, and right, it dies after each request - you can check the code on GitHub. Also, you can review the detailed breakdown of where the time is spent in the second half of the Zerg page.


One thing I haven't been able to wrap my head around is: why "on Xen"? Couldn't this be done "on QEMU" as well, or even 'more betterly'?


The current implementation runs in a Xen PV domain and uses Xen's hypercalls. There are no strong dependencies, though; LING can easily be ported to other hypervisors. Xen is the priority, since it's the platform for all major public clouds.


Can someone shine some light on why and how this is useful? What are some examples?


If you've ever had to scale capacity on AWS (or any other on-demand cloud service, really), you'd know why. On the page they say 300 seconds to bring up a Linux instance, but in reality it can take even longer from time to time.

When you need to scale in a hurry (or in this case semi-automatically), the ability to launch persistent or temporary VM instances in a much shorter timeframe is critical.

Granted (and as they note here) this is sort of demo-only, and probably not particularly useful out of the box. But as a demonstration of how fast things could work it's perfect. It doesn't hurt that it's functional.


Scalable MapReduce, for example; super-scalable web servers tolerant of any possible load spikes, including abuse-resistant hosting; personal virtual appliances... there are many possible use cases, indeed.


The folks at ZeroVM have a list of motivations[1] for this type of execution and hosting model. The only problem is that it's hard to tell how active the project is from their main page alone.

EDIT: Their GitHub commit history is fairly active for a solo developer[2].

[1] http://zerovm.org/motivation/

[2] https://github.com/zerovm/zerovm/commits/master


We are heads-down coding and haven't had time yet to update the ZeroVM website. If you want to contribute, please contact us now; if you want to use the ZeroVM hypervisor, stay tuned...


So what's being shown off here? Clearly it's impressive that a web server VM is loaded so quickly, but what's the key? Is it a carefully configured Xen VM? Or is it the use of Erlang?


This is an OS-less web server VM, basically a modified Erlang VM made to run directly on "almost" bare metal.


Actually, we didn't modify Ericsson's Erlang VM; we have written our own.


Impressive. Is that what Zerg is about? Where can I find more information about this?



Zerg is just a demo that shows how elastic and scalable OS-less VMs can be, even on current cloud infrastructure.


If you run Perl in CGI it creates a new VM for every single request. How is this different?


See http://erlangonxen.org/. Here "VM" means a VM in the sense of an AWS instance :)


CGI creates a new OS process for every request. Zerg creates a new Xen VM for every request.

These are very different things.


It keeps saying "Over capacity"; I want to know what this is about, though.


About being able to quickly scale to capacity. :)


Sorry, gents, we don't have a real datacenter there, just a single host, and libvirt sets its own limitations.


zerg rush is imba (get the joke? ha... sorry)


Borked.



