This is really neat stuff. I don't understand, though, why LING isn't open-source already. If you sincerely intend to open it up, why wait until it is "mature"? Wouldn't it mature a lot faster if more people picked it up, tried to use it, and submitted defects and even patches?
I'm afraid only a few people are familiar with the internals of the Erlang VM, especially when that VM is totally different from BEAM. Thus, there's no reason to open-source it, at least not yet. That said, build.erlangonxen.org is open, contains the most recent stable version, and is free to use.
This. With typical web applications, your request would wait until something times out and cascades back to the client (webserver, app server, db, etc.). This can mean waiting for 30 seconds or more depending on your configuration.
It actually is quite easy if you set a finite limit on connections and show an appropriate message to the connections you're actively refusing. If you allow as many connections as your server/network/database can handle, it may take a bit longer to determine a failure status.
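As a rough sketch of that idea (the class, limits, and messages here are all made up for illustration, not taken from any of the projects discussed), a server can track active connections against a hard cap and refuse the excess immediately with an explicit status, instead of letting clients hang until a timeout cascades back:

```python
import threading

class ConnectionGate:
    """Bounded gate: admits at most max_connections concurrent requests."""

    def __init__(self, max_connections):
        self.max_connections = max_connections
        self._active = 0
        self._lock = threading.Lock()

    def try_acquire(self):
        """Take a slot if one is free; return False when at capacity."""
        with self._lock:
            if self._active >= self.max_connections:
                return False
            self._active += 1
            return True

    def release(self):
        with self._lock:
            self._active -= 1


def handle_request(gate):
    # Fail fast with a clear message rather than queueing indefinitely.
    if not gate.try_acquire():
        return 503, "Server at capacity, please retry shortly"
    try:
        return 200, "OK"  # real request handling would go here
    finally:
        gate.release()
```

The point is that a refused client learns its fate in milliseconds, while an unbounded server only discovers failure once something downstream times out.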
The demo of the vm+server startup time is interesting, but I'd like to know how they use Nginx to proxy the request twice to provision the vm+server.
The application which handles the original request (called the 'spawner') asks Xen to start a new server, and when Xen reports back, it returns an 'X-Accel-Redirect' header to nginx, which nginx then serves.
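In nginx terms that flow might look roughly like the config below. This is a hypothetical sketch: the location names, ports, and redirect path are invented for illustration, not taken from the actual zerg setup.

```nginx
server {
    listen 80;

    # 1) The original request hits the 'spawner' app, which asks Xen
    #    to boot a fresh instance and waits for Xen to report back.
    location / {
        proxy_pass http://127.0.0.1:8000;
    }

    # 2) The spawner then replies with a header such as
    #    "X-Accel-Redirect: /instance/", and nginx re-dispatches the
    #    same client request internally to the freshly booted server.
    location /instance/ {
        internal;  # reachable only via X-Accel-Redirect, not directly
        proxy_pass http://127.0.0.1:8001/;
    }
}
```

The `internal` directive is what makes the second hop invisible to the client: from the browser's point of view it's a single request, even though nginx proxied it twice.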
I'm seeing 1.5 to 1.6 secs. Is that due to my crappy internet connection or is that simply what it is?
Because 1.5 secs per request, and more for somewhat more complex operations, seems long to me. As instance startup time, though, it's impressive! But they are designed to die after each request, right? Or is that just this example?
But that isn't 3 seconds to serve the page. That's 3 seconds to basically build and boot the "server" on-demand, start all necessary daemons and serve the page.
Now think about what happened when you needed to add a Web server 10 years ago.
Right, it's just an example, and right, it dies after each request; you can check the code on GitHub. You can also review the detailed breakdown of where the time goes in the second half of zerg's page.
The current implementation runs in a Xen PV domain and uses Xen's hypercalls. There are no strong dependencies, though; LING can easily be ported to other hypervisors. Xen is the priority, since it's the platform for all major public clouds.
If you've ever had to scale capacity on AWS (or any other on-demand cloud service, really), you'd know. On the page they say 300 seconds to bring up a Linux instance, but in reality that can take even longer from time to time.
When you need to scale in a hurry (or in this case semi-automatically), the ability to launch persistent or temporary VM instances in a much shorter timeframe is critical.
Granted (and as they note here) this is sort of demo-only, and probably not particularly useful out of the box. But as a demonstration of how fast things could work it's perfect. It doesn't hurt that it's functional.
Scalable MapReduce, for example, or super-scalable web servers tolerant of any possible load spikes, including abuse-resistant hosting, personal virtual appliances... there are many possible use cases, indeed.
The folks at ZeroVM have a list of motivations[1] for this type of execution and hosting model. The only problem is that it's hard to tell how active the project is from their main page alone.
EDIT: Their github commit is fairly active for a solo developer[2].
We are heads down coding and haven't had time yet to update the ZeroVM website. If you want to contribute, please contact us now; if you want to use the ZeroVM hypervisor, stay tuned...
So what's being shown off here? Clearly it's impressive that a web server VM is loaded so quickly, but what's the key? Is it a carefully configured Xen VM? Or is it the use of Erlang?