The following real-life scenario portrays an administrative API layer that runs on EC2, and how it compares to what the raw hardware costs would be on Lambda. It purposefully excludes anything to do with request handling (CloudFront + ELB for EC2 and API Gateway for Lambda).
EC2 scenario:
2 c3.large instances, general-purpose 15 GB SSD (need two for HA, spanning AZs)
1-yr all-upfront reserved pricing on both instances
Lambda scenario:
1024 MB memory size
Workload running on EC2:
16,000 requests a day to an endpoint, 200 ms average = 3,200,000 ms total
3,200,000 ms / 100 ms = 32,000 billable segments of 100 ms
EC2 Cost:
$1,084/yr / 365 = ~$2.97/day
Lambda Cost:
32,000 * $0.000001667 = $0.053344/day
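For anyone who wants to check the arithmetic, here is the same math as a small Node snippet (all of the figures come straight from the scenario above; $0.000001667 is the per-100 ms price at the 1024 MB tier):

    // Rough daily cost math for the scenario above (all figures from the comment itself).
    const requestsPerDay = 16000;
    const avgDurationMs = 200;

    // EC2: two c3.large instances, 1-yr all-upfront reserved = $1,084/yr total.
    const ec2PerDay = 1084 / 365; // ~$2.97/day

    // Lambda: billed in 100 ms increments; $0.000001667 per 100 ms at the 1024 MB tier.
    const billedSegments = (requestsPerDay * avgDurationMs) / 100; // 32,000
    const lambdaPerDay = billedSegments * 0.000001667;             // ~$0.053/day

    console.log('EC2: $' + ec2PerDay.toFixed(2) + '/day, Lambda: $' + lambdaPerDay.toFixed(4) + '/day');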
The cost difference is so large because this workload doesn't drive very high EC2 utilization. We need extra servers for multi-AZ HA and to handle bursts. Lambda handles both of these for you.
This also doesn't include the huge cost savings of having none of the server management (security patches, etc.) that goes into EC2 or Docker containers.
Using two c3.large instances to serve 16,000 requests a day (~11 requests a minute) is akin to using a muscle car to commute a quarter mile - you're vastly over-provisioned for such a simple workload.
This kind of workload could easily be handled by a pair of t2.micro instances. That would cut your costs for dedicated servers to around $0.40 a day (rough math below).
Plus, you could then take advantage of persistent in-process memory caches and warm services, helping to speed up your responses even further.
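For what it's worth, the math behind that ~$0.40/day figure is simple; the reserved price below is an assumption (roughly what a 1-yr all-upfront t2.micro reservation went for at the time), not a quote:

    // Sanity check on the ~$0.40/day figure. The RI price is an assumption,
    // roughly what a 1-yr all-upfront t2.micro reservation cost at the time.
    const t2MicroPerYear = 75;                    // assumed $/yr per instance
    const dailyCost = (2 * t2MicroPerYear) / 365; // two instances for multi-AZ HA
    console.log('$' + dailyCost.toFixed(2) + '/day'); // ~$0.41/day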
Using tools like Lambda come with a number of advantages, but you give up a lot of control at the same time. Perhaps this isn't a problem for most side projects, but for business critical software I'd be very leery of handing that much control over to someone as uncommunicative as Amazon.
"The cost difference is so large because this workload is not a very high EC2 utilization scenario. We need extra servers for multi AZ HA and to handle bursts."
The $0.40 cost per day I calculated includes multi-AZ HA, and your bursts would have to be fairly significant to overwhelm two t2 instances that are accumulating CPU credits 95% of the time: say around 600 requests per minute for a few hours (rough credit math sketched below).
Of course, this is all conjecture since I can't actually view your workloads, but we've been performing this exercise for our own services, and the t2 series of servers is remarkably capable, especially given its cost. The original administrators thought (or were led to believe) that we really needed the raw horsepower that c*.large+ instances offer... and in most cases they didn't.
Routers, CRUD API wrappers, even some disk persistence applications all qualify as "non-CPU intensive".
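For the curious, the back-of-the-envelope credit math behind that burst estimate looks roughly like this (a t2.micro earns 6 CPU credits an hour and banks up to 144; one credit is one vCPU-minute at full utilization):

    // Back-of-the-envelope t2.micro credit math for the burst estimate above.
    // t2.micro: earns 6 credits/hour, banks up to 144; 1 credit = 1 vCPU-minute at 100%.
    const burstReqPerMin = 600;
    const cpuSecPerReq = 0.2;   // 200 ms of CPU per request, same as the base workload
    const instances = 2;

    const burnPerMin = (burstReqPerMin * cpuSecPerReq) / 60 / instances; // 1.0 credit/min per instance
    const earnPerMin = 6 / 60;                                           // 0.1 credit/min per instance
    const minutesUntilEmpty = 144 / (burnPerMin - earnPerMin);

    console.log((minutesUntilEmpty / 60).toFixed(1) + ' hours of full burst from a full bank'); // ~2.7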
A neat idea, but the resultant vendor lock-in here worries me. I've heard horror stories about the effort required to move away from PaaS platforms like Heroku (I believe Genius is one such tale) due to architecture-specific components like jobs, but this seems to take that reliance to a whole new, all-inclusive level.
This might be neat for a quick weekend or hackathon project where you just want to Get Shit Done, but I can't imagine anyone committing fully to the platform without second thoughts.
An open architecture built on this sort of idea would be nifty, but from an ease-of-use perspective, tools like Docker have already reduced the sysadmin work for a lot of simple projects to something not too far from this anyway.
I understand this concern. This is where JAWS comes in. The best way to do Lambda development is to make AWS Lambda a thin wrapper around your own separate code, to keep that code reusable, testable, and AWS-independent.
JAWS generates scaffolding to encourage this for you. As a result, your code ends up looking just like a traditional application framework's code.
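To make that concrete, here's a minimal sketch of the separation (illustrative file and function names, not actual JAWS-generated scaffolding): the business logic lives in a plain Node module, and the Lambda handler is a few lines of glue around it.

    // lib/users.js -- plain, AWS-independent logic (illustrative, not JAWS output).
    exports.getUser = function (id, callback) {
      // ...fetch from whatever data store you like...
      callback(null, { id: id, name: 'example' });
    };

    // handler.js -- the Lambda function is just a thin wrapper around that module.
    var users = require('./lib/users');

    exports.handler = function (event, context) {
      users.getUser(event.id, function (err, user) {
        if (err) return context.fail(err);
        context.succeed(user);
      });
    };

The same lib/users module can be required from an Express route or a test runner just as easily, which is what keeps the code portable off of AWS.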
I just did a talk on JAWS @ Re:Invent to over 600 people. The line was out the door. Honestly, I didn't hear "I'm only going to use this for a hackathon project" once, except for now. Instead, all I heard was, "OMG we don't have to deal with servers!!! We will use this for everything!!!". And there were huge enterprise companies there.
I'm a Docker lover, but Lambda has a huge head start in many areas: super fast spin-up times, orchestration handled for you, the ability to containerize functions/endpoints rather than just applications, and pay-per-use pricing. All of this comes with Lambda out of the box.
Lambda functions are just small Node scripts wrapped up in a Docker container and then executed on a custom scheduler. I don't think it would be too difficult to port to another platform.
By "Serverless" we mean the developer does not have to think about servers. They exist, but Amazon manages them.
Instead, the developer deals only with Lambda, an event-driven compute resource. You upload your code and it runs when triggered, scaling horizontally and massively, out of the box.
The workflow rocks because it's endpoint/function isolation, not just application isolation. Every piece of logic is in its own container. Best of all, you only get charged when that code is run (!!!).