
You write a long wall of text that does nothing to dispute what I wrote.

The cheapest DO droplet is 4 dollars a month. It is more than capable of running "10 scripts", and will last your startup with three users indefinitely.

If you're concerned about that cost, lambdas will not save you.




Maybe we write very different kinds of scripts. For my startup, such a "script" might be:

1. An hourly itemized invoicing batch job for incremental usage-based billing. It pulls billable users from an ERP DB, grabs their usage with a complex Snowflake query (= where our structured access logs get piped), then does a CQRS reduction over the credit-and-spend state from each user's existing invoice line-items for the month, turning the discovered usage into new line-items to insert back into the same ERP DB.

2. A Twilio customer-service IVR webhook backend that uses local speech-processing models to recognize keywords, so you don't have to punch numbers.

3. An image/SVG/video thumbnailer that gets called when a new source asset (from a web-scraping agent) is dropped into one bucket; and which writes the thumbnail for said asset out into another bucket. (For the SVG use-case especially, this requires spinning up an entire headless Chrome context per image, mostly in order to get the fonts looking right. We actually handle that part of the pipeline with serverless containers, not serverless functions, because it needs a custom runtime environment.)
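To make #3 concrete, here's a minimal sketch of the trigger glue. The bucket names, the event shape (modeled on S3-style notifications), and the key-mapping are invented for illustration; the actual SVG rendering runs headless Chrome inside a serverless container, as noted above.

```python
# Sketch of the bucket-triggered thumbnailer (#3). Bucket names and the
# event payload shape are hypothetical stand-ins, not our real config.
import posixpath

SOURCE_BUCKET = "scraped-assets"    # where the scraping agent drops files
THUMB_BUCKET = "asset-thumbnails"   # where thumbnails get written

def thumbnail_key(source_key: str) -> str:
    """Map a source object key to its thumbnail key (always .png here)."""
    stem, _ext = posixpath.splitext(source_key)
    return f"{stem}.thumb.png"

def handle_event(event: dict) -> list[tuple[str, str]]:
    """For each new object in the event, decide where its thumbnail goes.

    Returns (source_key, thumbnail_key) pairs; a real handler would then
    fetch each object, render it, and upload the result to THUMB_BUCKET.
    """
    jobs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if bucket != SOURCE_BUCKET:
            continue  # ignore writes elsewhere (e.g. our own output bucket)
        jobs.append((key, thumbnail_key(key)))
    return jobs
```

The point of the two-bucket split is visible even in the sketch: filtering on the source bucket prevents the function from re-triggering on its own output.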

#1 is an example of a low-utilization "script" where we just don't want to pay for the infra required to run it when it's not running, since when it is running it "wants" better-than-cheapest resourcing to finish in a reasonable time (which, if you were deploying it to a VM, would mean paying for a more expensive VM, to sit mostly idle.) We do have a k8s cluster — and this was originally a CronJob resource on that, which made sense at first — but this is an example of a "grows over time" workload, and we're trying to avoid using k8s for that sort of workload, because k8s expects workloads to have fixed resource quotas per pod, and can't cope well with growing workloads without a lot of node-pool finessing.
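For a feel of what #1's credit-and-spend reduction means, here's a toy sketch. The line-item kinds, field names, and credit semantics are invented for illustration and are not our actual ERP schema:

```python
# Toy sketch of the #1 credit-and-spend reduction. The LineItem shape
# and the credit model are assumptions, not the real ERP schema.
from dataclasses import dataclass

@dataclass
class LineItem:
    kind: str      # "credit_grant" or "usage_charge" (assumed kinds)
    amount: float  # credits granted, or usage cost charged

def remaining_credit(month_items: list[LineItem]) -> float:
    """Reduce the month's existing line-items down to one number: unused credit."""
    granted = sum(i.amount for i in month_items if i.kind == "credit_grant")
    spent = sum(i.amount for i in month_items if i.kind == "usage_charge")
    return max(granted - spent, 0.0)

def bill_usage(month_items: list[LineItem], usage_cost: float) -> tuple[LineItem, float]:
    """Turn newly discovered usage into a line-item to insert back into the DB.

    Returns the new item plus the billable portion — the part of this
    hour's usage not covered by the user's remaining credit.
    """
    credit = remaining_credit(month_items)
    billable = max(usage_cost - credit, 0.0)
    return LineItem("usage_charge", usage_cost), billable
```

The hourly job would run this per user: fold the user's existing line-items into a credit balance, then append the new item computed from the Snowflake usage query.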

#2 and #3 are high-utilization (one in CPU and RAM, one in requiring a GPU with VRAM) "scripts", where only one or two concurrent executions would fit on a $4/mo DO droplet; and where ten or twenty of them on a more vertically-scaled VM or machine would start to strain the network throughput of even a 2.5Gbps link. Many cheap-ish machines, each with their own resourcing (including their own network bandwidth), all instantaneously reserved and released at one machine per request, is a perfect match for the demand profile of these use-cases.


I still don't see the need for lambdas, or how the cost of running them on a DO droplet/Hetzner server would be so prohibitive that you'd be concerned about saving a few dollars a month.

Note: There's a reason I keep saying "You need 'omg distributed' perhaps after your millionth user, and even then it's highly debatable."


I think you misunderstood what I said above, because I wasn't talking about what you as "a person who wants to run scripts" needs; I was rather talking about what the team managing the infrastructure for these people needs. Serverless functions are great for the people managing a multitenant cloud-function compute cluster, because the architecture is nearly-stateless and can be easily scaled.

But these properties matter not one bit to most users deploying serverless functions. Most users don't need a "distributed" function. The advantages users see come from the set of key service objectives that the IaaS's DevOps team can deliver on, because they're able to scale and maintain the service so easily/cheaply/well.

Think of a customer-employer-employee relationship. A plumbing company doesn't buy you-their-employee a company car because they want to depend directly on you having a car (e.g. by adding "chauffeuring" to your job duties.) They buy you a company car because it enables you to get to job sites faster and more reliably; to keep all your equipment securely in the vehicle, ready to go, rather than having to load/unload it from your regular family SUV when you get a call; to bring along more, heavier equipment that would be impossible to load up on short notice; etc. In short, it enables you to do your job better — which in turn enables the company to deliver the service they market to customers better, and probably cheaper.

Choosing a "CGI-bin server host" because you know it's built on a distributed substrate is like picking a plumbing company because you know all their employees roll out with nice, well-equipped company vans. The plumbing companies without those vans could still do the job... but the van, with a bevy of equipment all well-organized on wall-hooks and shelving units, makes the person who comes to your call-out better equipped to help you. Serverless-function (= distributed CGI-bin) hosts are, likewise, better equipped to host functions.

---

My key assumption — that I maybe left too implicit here? — is that in general, all else being equal, people who "just want to deploy something" (= don't need to glue a whole lot of components together into a web of architecture), should prefer using "managed shared-multitenant infrastructure" (i.e. paying for usage on "someone else's server", without any capacity-based resource reservations) over paying to reserve capacity in the form of a bare-metal machine, VM, or PaaS workload.

(Specifically, people should prefer to use standard FOSS frameworks that ship adapters for different IaaS solutions — e.g. https://www.serverless.com/ in the FaaS case — to enable the use of any arbitrary "managed shared-multitenant infrastructure", without vendor lock-in.)

Due to the many simultaneous economies of scale involved — in hardware, in automation, in architecture, in labor, etc. — "managed shared-multitenant infrastructure" almost always has these benefits for the user:

1. more reliable / lower maintenance

2. cheaper (often free for hobbyist-level usage!)

3. higher potential performance for the same price

For example, managing a few MBs of files through an Amazon S3 bucket is going to be more reliable, lower-maintenance, free(!), and more performant than managing those same files using a two-core, 2GB-RAM, single-100GB-HDD deployment of MinIO.

In this case, deploying a cloud function is going to be more reliable, lower maintenance, cheaper (probably free), and more performant than deploying the same script to a tiny VM with a tiny webserver running on it.

---

I should especially emphasize the "free" part.

There's a big mental barrier that hobbyists have where they're basically unwilling to pay any kind of monthly fee to run a project — because they have so many nascent projects, and they're unsure whether those projects will ever amount to anything useful, or will just sit there rotting while costing them money, like a membership to a gym you never go to.

Being able to deploy a pre-production thing, and have it cost nothing in the months you don't touch it, creates a dramatic worldview shift, where suddenly it's okay to have O(N) projects "on the go" (but idle) at once.

(If you don't understand this, try putting yourself in the position of asking: "do I want to start paying monthly for a VM to host my projects, or is it higher-value to use the same 'fun money' to pay for an additional streaming service?")

---

But also, from another perspective: these "trivial costs" of a few dollars a month add up, if for whatever reason you need to isolate workloads from one another.

For example, say you're a teenager trying to do web-dev gig work on Fiverr, charging a flat fee (i.e. no pass-through OpEx billing) to deliver a complete solution to clients. Each client wants to be able to deploy updates to their thing, and wants that to be secure. How do you deliver that for them? Read Linux sysadmin books until you can set up a secure multitenant shell and web server, effectively becoming your own little professional SRE team of one? Or just build each of their sites on something like Vercel or Netlify, and hand them the keys?

For another example, if you have personal projects that you don't want associated with your professional identity, then that'd be another VM, so another $4/mo to host those. If you have personal projects that are a bit "outré" that you don't even want associated with your other personal projects, then that'd be another $4/mo. If you do collaboration projects that you want behind a different security barrier, because you don't trust your collaborators on "your" VMs — another $4/mo per collab.

Why do this, when all these projects would collectively still be free-tier as functions?



