There's nothing stopping anyone from implementing a cloud that runs CGI or FastCGI that scales down to zero (with per second billing), and scales up to infinity.
It's just that nobody has chosen to do so.
Though I suppose not without reason. Google App Engine was one of the first "PaaS" offerings billed at a fine-grained level, and it WAS initially based on CGI. Later they changed it to in-process WSGI, probably because working around CGI startup time is difficult and fiddly, and FastCGI has its own flaws and complexity.
I think it would have been better if they had built on open standards like SCGI and FastCGI, though. It could have made App Engine a more appealing product.
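To make the CGI-vs-WSGI distinction concrete, here's a minimal sketch of each (the handler bodies and paths are illustrative, not App Engine's actual code). The CGI version pays a fresh interpreter start per request; the WSGI callable lives inside a long-running server process:

```python
#!/usr/bin/env python3
# CGI style: a new process per request. The request arrives via
# environment variables and stdin; the response goes to stdout.
import os

print("Content-Type: text/plain\r")
print("\r")
print(f"Hello from PID {os.getpid()}, path {os.environ.get('PATH_INFO', '/')}")


# WSGI style: the app is a long-lived callable inside the server
# process, so there's no per-request interpreter startup to hide.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    body = f"Hello, path {environ.get('PATH_INFO', '/')}"
    return [body.encode()]
```

The whole startup-time problem GAE worked around is the gap between those two models.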
If we're having fun with "everything old is new again": I remember that some classic hosts for things like "Guestbook" Perl CGI scripts charged per "impression" (view/action) in those days. That's not quite CPU time, but a close approximation, you'd hope, and the associated costs scaled down to zero whether or not their hosting tech actually did. (Also, some of them certainly didn't scale to "infinity", though they tried.)
You have to consider that AWS Lambda does have "cold start": if your code hasn't run for about 10 minutes, it isn't "hot" anymore, and the next request pays a latency penalty on startup. That time isn't billed, but it is latency, explained here [1]
Yes it's exactly like FastCGI ... if you make enough requests, then you have a warm process.
If you don't, then you may need to warm one up, and wait.
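The warm/cold lifecycle described above can be modeled in a few lines. This is a toy sketch, not Lambda's actual mechanism; the 10-minute idle window and the `Worker` class are assumptions for illustration:

```python
import time

IDLE_TIMEOUT = 600.0  # ~10 minutes, the idle window mentioned above (assumed figure)


class Worker:
    """Toy model of a serverless worker: expensive to create, cheap while warm."""
    def __init__(self):
        # A real platform pays container/interpreter startup here.
        self.created = time.monotonic()


warm_worker = None
last_used = None


def handle(request, now=None):
    """Serve a request, paying a 'cold start' if the worker went idle."""
    global warm_worker, last_used
    if now is None:
        now = time.monotonic()
    cold = warm_worker is None or (now - last_used) > IDLE_TIMEOUT
    if cold:
        warm_worker = Worker()  # the latency penalty lands on this request
    last_used = now
    return ("cold" if cold else "warm", request.upper())
```

A FastCGI process manager does essentially the same bookkeeping: keep a pool of warm processes while traffic flows, spawn (and wait) when it doesn't.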
So yeah I think AWS Lambda and all "serverless" clouds should have been based on an open standard.
But TBH FastCGI is not perfect, as I note in my blog post.
The real problem is that doing standards is harder than not doing them. It's easier to write a proprietary system.
And people aren't incentivized to do that work anymore. (Or really they never were -- the Internet was funded by the US government, and the web came out of CERN ... not out of tech companies)
The best we can get is something like a big tightly-coupled Docker thing, and then Red Hat re-implements it with podman.
> There's nothing stopping anyone from implementing a cloud that runs CGI or FastCGI that scales down to zero (with per second billing), and scales up to infinity. It's just that nobody has chosen to do so
I think in discussions like this it's often helpful to think of Lambda not as a technology but as a product. Lambda isn't "CGI with extra steps", it's "CGI with per second billing", which is a rarer offering.
>There's nothing stopping anyone from implementing a cloud that runs CGI or FastCGI that scales down to zero (with per second billing), and scales up to infinity.
>It's just that nobody has chosen to do so
I suspect they have. I'm sure there are a ton of different in-house implementations out there at various enterprises that are a minimal wrapper on AWS Lambda to turn an ALB request into a CGI request.
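Such a wrapper might look roughly like this. Everything here is a hypothetical sketch: the `./app.cgi` script name is made up, the event fields follow the ALB target-group event shape, and the header/status parsing is deliberately naive:

```python
import os
import subprocess


def alb_event_to_cgi_env(event):
    """Map an ALB target-group event into CGI meta-variables (RFC 3875)."""
    env = {
        "GATEWAY_INTERFACE": "CGI/1.1",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "REQUEST_METHOD": event["httpMethod"],
        "PATH_INFO": event["path"],
        "QUERY_STRING": "&".join(
            f"{k}={v}" for k, v in (event.get("queryStringParameters") or {}).items()
        ),
    }
    for name, value in (event.get("headers") or {}).items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    return env


def handler(event, context):
    """Hypothetical Lambda entry point: one CGI subprocess per request."""
    cgi_env = {**os.environ, **alb_event_to_cgi_env(event)}
    proc = subprocess.run(
        ["./app.cgi"],                  # hypothetical CGI script
        env=cgi_env,
        input=event.get("body") or "",
        capture_output=True,
        text=True,
    )
    # Naive split of CGI headers from body; real parsing needs more care.
    head, _, body = proc.stdout.replace("\r\n", "\n").partition("\n\n")
    headers = dict(line.split(": ", 1) for line in head.splitlines() if ": " in line)
    status = headers.pop("Status", "200 OK")
    return {"statusCode": int(status.split()[0]), "headers": headers, "body": body}
```

The irony is that the Lambda runtime is doing the warm-process management that FastCGI standardized decades ago, just behind a proprietary event format.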
Comments on Scripting, CGI, and FastCGI - https://www.oilshell.org/blog/2024/06/cgi.html