It's not nuts to run servers. If an application operates at any scale where there is a nonstop stream of requests, it will be cheaper, faster, and more energy-efficient to run a hot server. This follows from thermodynamics. No matter how good the cloud vendor's serverless offering is, it will always be less efficient than a dedicated server, unless it skips setup and teardown entirely (i.e. is no longer serverless).
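To make the amortization argument concrete, here's a rough sketch. Every number below is an illustrative placeholder, not real vendor pricing or a real workload; the point is the shape of the curves, not the absolute values.

```python
# Back-of-envelope sketch of the amortization argument above.
# All figures are hypothetical placeholders for illustration only.

SETUP_COST_PER_INVOCATION = 0.5   # hypothetical: ms of CPU spent on per-invocation setup/teardown
WORK_PER_REQUEST = 2.0            # hypothetical: ms of CPU doing the actual work
SERVER_IDLE_COST_PER_SEC = 50.0   # hypothetical: ms of CPU-equivalent burned keeping a server hot

def serverless_cpu_ms(requests_per_sec: float) -> float:
    """CPU spent per second when every request pays setup/teardown overhead."""
    return requests_per_sec * (SETUP_COST_PER_INVOCATION + WORK_PER_REQUEST)

def hot_server_cpu_ms(requests_per_sec: float) -> float:
    """CPU spent per second when a hot server amortizes its fixed idle cost."""
    return SERVER_IDLE_COST_PER_SEC + requests_per_sec * WORK_PER_REQUEST

for rps in (1, 10, 100, 1000):
    print(rps, serverless_cpu_ms(rps), hot_server_cpu_ms(rps))
# Below the break-even point (here 100 rps) serverless is cheaper; past it,
# the hot server wins and keeps winning. The "nonstop stream of requests"
# case is the right-hand side of that curve.
```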
It is nuts to run just one server; then you're wasting money on a mostly idle server/VM. That's what serverless is ideal for: stuff no one uses. That's a real niche. Who's going to use that? Not profitable companies.
I often think that in most cases where you reach for serverless, you should reconsider the choice of a client-server architecture in the first place. An AWS Lambda isn't really a server anymore; it isn't "listening" for anything. Why can't the "client" do whatever the Lambda/RPC is doing?
Maybe what you want is just a convenient way to upload code and have it "just work" without thinking about system administration. The types of problems where you don't care about the OS are, once again, a niche. You probably don't even need new software for these kinds of things. You can just use SaaS products like WordPress, Shopify, etc.
Serverless won't be profitable because the people who need it don't make money.
> Serverless won't be profitable because the people who need it don't make money.
You seem to be implying that only applications with huge numbers of users can be profitable. That ignores a tremendous number of (typically B2B) applications that provide enormous value for their users but don't see a lot of traffic.
I have worked on applications that are at the core of profitable businesses, yet they can go days, in some cases weeks, without any usage. Serverless architecture will be a real benefit there once it matures.
I don't think that's necessarily true. Google Cloud Functions gives you 2 million invocations free a month; that's almost 1 per second. You can keep adding another 2 million for $0.40 at a time. It's not terrible.
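Running the numbers from that comment as stated (2 million free invocations a month, $0.40 per additional 2 million), treating those figures as the commenter's and ignoring compute/memory charges, which are billed separately:

```python
# Quick check of the free-tier arithmetic in the comment above.
# Figures are taken from the comment, not from official pricing pages,
# and cover invocation fees only (no compute, memory, or egress charges).

FREE_INVOCATIONS_PER_MONTH = 2_000_000
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.6M seconds

free_rate = FREE_INVOCATIONS_PER_MONTH / SECONDS_PER_MONTH
print(f"free tier sustains ~{free_rate:.2f} invocations/second")  # ~0.77

# Hypothetical steady load of 5 requests/second, under the same assumptions:
monthly_invocations = 5 * SECONDS_PER_MONTH
extra = max(0, monthly_invocations - FREE_INVOCATIONS_PER_MONTH)
cost = (extra / 2_000_000) * 0.40
print(f"{monthly_invocations:,} invocations -> ${cost:.2f}/month in invocation fees")
```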
I agree with the suggestion about when to use a server, but I think presenting it as an obvious physical law goes a bit too far.
Serverless runtimes can be massively multi-tenant, and in cases like Cloudflare they have very little overhead per tenant, so they can pool excess capacity for spikes, capacity you would otherwise have to build into your own server. That gives them a real shot at beating a dedicated server on efficiency. Maybe they will, maybe they won't, but I don't think that's the argument that matters.
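A toy simulation of the pooling effect being described, with an entirely made-up traffic model: many bursty tenants pooled together need far less peak capacity than the sum of their individual peaks.

```python
# Illustration of capacity pooling across bursty tenants.
# The traffic model and all numbers are invented for illustration only.
import random

random.seed(0)
TENANTS, HOURS = 1000, 24 * 7

def bursty_load():
    # Each tenant is idle most of the time, with occasional spikes up to 100 req/s.
    return [random.choice([0, 0, 0, 0, 1, 2, 100]) for _ in range(HOURS)]

loads = [bursty_load() for _ in range(TENANTS)]

sum_of_peaks = sum(max(load) for load in loads)       # capacity if each tenant sizes its own server for its peak
peak_of_sum = max(sum(hour) for hour in zip(*loads))  # capacity if the platform pools all tenants

print(sum_of_peaks, peak_of_sum, peak_of_sum / sum_of_peaks)
# The pooled peak is a small fraction of the sum of individual peaks,
# which is the excess capacity the multi-tenant runtime gets to share.
```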