
This advantage:

“Serverless models don’t require users to maintain their own operating systems, or even to build applications that are compatible with particular OSs. Instead, developers can produce generic code, and then upload it to the serverless framework, and watch it run.”

... is utterly compelling and is why serverless will not just win, but will leave renting a server as a tiny niche market that few developers will have experience of after 2030.

Maintaining your own server is completely nuts. If that isn’t obvious now, it will be in another decade. It’s massively inefficient. Like running your own power plant to serve your factory, except you also have to worry about security and constant maintenance, along with all the moving parts that surround a server.

Almost all the objections in the article can be rephrased as “serverless is not mature enough yet”, and that’s accurate, but I suspect there’s also a bias against giving up control to the cloud companies, and some wishful thinking as a result.

The future of software development is going to be defined by cloud providers. They’re going to define the language ecosystem, the canonical architectures for apps etc... it’s just early days and cloud is really very primitive. Just clicking around Azure or GC or AWS illustrates how piecemeal everything is. But they have a lot to do, and just keeping pace with growth is probably hard enough. I’m not sure I’m super happy with this outcome, but I’m pretty certain the trend line is unmissable.




It's not clear to me how much experience with serverless architectures the author of the parent comment has, but speaking as someone with plenty, the operational costs of serverless are at least equal to those of managing stateful infrastructure, with much less control when things go wrong. Lambda was a major step up in long-term predictability compared to, for example, App Engine, where there have been plenty of instances of overnight unannounced changes, or changes announced with incredibly short notice periods, requiring developer time rather than ops time to bring an application back to service.

On the ops side even with a platform like Lambda, training an operations team to take over maintenance of a nested spaghetti of random interlinked services and bits of YAML trapped in random parts of the cloud is a total nightmare. The amount of documentation required and even the simple overhead of enumerating every dependency is a long term management burden in its own right. "The app is down" -> escalate to the developers every single time.

Compare that to "the app is down", "this app has basically no ops documentation", "try rebooting the instances", "ah wonderful, it came back"

I'm pro-cloud in many ways and even pro-serverless for certain problems, but let's not close our eyes and pretend dumping everything into these services is anything like a universal win.


This. 100 times this. Also, in several places most of the service downtimes are due to, you know what? Application bugs, not infrastructure outages. Sure, those happen as well, and being on a good cloud provider mitigates a lot of them (but not all of them!), but if you increase the application design complexity you will also increase those downtimes. Yeah sure, there are tens of really good engineering departments where everything is properly CI/CD'd and automated, and they can scale to thousands of services without skipping a beat... but that's not the reality for thousands of other smaller/less talented shops. So, "moving to serverless" will not just automagically fix all of your problems.

Also - and I'm an infra guy so I'm probably biased - I don't really get all this developer anxiety to outsource infra needs. Yeah, if you are 2 devs working on your startup it makes sense, but when you scale up the team/company, even with serverless, you WILL need to dedicate time to infra/operations rather than to strictly-business-related code. Having somebody dedicated to this is good for both.


You're getting rid of a whole lot of lower stack issues, and I say this as a part infra guy myself.

Yes, Ops will still have stuff to do, it will just be at another level.

It's inevitable, that's why we're not making our own memory cores anymore.


> try rebooting the instances

I haven't done anything with serverless, but surely the class of problems that would be fixed by an instance restart don't happen in the first place on serverless


It was intended more to evoke general ideas about ease of management than as a specific remediation; however, elsewhere in the thread there is an example of a diagramming tool split out across 37 individual AWS services/service instances. In a traditional design, this is conceivably something where all state and execution could easily fit in one VM, or perhaps one container with the state hoisted off to a managed service. In this case we could conceivably fix some problems with an app like that literally just by kicking the VM.


I don't think you're wrong, I just think you're not looking far enough ahead.

What we have now is very primitive compared to how app development might work in the future; serverless is laying the foundation for a completely different way of thinking about software development.


I think I understand your point.

It's more back to the mainframe model of software development. I did this back in the 90s and I never had to think about scaling. Granted these were just simple crud / back-office apps.

But I can see how it would work for most modern software.


The mainframe model is viable (and legitimate) again because you can buy 128 core machines. That’ll have no problem running at people’s businesses


>training an operations team to take over maintenance of a nested spaghetti of random interlinked services and bits of YAML trapped in random parts of the cloud is a total nightmare. The amount of documentation required and even the simple overhead of enumerating every dependency is a long term management burden in its own right.

Upon learning about it some time ago, this was exactly my conception of what a Lambda-like serverless architecture would yield.

And it would seem difficult, if not impossible, for any dev to maintain a mental map of the architecture.


We had a microservice craze a few years ago. What about that, did they all crash?


> Maintaining your own server is completely nuts

I've been in this area professionally for some time now, and I've never "maintained" servers in any reasonable sense. There are kernel people who maintain the kernel and there are Debian devs who maintain the operating system. The server may be mine (but more often than not, it isn't), but only in very specific circumstances do I ever concern myself with maintaining any part of this stack.

A vanilla Linux VM is a platform to build on. Just like AWS or anything else. It is the environment in which my software runs.

Thus far, something like Debian has been more stable and much less of a moving target than any proprietary platform has been, cloud or non-cloud. Should a client wish to minimize maintenance costs of the software over the coming decade, it is most cost effective to not depend on specialized, proprietary, platform components.

That may change in the future, but right now there is no indication that is the case.


That's exactly the argument for why you should go serverless. If all you do is keep a vanilla Linux distro running in a VM with occasional updates and some initial config-magic (webserver, certs, iptables, ssh etc.), why even bother? Serverless isn't going to be any different, other than it just runs. No need to make a cron for updates, no iptables, no certs or webserver stuff... just put your app on it and let it go. On the other hand, if you actually need to tinker with your OS, roll your own. But what you described is the prime example of serverless's target audience.


One reason serverless isn't the solution for most applications is that you're basically making yourself entirely dependent on the API of one specific cloud host. If Lambda decides to double its price, there's nothing you can do about it but pay. If you need to store your data in a specific place (for example, in Russia, because all Russian PII must be stored within Russian borders), then you're out of luck. And best of luck to you if you're about to land a big customer but they demand that your application run on their premises.

There's also the long-term guarantee; if you write an application that runs on Ubuntu Server or Windows Server now, you can bet your ass that it will still run, unchanged, for another 10 years. The only maintenance you need to do is to fix your own bugs and maybe help out with some database stuff. If you deploy a Lambda app now, you have nothing to guarantee compatibility for such a long time other than "Amazon probably won't change their API, I think".


You can also predict the costs, and shop around if your provider gets greedy. If Amazon changes its pricing structure, what are you supposed to do?


If you built everything in proprietary infrastructure, porting is a lot more work.

Using lambdas ties you to AWS, because as soon as you use a few step functions, or you have a few lambdas interacting, changing to Azure or GCP becomes a huge pile of dev work and QA.

Having everything "just run" on linux instances lets you be perfectly portable, and now you can actually shop around.


It's a lot easier to change providers if your service is, e.g., a docker image (or a set of them), than to move something relying on Amazon's API.


It's even easier if your application is just (an equivalent of) a binary, or a tarball.


Unless it relies on any system libraries, in which case you're in dependency hell when you upgrade or change the OS.


This is a solved problem, and dependency hell happens when you don't choose dependencies wisely when creating programs.


Because you can reproduce that environment locally. Often useful for fixing bugs.


Also useful for doing the development in the first place!


Kubernetes, microservices, distributed systems, SPA apps. Getting a development environment to reproduce bugs takes up so much of my time these days (a fair bit is because of our crappy system, but there's also a lot of inherent complexity in the system because of the architecture choices listed above). We get the promise of scaling, but most places making these choices don't actually need it.


The above comment was intended to answer exactly that: because it is much less maintenance in the long run.

Had you deployed an application to a proprietary cloud platform ten years ago, a handful of those services would have had their APIs changed or even been sunset by now.


Yeah, but those "very specific circumstances" can come up at inconvenient moments. Certbot didn't run, the disk got full, the CPU is hitting 100% for some reason, Debian needed to be upgraded. These are all things that need to be taken care of, sometimes right away, and just when your family needs you.

I agree that it almost never happens, and that's why I run Debian as well. However, if you run production then things happen.


Just like when something unexpectedly breaks because a component of the serverless environment has changed slightly, only then you have even less control when trying to debug the issue.


I don't do any server admin. My code runs in docker on pretty much any server I can get my hands on. Some of my code runs on a ThinkPad stashed behind my desk, on DigitalOcean, on my Macbook. I could deploy to a Raspberry Pi and it would run just the same. It takes 10 minutes to deploy an exact copy to a new environment.

None of that requires OS maintenance. My house plants require far more maintenance than my software. I sometimes forget where some things run because I haven't touched them in years.

Serverless code runs on the cloud you built it for. I don't want that. I don't want to invest years of my life becoming an Amazon developer, writing Amazon code for Amazon servers.

That's without delving into the extra work serverless requires. There isn't a dollar amount tied to my import statements. I don't need software to help me graph and make sense of my infrastructure. I can run my code on my computer and debug it, even offline.


You probably know this but from what I read above it might be worth mentioning.

Your hosts are still packed with a bunch of libraries and services (sshd for example) that should probably be updated with regularity.

I echo a lot of what you say here regarding run anywhere and not marrying some giant vendor.


Agreed.

On hosts I manage professionally, I update/upgrade weekly after reading the notes - it takes a few minutes, and I know I'm up to date and whether there is anything I should be wary of.

On a personal debian server, I have an update/dist-upgrade -y nightly on a cron job, and I reboot if I read on HN/slashdot/reddit/lwn about an important kernel fix; Never had an issue, and I suspect it's about as secure and trouble free as whatever is underlying lambda -- with the exception that every 3-4 years I have to do an OS upgrade.


> None of that requires OS maintenance. My house plants require far more maintenance than my software. I sometimes forget where some things run because I haven't touched them in years.

Then how do you know they are still secure and even working?

Yes, deploying servers is very easy; maintaining and securing them is the hard part. Sure, you can automate the updates and it will work with a good OS distribution for some years. But no system is perfect, exploits are everywhere, even in your own configuration. And then it becomes tricky to protect your data.


Everyone already knows the best practices: private networks, updated software, and being conservative about exposing ports to a public network.

Also, no need to be that scared of servers.


What if your ThinkPad is unplugged? What if its HDD dies? Why aren't you updating the OS? Did you patch the firmware for Spectre/Meltdown? Etc, etc


OP definitely isn’t running code that needs an SLA if he’s talking about running on personal devices or DO


I doubt AWS/GCP/Azure care that much about hobby code. Hobby code, that humongous market for development platforms and development tools :-)


The HDD died already. I opened it, moved the stuck head with my fingers and shoved it back in. I have good backups, and as someone else said, if it was important, it would not run on a recycled laptop behind my desk.

That hard drive event showed me how disposable the machine itself has become thanks to docker.


You missed the point though. Serverless offers are commercial. They have little in common with your home made hobby stuff.

How much would you even pay for hosting? $2 a month?


I didn't miss the point. You singled out my ThinkPad and I'm answering your questions.

My point was about having portable code on generic hardware, which in my opinion is a better bet than writing Amazon software for Amazon servers, and praying their prices don't change much.


Actually, you cannot run your code smoothly on a Raspberry Pi, because it has a different architecture (ARM) and needs a different set of dependency images (if you're lucky, your deps are available on ARM at all).


It's not nuts to run servers. If an application is operating at any scale such that there is a nonstop stream of requests, then it will be cheaper, faster, and more energy-efficient to run a hot server. This follows from thermodynamics. No matter how good the cloud vendor's serverless is, it's always going to be less efficient than a server, unless it doesn't do any setup and teardown (i.e. no longer serverless).

It is nuts to run one server. Then you're wasting money with a server/VM. That's what serverless is ideal for: stuff no one uses. That's a real niche. Who's going to use that? Not profitable companies.

Often I think for most cases where you reach for serverless, you should reconsider the choice of a client-server architecture. An AWS Lambda isn't a server anymore; it's not "listening" to anything. Why can't the "client" do whatever the Lambda/RPC is doing?

Maybe what you want is just a convenient way to upload code and have it "just work" without thinking about system administration. The types of problems where you don't care about the OS are once again a niche. You probably don't even need new software for these kinds of things. You can just use SaaS products like Wordpress, Shopify, etc.

Serverless won't be profitable because the people who need it don't make money.


> Serverless won't be profitable because the people who need it don't make money.

You seem to be implying that only applications with huge numbers of users can be profitable. This statement ignores a tremendous number of (typically B2B) applications that provide enormous value for their users but don't see a lot of traffic.

I have worked on applications that are at the core of profitable businesses yet they can go days, in some cases weeks without any usage. Serverless architecture will be a real benefit there once it matures.


I don't think that's necessarily true. Google Cloud functions give you 2 million invocations free a month - that's almost 1 per second. You can keep adding another 2 million for $0.40 at a time. It's not terrible.
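A quick back-of-the-envelope check of those figures, taking the quoted numbers at face value (a sketch, not official pricing):

    # Sanity check of the quoted free tier: 2M invocations/month vs. seconds in a month.
    free_invocations = 2_000_000
    seconds_per_month = 30 * 24 * 3600           # 2,592,000

    print(free_invocations / seconds_per_month)  # ~0.77 requests/second, i.e. "almost 1 per second"

    # Sustaining a full 1 request/second for a month, at the quoted $0.40 per extra 2M:
    extra = seconds_per_month - free_invocations # ~592,000 paid invocations
    print(0.40 * extra / 2_000_000)              # ~$0.12 for the month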


I agree with the suggestion about when to use a server, but I think making it out to be an obvious physical law goes a bit too far. Serverless runtimes can be massively multi-tenant, and in cases like Cloudflare have very little overhead per tenant, so they can share excess capacity for spikes, which you have to factor into your server. This gives them the ability to beat the server on thermodynamics. Maybe they will, maybe they won't, but I don't think that's the argument that matters.


We use serverless in a project and our monthly revenue is $6K. It is not true to assert that only people with no revenue are using serverless.


> Maintaining your own server is completely nuts. If that isn’t obvious now, it will be in another decade. It’s massively inefficient. Like running your own power plant to serve your factory, except you also have to worry about security and constant maintenance, along with all the moving parts that surround a server.

Except that it is not. The security and constant maintenance is needed, but it is worth it in many cases. And large companies cannot really offload all ownership of data and applications. Interest in servers actually went in the reverse direction due to the cloud effect.

I would say having your own server or application hosting capacity is very similar to producing your own solar power and storing it in batteries - is it simple? No. But the technology is improving, which makes it easier for people to adopt this paradigm.

When I see what is happening with WebAssembly/WASI in particular, I see a great future for self-hosting again. Software written in any programming language (as long as it targets WASM) is a lot easier to host than under existing models. Also, as I understand it, there is interoperability between software coming from different languages at the WebAssembly level.


> Software written in any programming language (as long as it targets WASM) is a lot easier to host than existing models.

For the past 20 years you have been able to target x86 gnu/linux and have it running without modification on a readily available server, either your own hardware or rented/public cloud. How does switching from one binary format to another (x86 to WASM) change anything (except maybe slowing down your code)? As I understand it, the main draw of WASM is running non-JS code in a web browser.


> As I understand it, the main draw of WASM is running non-JS code in a web browser.

Simplified deployment + automatic sandboxing?

AFAIK with x86 you can't just write a client app and have it automatically run on any computer that visits your website.


Are people reinventing java as if it never existed?


No. Only as if the user experience was seriously flawed.


> developers can produce generic code

There is nothing generic about the code that runs on serverless services. It’s the ultimate lock in.


I use Google Cloud Run to run my serverless code for exactly this reason. GCR is literally just a container that runs on demand (with scaling to 0). Literally the only GCR specific part is making sure the service listens on the PORT env. If I was so inclined, I could deploy the exact same container on any number of services, host it myself and/or run it on my laptop for development purposes. There's also Kubernetes Knative which is basically (afaik) self hosted GCR.
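To illustrate how thin that contract is, here is a minimal sketch (hypothetical, Python standard library only): the only Cloud Run-specific behaviour is honouring the PORT environment variable, so the same container runs on a laptop, a VM, or Knative.

    # Minimal portable HTTP service; the only Cloud Run-specific detail is PORT.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello from a plain container\n")

    if __name__ == "__main__":
        port = int(os.environ.get("PORT", "8080"))   # Cloud Run injects PORT
        HTTPServer(("", port), Handler).serve_forever()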


Cloud Run is excellent if I wrote my own application. My biggest issue is that most off-the-shelf open source software that ships a docker container often uses a complicated docker-compose file, so even if the pieces can be deployed, they might be waiting on each other's cold starts (which can be very long and expensive) and/or need more database-ish things than I want. So, obvious mistakes and unrealistic expectations aside, I have several nodejs and crystal apps on Cloud Run which are running well, and just the concept of "lambda for docker containers" is pretty awesome. Cold starts are pretty harsh ATM; hopefully they improve.


How is the spin-up time for such a container?


It varies massively based on the tech stack. I have seen both simple Spring Boot and Quarkus apps take in excess of 10 seconds to start up in JVM mode. However, Quarkus compiled to native binary with GraalVM starts consistently under a second (in the ~500ms range). This is still brutal compared to running it on a vanilla MacBook which usually takes less than 20ms.



In my experience pretty good, even for a Django app. I believe the container sticks around for a while and is throttled to 0 CPU.


I don't agree, serverless code in itself tends to be portable. It's the surrounding services that lock you in.


That depends a lot on the use case. The ephemeral nature of serverless environments generally requires you to use proprietary solutions from the cloud provider for things like DB access and such. So you end up using DynamoDB instead of Postgres (as an example). You CAN make portable serverless code, but it generally requires a fair amount of work to do so.


Most actual FaaS code is quite portable; the configuration is what can’t move easily. And things like OpenFaaS and kNative make self-hosted runtimes completely flexible- it’s just a short-lived container.


At the moment.

Is there anything stopping an organisation from defining some standard types of serverless environments?

Is there anything stopping someone from turning that standard into implementations to help cloud providers offer it, or even be a fallback option that could be deployed on any generic cloud infrastructure?

I think those are the way forward from here.


The point of serverless for vendors is lock-in. Everything else about it is an annoyance, from their side (having to manage lifecycle, controlling load in shared engines, measuring resource usage...). But it locks people in and can be slapped with basically-arbitrary prices. The incentives to set standards are exactly zero, because once all providers support them and easy portability is achieved, it becomes a race to the bottom on price. Unless vendors can come up with some extra value-added service on top and choose to commoditize away the serverless layer, there won’t be any standard.


> Is there anything stopping an organisation from defining some standard types of serverless environments?

Yes. Basic economics. There is nothing _technical_ stopping 'an organisation' from making a federated twitter or facebook. But there are (evidently) insurmountable non-technical reasons: It hasn't happened / there have been attempts which have all effectively failed (in the sense that they have made no significant dent in these services' user numbers).

Why would e.g. Amazon (AWS) attempt to form a consortium or otherwise work together or follow a standard, relegating their offerings to the ultimate in elasticity? Economically speaking, selling grain is a bad business: If someone else sells it for 1ct less per kilo then the vast majority of your customers will go buy from someone else, there's no product differentiation.

Serverless lockin (such as GAE or AWS Lambda) is the opposite. No matter how expensive you make the service, your users will stay for quite a while. But make a universal standard and you fly in one fell swoop to the other end of the spectrum. If I have a serverless deployment and the warts of serverless are fixed (which would, presumably, involve the ability to go to my source repo, run a single command, give it some credentials, and my service is now live after some compilation and uploading occurs) - then if someone else offers it 1ct cheaper tomorrow I'll probably just switch for the month. Why not?

This cycle can be broken; but you're going to have to paint me a picture on how this happens. Government intervention? A social movement amongst CEOs (After the war, there was a lot of this going around)? A social movement amongst users so aggressive they demand it? Possible, but that would require that we all NOT go to serverless until the services offering it come up with a workable standard and make commitments to it.


I think it can happen simply through one serverless offering becoming very popular and other services (or open source projects) trying to reimplement the API of that offering. To some extent, this happened with Google App Engine. (AppScale)

I think cloud customers are savvy to the lock-in. That we're having this conversation is evidence of that. Perhaps AWS can achieve adoption of Lambda without needing to cater to customers who are cautious about getting locked in, but any challenger might find that it's much easier to gain customers if they also provide some form of an escape hatch.

As Jeff Bezos would say about retail, "your margin is my opportunity."


> Is there anything stopping an organisation from defining some standard types of serverless environments?

We have some already, e.g. RFC 3875 and its descendants: https://tools.ietf.org/html/rfc3875
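For anyone who hasn't seen it, the RFC 3875 model is not far from FaaS: a short-lived process per request, with request metadata passed in environment variables. A minimal sketch (hypothetical, Python):

    #!/usr/bin/env python3
    # RFC 3875-style CGI handler: one short-lived process per request.
    import os
    import sys

    def main():
        method = os.environ.get("REQUEST_METHOD", "GET")
        query = os.environ.get("QUERY_STRING", "")
        # A CGI response is headers, a blank line, then the body, on stdout.
        sys.stdout.write("Content-Type: text/plain\r\n\r\n")
        sys.stdout.write("method=%s query=%s\n" % (method, query))

    if __name__ == "__main__":
        main()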


That standard could be something like CloudABI [1] which already exists, but is too limited (no networking) to replace generic serverless workloads.

[1] https://lwn.net/Articles/674770/


I disagree. Every AWS Lambda function I've ever written can be run as a regular node/python process. The lambda-specific part is minuscule. If I wanted to run these on Azure or Google, only the most inconsequential parts of the function would need to be changed.
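For example (a hypothetical Python sketch, not anyone's production function): the Lambda-specific surface is a single handler signature wrapped around otherwise ordinary code, which is why the same code also runs as a plain process.

    import json

    def do_work(name):
        # Ordinary application logic, no AWS dependencies.
        return {"greeting": "hello, " + name}

    def handler(event, context):
        # Thin Lambda adapter around the portable core.
        name = (event or {}).get("name", "world")
        return {"statusCode": 200, "body": json.dumps(do_work(name))}

    if __name__ == "__main__":
        # Same logic as a regular process, no Lambda runtime required.
        print(do_work("local test"))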


In my experience, having started and abandoned side projects in both aws lambda and google app engine, half your project becomes:

* Well, obviously we use a hosted database

* And obviously, AWS provides our logging and all our analytics.

* Obviously when people call our lambda functions, they do so either through an AWS-specific API, or one constrained to a very limited set of forms.

* Of course, we can't blindly let everything access everything, so naturally we have IAM roles and permissions for every lambda function.

* Well, the cloud provider will look after secrets and things like that for us, no need for us to worry about database passwords.

* Naturally, with all these functions and IAM roles to look after, we need tagging for billing. We should define it all with CloudFormation scripting.

* Well, the nosql database they provide comes with their specific library. And as it shards things like this, and doesn't let you index things like that, you've got to structure your data this specific way if you want to avoid performance problems.

* You don't want your function to take 200ms+ to respond, your users will notice how slow it is. So no installing things with apt-get or pip for you, let me get you a guide on how to repackage those into the vendor-specific bundle format.

* You want to test your functions locally, with access to an interactive debugger? You're living in the past, modern developers deploy to a beta environment and debug exclusively with print statements.

* And so on.

In this case, a lot of the 'complexity' one hoped to eliminate has just been moved into XML files and weird console GUIs.


This, but it was told by serverless experts, on stage, in front of hundreds of people. If that was their sales pitch, their reality was likely even less impressive.


This sounds a bit like the posh workshop I went to on WAP (Wireless Application Protocol) years ago when I worked for BT.

It was a complete omnishambles - to the point that I avoided the fancy lunch and went to the pub for a ploughman's lunch, in case I suddenly blurted out "this is all S*&T" and caused a political row with the mobile side of the company I worked for.


I've never tried anything marketed as serverless other than Google App Engine. If you wanted any performance, you had to follow the guidelines really closely. Which could be legit if it's worth the effort, but I think it isn't. I think people under-estimate the effort, and the code will require many more not-so-nice optimizations than expected. That includes very verbose logging. It's sold as carefree and elegant, but it only works when using patterns that nobody enjoys. I really liked the log browser and dashboard though ;) It's like a stripped down version of New Relic and Elastic Search combined.


Most of your points are not relevant to my original statement about the code being generic - they are more about the architectural decisions. If I want to use DynamoDB I can do so in EC2 or Lambda; serverless doesn't dictate that. You also seem to believe one chooses Lambda for complexity reduction, and that's not really the only reason. I can very easily port a node/express API backend that connects to RDS to any other cloud provider. What about serverless makes you think that's not the case?


Then your service becomes just a tiny stub of vestigial logic buried under layers and layers of YAML, vendor libraries and locked assumptions.


Or even worse, giant yaml config files


> Every AWS Lambda function I've ever written can be ran as a regular node/python process. The lambda-specific part is miniscule.

Of the actual function code, sure.

Of course, if you aren't manually configuring everything and are doing IaC, that isn't generic. And if you are supporting the Lambda with any other AWS serverless services, the code interfacing with them is pretty AWS specific.


I was only talking about function code. It should be painfully obvious to anyone that if you opt to leverage other AWS services that those are things you would ultimately have to replace but those decisions have nothing to do with serverless.


> but those decisions have nothing to do with serverless.

Sure they do.

Because if you need a DB for persistence you are either setting up a server for it (and therefore not serverless, even if part of your system uses a Lambda) or consuming a serverless DB service. And so on for other parts of the stack. On AWS, for DB, that might be Dynamo (lock-in heavy) or, say, Aurora Serverless (no real lock-in if you don't use the RDS Data API but instead use the normal MySQL or Postgres API, but that's higher friction—more involved VPC setup—to use from Lambda than RDS Data API is, so the path of least resistance leads to lockin.)

Lambda or other FaaS is often part of a serverless solution, but is rarely a serverless solution by itself.
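As a rough sketch of that path-of-least-resistance trade-off (hypothetical names and ARNs; assumes psycopg2 and boto3 are available): the portable route speaks the plain Postgres protocol, while the low-friction route couples the function to the AWS-specific RDS Data API.

    # Portable route: plain Postgres protocol (works against Aurora, RDS,
    # or self-hosted Postgres, but needs VPC access from Lambda).
    import psycopg2

    def query_portable(dsn):
        with psycopg2.connect(dsn) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT id, name FROM users LIMIT 10")
                return cur.fetchall()

    # Low-friction route: the RDS Data API via boto3 (no VPC wiring,
    # but the call shape is AWS-specific).
    import boto3

    def query_data_api(cluster_arn, secret_arn):
        client = boto3.client("rds-data")
        resp = client.execute_statement(
            resourceArn=cluster_arn,
            secretArn=secret_arn,
            database="app",
            sql="SELECT id, name FROM users LIMIT 10",
        )
        return resp["records"]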


Using a stack like the Serverless Framework, the amount of lock-in is negligible.


What are you going to do when the container solution / node.js instance / insert x component here crashes? Wait for support to do something about it when you can fix it instantly? Or when you want to deploy a gRPC / crypto daemon to communicate with your back-end?

As an experienced back-end developer and linux user, I would pull my hair out if I were completely helpless to fix an issue or implement some side thing that requires shell access. I don't want to wait for some guy in the Philippines who will be online 12 hours later to come try to fix it.


Well, when will it be mature and what will it take to get there?

My first experience with serverless architecture was back in 2007 or so when trying to port Google News to App Engine. That was a thoroughly painful experience, and things haven't exactly gotten much easier since. If you go back in time a decade, Google's strategy for selling compute capacity was App Engine. Amazon went the EC2 route. Reality suggests AWS made the better choice.

I can understand the superficial notion that having idling virtual machines is inefficient (because it is). But this reminds me a bit of tricks we did to increase disk throughput in systems with lots of SCSI disks back in the day. Our observation was that if we could keep the operation queues on every controller full as much of the time as possible, we'd get performance gains when controllers were given some leeway to re-order operations, resulting in slightly better throughput. Overall you got higher throughput, and thus higher efficiency, but from the perspective of each process in the system, the response was sluggish and unpredictable. Meaning that if you were to try to do any online transactions, it would perform very poorly.

For a solution to be desirable it has to be matched with a problem that needs solving.

As the article points out, there are some scenarios where serverless architectures might be a good design paradigm. But extrapolating this to the assumption that this paradigm is a universal solution requires not only a leap of faith, but it also requires us to ignore observed reality.

So you owe me some homework. Tell me what needs to happen for serverless architectures to reach "maturity".


The midway point between "maintain your own servers" and "wholly in the cloud" is "configuration as code", using chef and terraform, or similar tools.

You don't patch your OS, or your apps, you define their versions and configuration in code and it gets built for you. And typically snapshotted at that point and made into a restartable instance image that can be simply thrown away if it's misbehaving, and rerun from the known good image.


Sometimes I wonder if this is propaganda by cloud/serverless providers to get everyone to jump on it and get locked in. The serverless black box kind of sucks, apart from “auto scaling” stuff. Crap performance too.


Auto-scaling is usually a myth anyway. You have to understand your system deeply and where all the bottlenecks are to really scale. If you have a part that's a big black box, that's going to get in the way of that.


I've never really understood this argument for serverless. Everything you do in AWS is through an API. I've never quite understood how replacing one set of API calls to provision an EC2 instance (or ECS cluster) is so much more complicated than another set of API calls to create a serverless stack. If anything, my experience has been the complete opposite. Provisioning a serverless stack is much more complicated and opaque.


your "serverless" framework becomes your operatings system. Just a bad one.


> Serverless models don’t require users to maintain their own operating systems, or even to build applications that are compatible with particular OSs. Instead, developers can produce generic code, and then upload it to the serverless framework, and watch it run.

Instead, developers can build applications that are compatible with particular serverless frameworks.


>The future of software development is going to be defined by cloud providers. They’re going to define the language ecosystem, the canonical architectures for apps etc...

That's... quite depressing to consider, actually. I long for a return to the internet of yore, when native apps were still king and not everything was as-a-service.


I agree. But you can still do native apps today, if you’re willing to jump through all the hoops that OS vendors set in your path and maybe renounce a couple of platforms (like chromebooks). Unfortunately, now that basically OS vendors are also effectively cloud providers, their incentives are set to increase those hoops (“for secuyriteh”), nudging more people towards “easier” cloud deployments.


It's massively inefficient, but it's also still massively safer in comparison, as regards sovereignty.

Like, today, given the political uncertainty in the USA, any large company would be nuts to bet on hosting their critical services on US-dependant infrastructure without having a huge plan B already in the works.


Is there a serverless provider with a "self hosted node" option that could be used as a fallback? That pretty much defeats the purpose of serverless, but at least you could hang on while figuring out how to transition to a new solution if the provider fails.

That has always been my sticking point, with AWS in particular. I don't trust AWS to exist forever, and it's definitely not without its own ongoing maintenance issues. Locking my entire business into their ecosystem seems risky at best.

Something I see really often in startups is huge dependencies in the form of SaaS. It should be no secret to those in tech that many of these businesses will not be around in 3 years. Even the likelihood of their service staying the same for 3 years is pretty low. I have been bitten by enough deprecated services, APIs and Incredible Journeys that I am wary.


In my experience - maybe it just isn't there yet - serverless has many more points of failure than a conventional infrastructure. I am sure there is a lot of software that really needs the scale the cloud can provide. The funny thing is that we tend to use it for tiny apps, some IoT voice interfaces or B2B tools that we don't want in our corporate network.

I doubt cloud providers can dictate environments, other providers would quickly fill the gap to meet developer preferences. I also think that more developers care about lock in these days.


> The future of software development is going to be defined by cloud providers

That's probably true of web development, but "software development" writ large is much more than just cloud providers and webapps. "Software" encompasses everything from embedded microcontrollers to big iron mainframes that drive (yes, even today, even in 2030) much of the world's energy, transportation, financial and governmental infrastructure.


I think you've made a fair point, which I did have in mind but didn't write down - that there is a dichotomy between embedded and cloud systems, and the definition of software development will be to a lesser extent defined by the embedded side. Apple, for example, will have clout.

But long-term I think the cloud providers will assimilate so much power that the embedded side will follow its diktats.

Big iron mainframes have longevity, but they will absolutely dwindle to near-complete extinction - I've worked on those systems, I understand their strengths and the legacy issues, and there's no way that cloud isn't going to gobble up that market; it's just going to take a long time (as you say, beyond 2030).


On the other hand, perhaps endeavors akin to NixOS and Guix will make maintaining your own operating system trivial.


Serverless is already here, it's just unevenly distributed. Instead it's called (managed) Kubernetes and yeah, you need a devops team and there's still a bunch of overhead but like you say - the writing's on the wall.


>> Like running your own power plant to serve your factory

With newer power technologies becoming more affordable and effective, solar, wind, & storage are increasingly being used to power factories and other businesses.

It's all about control of your product and operations. If it is economically feasible, it's always better to control your own stack all the way down.

Does serverless actually deliver better control over your development, portability, reliability, security, etc. for your application & situation, or not?

This sounds a bit like the "When will they turn off the last mainframe?" arguments a while back - I wouldn't expect servers to disappear either...


Anyone moving to Hitrust or SOC2 loves serverless for that exact reason. When asked how I maintain my infrastructure, I point to RDS, API gateway, and Lambda. This leaves my security mostly free to focus on application level security.


Those are the same arguments as the ones put forth in the article under "the promise of serverless computing". It remains to be seen if they can be realized without the downsides.


> When put like that, it’s amazing that we didn’t come up with this idea earlier.

It's called time-sharing and it existed in the 1960's. [1]

[1] https://en.wikipedia.org/wiki/Time-sharing

The difference between time-sharing and serverless is that the former solved the issue of expensive personal computing, until cheap personal computers took over that market. The latter solves perceived expensive computing on the "server" side.

But what does serverless solve exactly? It doesn't solve a technical problem; rather, it addresses concerns on the business side. Serverless solves a cost problem.

First, computing needs aren't linear, they fluctuate. And so, there's a problem of under- and over-utilization vs availability of resources. Serverless approaches computing power like tapwater: you're essentially paying for the CPU time you end up using.

Second, elasticity. Instead of having staff struggle - losing time - with the fine intricacies of autobalancers, sharding and whatnot, you outsource that entirely to a cloud provider. Just like a tap, if you need more power, you just turn the tap open a bit more.

Finally, serverless services abstract any and all low level concepts away. Developers just throw functions in an abstraction. The actual processing is entirely black box. No need to worry about the inner details of the box.

Sounds like a good deal, right?

> Like running your own power plant to serve your factory, except you also have to worry about security and constant maintenance, along with all the moving parts that surround a server.

Well... no. Outsourcing all of that to a third party cloud computing vendor doesn't dismiss you from your responsibility. All it does is shift accountability to the cloud provider who agreed to take you on as their customer. Securing your factory still very much includes deploying a secure digital solution to manage your machinery and process lines.

Plenty of industries wouldn't even remotely consider outsourcing critical parts of their operations, and this would include digital infrastructure. And this is regardless of the maturity of serverless technology. Risk management is a vast field in that regard.

Then there's legal compliance. There are plenty of industry specific regulations that simply don't even allow data to be processed by third party cloud services unless stringent conditions are adhered to. Medicine, banking and insurance come to mind.

Finally, when it comes to business critical processes, businesses aren't interested in upgrading to the latest technology for the sake of it being cutting edge. They want a solution that solves their problem and keeps solving that problem for many years to come. Without having to re-invest year after year in upgrades, migrations and changes because API's and services keep shifting.

Does that mean that there isn't a market for serverless computing? Of course there is. Serverless computing is a JIT solution. It's an excellent solution for businesses in a particular stage of their growth. And it closes the gap for plenty of fields where there really is a good match. I just feel that "maintaining your own server is completely nuts" is a bit overconfident here.


We had that utterly compelling framework in the early 2000s. Write generic code, upload to any provider you want, watch it run - that's exactly how shared-hosting PHP worked in ye olden days and to this day no one has made a developer experience as nice and It Just Works as that.


This rests on the assumption that there isn't a balkanization of the serverless frameworks between providers. History doesn't bear this out. In 2020 we still need tools like BrowserStack, even after decades of web developers complaining about the fragmented ecosystem.

Instead of maintaining software compatible with the right operating system, you're maintaining software compatible with the right flavor of serverless by Cloud Provider. Now we're back to square one on at least one front.

On the control aspect, the bias against giving up control is not an unwarranted one. Maintaining control of critical infrastructure is extremely important, and in fact outsourcing your critical infrastructure is an existential risk, and not just in an academic sense. When you give up control, you give up the ability to do anything when your infrastructure is not hostile, merely incompetent. In these cases it reduces the quality of your product for your customers.

I won't even go into the anti-competitive tactics Amazon themselves get into that make them not a good choice for your infrastructure. Instead I'll draw upon a recent experience that illustrates why outsourcing infrastructure, even at a higher level, is a bad idea.

My girlfriend recently was taking her NLN exam remotely. They weren't allowed to use calculators of their own, they had to use a virtual calculator provided by the company administering the test. Like most of these companies they are doing remote proctoring of the exams. During her exam this virtual calculator flat out wasn't available. The proctor told her to simply click through the exam and that once she submitted it she'd be able to call into customer service to get the exam re-scheduled due to the technical difficulties.

Well, that wasn't the case. After doing some deep digging for her, here is what I found. The testmaker, NLN, contracted out the test administration to a third party, Questionmark. Questionmark in turn contracted out to yet another third party, Examity, to handle the proctoring. Examity proctors don't have access to Questionmark's systems. Questionmark doesn't have access to NLN's systems, etc.

So how did we get this resolved? I had to track down the CEO of Questionmark, the CEO of Examity, and the head of testing for NLN. I had to reach out through LinkedIn InMail to get this on their radar. And then it was handled (quickly and efficiently, I might add!). However, frontline support for each of these companies could do nothing. They just had to offload blame onto the support staff of each other.

Another aspect of this is that each handoff between third parties creates a communication barrier. In this case the communication barrier seems to have kept Questionmark from configuring this specific test correctly. I wouldn't blame any of these companies for this specific failure mode, because it's just the nature of what happens when you offload your work to third parties.

When you say, "Oh, it's great, we don't have to worry about X because Y can do it," the implication is that you lose all of the power of a vertically integrated company by essentially spinning off tons of subsidiaries and creating communication overhead both before and when problems DO arise.

What is the future of software development going to look like when it reaches consumers and you have to say "Sorry, we can't fix that issue because Cloud Provider has to get back to me, and then in the background Cloud Provider has to say sorry we can't get back to you because we have to wait for our spunoff hardware division to get back to us?"

Maybe this type of business is fine for fun apps, but it's not fine for a lot of businesses. Even SLAs and disclaiming responsibility in your own contracts won't save your reputation. All it does is protect you financially!


> Maintaining your own server is completely nuts. If that isn’t obvious now, it will be in another decade. It’s massively inefficient. Like running your own power plant to serve your factory, except you also have to worry about security and constant maintenance, along with all the moving parts that surround a server.

TANSTAAFL. Let’s say serverless becomes commoditised the same way as electricity. What are the margins in that business? What are the margins of AWS et al. now?

There are very strong reasons to believe that serverless will offer convenience at a premium price, forever.


How about: someone else controls your infrastructure, and your project completely depends on their goodwill.

We need self-hosted serverless...


Self-hosted serverless has also been around for years. Look up OpenShift, OpenWhisk, etc.

TFA is clickbait


I mean as the main paradigm of hosting; right now it's beyond niche and everyone is happy to go to Google, Amazon and Microsoft. What does TFA mean?



