
Please don't do this.

I have seen this in practice. A simple CRUD app split across literally 100 "re-usable" repositories. The business logic is scattered all over the place and impossible to reason about. Especially with step functions, the logic now lives both in the code and at the cloud level. Each developer gets siloed into their little part and is unable to run the whole app locally.

The whole thing could easily fit in a single VM as a Rails or Django application.

The only ones that will be happy about this are AWS and the contractors, because it's guaranteed lock-in.




What you (and many others here) are objecting to is completely orthogonal to serverless as an architecture.

There's nothing about serverless that requires separate repositories or even microservices. I know because I have built a 50,000 line serverless application that is a single repository and deploys functionally as a monolith.

We also don't use step functions heavily, because, like you say, that's basically a cloud-specific DSL that you could write in a real programming language with only slightly worse visibility.

Serverless is, plain and simple, about passing immutable "partial computation" state through events and keeping all mutable long term state in a database.
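
To make that concrete, here's roughly what one handler in that style looks like. This is only a sketch, assuming EventBridge and DynamoDB; the bus, table and field names are all made up for illustration.

    # Stateless handler: takes an immutable event in, emits the next immutable
    # event, and keeps the only mutable long-term state in the database.
    # (Illustrative names; not from any particular codebase.)
    import json
    import boto3

    events = boto3.client("events")                      # EventBridge
    table = boto3.resource("dynamodb").Table("orders")   # hypothetical table

    def handle_order_priced(event, context):
        detail = event["detail"]          # the "partial computation" state so far
        total = sum(i["price_cents"] for i in detail["items"])
        priced = {**detail, "total_cents": total}

        # Mutable, long-term state goes to the database, nowhere else.
        table.put_item(Item={"pk": priced["order_id"], "status": "PRICED", **priced})

        # Pass the enriched, immutable state along to whatever handles the next step.
        events.put_events(Entries=[{
            "Source": "orders",
            "DetailType": "OrderPriced",
            "Detail": json.dumps(priced),
        }])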


The whole "serverless" fad is about lock-in.

I also really wish they hadn't squatted that term. It should refer to decentralized P2P systems, which are truly serverless.


I build off of serverless. Not very locked in - I wrote my service to run locally without any AWS services at one point. The main 'lock in' has nothing to do with serverless - it's DynamoDB. DynamoDB is really good so I'm reluctant to port to other clouds without something that has a very similar model. If having a really good service is "lock-in", sure, we're locked in.
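
For what it's worth, the local mode wasn't anything clever: the storage sits behind a small interface and we swap implementations. A rough sketch (the class and method names here are just illustrative, not our actual code):

    # Storage behind a tiny interface so the service can run with no AWS at all.
    # Names (KeyValueStore, InMemoryStore, DynamoStore, env vars) are illustrative.
    import os
    from typing import Optional, Protocol
    import boto3

    class KeyValueStore(Protocol):
        def get(self, key: str) -> Optional[dict]: ...
        def put(self, key: str, item: dict) -> None: ...

    class InMemoryStore:
        """Local runs and tests; no AWS involved."""
        def __init__(self) -> None:
            self._items: dict[str, dict] = {}
        def get(self, key: str) -> Optional[dict]:
            return self._items.get(key)
        def put(self, key: str, item: dict) -> None:
            self._items[key] = item

    class DynamoStore:
        """Production; the only genuinely AWS-specific piece."""
        def __init__(self, table_name: str) -> None:
            self._table = boto3.resource("dynamodb").Table(table_name)
        def get(self, key: str) -> Optional[dict]:
            return self._table.get_item(Key={"pk": key}).get("Item")
        def put(self, key: str, item: dict) -> None:
            self._table.put_item(Item={"pk": key, **item})

    def make_store() -> KeyValueStore:
        return InMemoryStore() if os.environ.get("RUN_LOCAL") else DynamoStore(os.environ["TABLE_NAME"])

The hard part of porting isn't the adapter, it's finding another database with DynamoDB's model and operational properties.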


> The whole "serverless" fad is about lock-in.

I don't believe that's true at all.

Serverless stuff in general, and function-as-a-service solutions such as the example described in this discussion in particular, is in my opinion about placing the needs of the service provider before the needs of the customer.

More specifically, it's about enabling the service provider to waste less computational resources while providing the exact same service.

For example, how many VM instances would you need to put up an API Gateway distributing a couple of HTTP requests to at least one instance of an HTTP server whose only responsibility is to trigger a workflow or update a database? How many instances would you need just to keep a database running that barely has 2 or 3 tables? How many instances would you need to launch to run a batch job that does nothing more than sniff a bunch of files?

Without even considering availability, that's about 3 to 4 instances. All idling, and mostly running support stuff that needs to be there just so your service can handle a request.

This might be your go-to solution for this sort of service, but in the eyes of a service provider that is sinfully wasteful. I mean, half a dozen instances with a utilization rate that barely breaks 50%, just to do the same thing everybody else is already doing?

So, why not cut all that bullshit and simply tweak a shared API Gateway/message broker/background task/workflow automation/pubsub/database/data store service to do the stuff you need to do?
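
In code, the whole "trigger a workflow or update a database" service above boils down to something like this behind the shared API Gateway. Just a sketch: the state machine ARN, table name and payload fields are invented for illustration.

    # One function instead of 3-4 idling VMs: accept the request, write one item,
    # kick off a managed workflow. All resource names come from env vars and are
    # purely illustrative.
    import json
    import os
    import boto3

    sfn = boto3.client("stepfunctions")
    table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])

    def handler(event, context):
        body = json.loads(event["body"] or "{}")   # API Gateway proxy event

        # "Update a database": one item, no database server of your own to keep warm.
        table.put_item(Item={"pk": body["request_id"], "payload": json.dumps(body)})

        # "Trigger a workflow": hand the request to the shared workflow service.
        sfn.start_execution(
            stateMachineArn=os.environ["STATE_MACHINE_ARN"],
            input=json.dumps(body),
        )
        return {"statusCode": 202, "body": json.dumps({"accepted": True})}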

If you use the communal service and let the service provider manage it with its dedicated staff, the company doesn't waste half of its computational resources idling or just duplicating the service everyone else is already running.

That's less hardware to power, less hardware to provision, less hardware to maintain, less spiky hardware utilization rates... Less work and less costs.

If you want to discuss lock-in then focus on IAM. Everything else is a way to help the service provider better utilize their current capabilities.


Even without serverless you are pretty locked in. Things like Terraform help but the level of cloud integration required for a complex system is pretty overwhelming.


If it's "required", you need to think more.

We use GCP and GKE for some things, but we could leave pretty easily. The hardest parts would be DIY Postgres HA and DIY K8s. CockroachDB is almost ready to get rid of the former headache; haven't looked into the latter yet. Of course, we really only use K8s for load balancing and HA, and there are other options there like Consul and Nomad.


I’m glad you are able to move your use case easily!


Try Patroni for Postgres HA, seems really trivial.


Thank you for this comment. I've experienced this exact same nightmare at my current job: a serverless hell.


Can you expand on this with specifics please? Our serverless code is all in one repo and we’re pretty happy with it.


A bit of a translation barrier, so I'm not positive, but the company advocating this appears to be an agency that builds things for multiple clients. In that case it makes more sense to split out capabilities like that, so they can be pulled into other projects like Lego blocks. For everyone else I'd advocate starting with a monolith until you need to split things.


Makes sense for whom? Maybe for them, certainly not for their clients.


> Makes sense for whom? Maybe for them, certainly not for their clients.

It makes sense for the system, and thus for the client.

Just because there are a lot of boxes in the diagram does not mean the system is complex. A lambda is just a stateless message handler, and half the diagram is lambdas. The rest is just a few persistence services and then stuff related to external interfaces, like DNS services, API gateways, auth and pubsub.

Any desktop app is way way more complex than this. A dialog box has easily twice the number of handlers.


If the firm is fast and produces reliable platforms at the cost the client is looking for then their clients couldn’t care less whether it’s Django (sigh) or Lambdas.


Complexity in any form can be lock-in to contractors, eng. teams, vendors, etc.

Make no mistake, serverless cloud services are the greatest building blocks of all time. That doesn't mean you should use all of them, nor use them in the most granular manner.

As always, the architect should make prudent decisions.



