Kind of surprised the article didn't mention lack of reasonable development environment.
At least on AWS, the "SAM" experience has been probably the worst development experience I've ever had in ~20 years of web development.
It's so slow (iteration speed) and you need to jump through a billion hoops of complexity all over the place. Even dealing with something as simple as loading environment variables for both local and "real" function invokes required way too much effort.
Note: I'm not working with this tech by choice. It's for a bit of client work. I think their use case for Serverless makes sense (calling something very infrequently that glues together a few AWS resources).
> It's so slow (iteration speed) and you need to jump through a billion hoops of complexity all over the place. Even dealing with something as simple as loading environment variables for both local and "real" function invokes required way too much effort.
Honestly, it reminds me of PHP development years ago: running it locally sucked, so you needed to upload it to the server to test your work. It. Sucked.
It was actually pretty good if you had an IDE with sftp/scp support because you could save a file, refresh your browser, and have immediate new results.
Yeah this wasn't too bad and it was what I used to do back in the day with Notepad++. By the time you hit save in your editor, your changes were ready to be reloaded in the browser.
With SAM we're talking ~6-7 seconds with an SSD to build + LOCALLY invoke a new copy of a fairly simple function where you're changing 1 line of code, and you need to do this every time you change your code.
That's even with creating a custom Makefile command to roll up the build + invoke into 1 human action. The wait time is purely waiting for SAM to do what it needs to do.
With a more traditional non-Serverless setup, with or without Docker (using volumes), the turnaround time is effectively instant. You can make your code change and by the time you reload your browser it's all good to go. This is speaking from a Python, Ruby, Elixir and Node POV.
The workaround my team uses is to make two entries to start the webapp: one for SAM, one for local. For fast iteration we just `npm start`, and when we're ready to do more elaborate testing we run with SAM. This works pretty well so far.
I'm not sure why that's PHP's fault? I never had problems running it locally... and to "get my code to the servers" was as easy as a git pull from the server which is probably the 2nd laziest way of accomplishing that.
Out of genuine interest... is there a modern solution to this problem with PHP/MySQL?
(I'm still doing the "upload to server to test" thing.... I've tried MAMP and Vagrant/VirtualBox for local dev but both of them seem horribly complex compared to what we can do with local dev with node.js/mongo and so on.)
"docker-compose up" and your OS, codebase, dependecies and data is up and running locally in the exposed local port of your preference. You can even split parts in different layers to mimic a services/cross-region logic.
Of course this won't fix the fact that you have a lambda behind api gateway that does some heic->jpg conversion and can't be hit outside the DMZ, or some esoteric SQS queue that you can't mimmic locally - but it should get you almost there.
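For the PHP/MySQL question above, a minimal docker-compose.yml along those lines might look something like this (image tags, ports and credentials are just placeholders, adjust to taste):

```yaml
# docker-compose.yml -- minimal local PHP + MySQL stack (all values are placeholders)
version: "3.8"
services:
  web:
    image: php:8.2-apache          # serves whatever you mount into the web root
    ports:
      - "8080:80"                  # app available at http://localhost:8080
    volumes:
      - ./src:/var/www/html        # edit locally, refresh the browser
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: app
    ports:
      - "3306:3306"
```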
This doesn't solve the OP's problem though: if Vagrant is complex, so is a Docker image. The problem is the user doesn't know how to manage/configure the underlying complexity of the OS and needed services, which would still be a problem when using Docker. Unless you find that perfect Docker image with every dep you need... but that would also be true with Vagrant.
FWIW I haven't hit any scenario out of the basic services that localstack couldn't run locally. I even have it executing Terraform on localstack as if it was AWS (without IAM which is indeed a problem when I forget to create/update policies)!
Just run PHP and MySQL locally? Native PHP on Windows is horrible to set up, but with WSL/WSL2 you suddenly can get a normal Linux environment without much hassle.
sudo apt install nginx php mysql-server, point the www directory of nginx to something on your computer (/mnt/c/projects/xyz) and you've got a running setup. Or run Linux in general, that's what most people I've seen work on backends seem to do. You can run the same flavour of software that your deployment server runs so it'll save you time testing and comparing API changes or version incompatibilities as well.
I don't know any solution for macOS but you can probably get the necessary software via Brew if you're so inclined. Then run the built-in PHP web server (php -S) and you get the same effect.
What's horrible about it? I just download it, unzip to Program Files, add the folder to my %PATH% and that's about it. I didn't find myself in a situation where I would need an Apache or other webserver, the built-in one is good enough. It also makes using different versions easy, no need to deal with Apache and CGI/FPM. You just use other PHP executable.
I find it easier to handle multiple PHP versions on Windows than on Linux. As you say: just download the zip, unpack it somewhere, copy php.ini-development to php.ini, and you can do this for every minor PHP version.
Apache is almost as easy: download the zip, unpack it, and configure the Apache conf to use your PHP.
MySQL is somewhat more complicated because you need to run a setup script after unpacking the zip.
I used to complain about the same thing and even asked someone who was head of BD for Serverless at AWS what they recommended, and didn't get an answer to my satisfaction. After working with more and more serverless applications (despite the development pains, the business value was still justified) I realized that local development was difficult because I was coupling my code to the delivery. This is similar to the way you shouldn't couple your code to your database implementation. Instead, you can write a function that takes parameters from elsewhere and call your business logic there. It definitely adds a bit more work, but it alleviates quite a bit of pain that comes with Lambda local development.
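A minimal sketch of that split (module and function names here are just illustrative, not anyone's actual code): the handler only unpacks the event, and the business logic is a plain function you can call from tests or a local script.

```python
# business.py -- plain Python, no AWS/Lambda imports, runs anywhere
def process_order(order_id: str, amount: int) -> dict:
    # ...real business logic goes here...
    return {"order_id": order_id, "amount": amount, "status": "processed"}


# handler.py -- thin adapter: translate the Lambda event, then delegate
from business import process_order

def lambda_handler(event, context):
    return process_order(event["order_id"], event["amount"])
```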
Disclaimer: I work at AWS, however not for any service or marketing team. Opinions are my own.
> Instead, you can write a function that takes parameters from elsewhere and call your business logic there.
This is what I tried to do initially after experiencing the dev pain for only a few minutes.
But unfortunately this doesn't work very well in anything but the most trivial case because as soon as your lambda has a 3rd party package dependency you need to install that dependency somehow.
For example, let's say you have a Python lambda that does some stuff, writes the result to postgres and then sends a webhook out using the requests library.
That means your code needs access to a postgres database library and the requests library to send a webhook response.
Suddenly you need to pollute your dev environment with these dependencies to even run the thing outside of lambda and every dev needs to follow a 100 step README file to get these dependencies installed and now we're back to pre-Docker days.
Or you spin up your own Docker container with a volume mount and manage all of the complexity on your own. It seems criminal to create your own Dockerfile just to develop the business logic of a lambda where you only use that Dockerfile for development.
Then there's the whole problem of running your split-out business logic without it being triggered from a lambda. Do you just write boilerplate scripts that read the same JSON files, and set up command line parsing code in the same way as sam local invoke does to pass in params?
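For what it's worth, the boilerplate-script version doesn't have to be much; something like this (file and module names are hypothetical) covers the "read the same JSON event and pass it in" part:

```python
# run_local.py -- poor man's `sam local invoke`: load a saved event and call the handler
import json
import sys

from handler import lambda_handler  # the same handler module SAM points at

if __name__ == "__main__":
    event_file = sys.argv[1] if len(sys.argv) > 1 else "events/sample.json"
    with open(event_file) as f:
        event = json.load(f)
    print(lambda_handler(event, context=None))
```

Run it as `python run_local.py events/sns.json`, against the same event files you'd feed to `sam local invoke -e`.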
Then there's also the problem of wanting one of your non-Serverless services to invoke a lambda in development so you can actually test what happens when you call it in your main web app but instead of calling sam local invoke, you really want that service's code to be more like how it would run in production where it's triggered by an SNS publish message. Now you need to somehow figure out how to mock out SNS in development.
Unless I've misunderstood, every knock against serverless above has actually been a knock against the complexity of having tiny, decoupled cloud-native services and how difficult they can be to mock... to which the answer is often “don’t mock, start by using real services” and then, when that is less reliable or you need unit tests, mock the data you expect. In the case of SNS, mock a message with the correct SNS signature, or go one layer deeper, stub out SNS validation logic and just unit test the function assuming the response is valid or invalid? In the case of Postgres, you could use an ORM that supports SQLite for dependency-free development but at a compatibility cost... worst case you might need to have your local machine talk to AWS and host its own Let's Encrypt certificate and open a NAT port... but one can hope it doesn’t come to that...? Even so... that’s not exactly a knock against serverless itself, is it?
> In the case of SNS, mock a message with the correct SNS signature, or go one layer deeper, stub out SNS validation logic.
SAM already provides a way to mock out what SNS would send to your function so that the function can use the same code path in both cases. Basically mocking the signature. This is good to make sure your function is running the same code in both dev and prod and lets you trigger a function in development without needing SNS.
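For context, the SNS-shaped event SAM mocks has a well-known structure, so you can also hand-roll one in a unit test and skip the CLI entirely (handler name is assumed, and the event is trimmed to the fields most handlers read):

```python
# test_handler.py -- drive the handler with a hand-rolled SNS-shaped event
import json

from handler import lambda_handler  # assumed handler module

def test_handles_sns_message():
    event = {
        "Records": [
            {
                "EventSource": "aws:sns",
                "Sns": {"Message": json.dumps({"order_id": "abc123"})},
            }
        ]
    }
    assert lambda_handler(event, context=None) is not None
```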
But the problem is that locally invoking the function with the SAM CLI tool is the trigger mechanism where you pass in that mocked-out SNS event, and in reality that only works for running that function in complete isolation in development.
In practice, what you'd really likely want to do is call it from another local service so you can test how your web app works (the thing really calling your lambda at the end of the day). This involves calling SNS publish in your service's code base to trigger the lambda. That means really setting up an SNS topic and deploying your lambda to AWS or calling some API compatible mock of SNS because if you execute a different code path then you have no means to test the most important part of your code in dev.
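One way to keep that code path identical in dev, building on the LocalStack idea mentioned upthread (a sketch, with the env var name made up), is to make only the SNS endpoint configurable, so the web app always goes through a real `publish` call:

```python
# sns_publish.py -- same publish code path in dev and prod; only the endpoint differs
import json
import os

import boto3

def sns_client():
    # Dev: point boto3 at an API-compatible mock such as LocalStack
    # (e.g. SNS_ENDPOINT_URL=http://localhost:4566). Prod: leave it unset.
    endpoint = os.environ.get("SNS_ENDPOINT_URL")
    return boto3.client("sns", endpoint_url=endpoint) if endpoint else boto3.client("sns")

def emit_order_created(topic_arn: str, payload: dict) -> None:
    sns_client().publish(TopicArn=topic_arn, Message=json.dumps(payload))
```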
> In the case of Postgres, you could use an ORM that supports SQLite for dependency-free development but at a compatibility cost
The DB is mostly easy. You can throw it into a docker-compose.yml file and use the same version as you run on RDS with like 5 lines of yaml and little system requirements. Then use the same code in both dev and prod while changing the connection string with an environment variable.
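Concretely, the "same code, different connection string" part is just an environment variable read (psycopg2 used here for illustration; any driver works the same way):

```python
# db.py -- identical in dev and prod; only DATABASE_URL changes
import os

import psycopg2

def get_connection():
    # dev:  postgresql://app:app@localhost:5432/app  (the docker-compose service)
    # prod: the RDS connection string injected as a Lambda environment variable
    return psycopg2.connect(os.environ["DATABASE_URL"])
```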
> That’s not exactly a knock against serverless itself, is it?
It is for everything surrounding how lambdas are triggered and run. But yes, you'd run into the DB, S3, etc. issues with any tech choice.
So there’s an argument that the future deployment model is actually Kubernetes Operators, which means you could have test code that deploys and sets up AWS APIs... thus if your code responds to the trigger, it’s up to another bit of code to make sure the trigger is installed and works as expected against AWS APIs?
And yes, I think the problem here is the APIs you use in multiple places but can’t easily run yourself in a production-friendly way. Until AWS builds and supports Docker containers to run their APIs locally, I don’t see how this improves... end-to-end testing of AWS requires AWS? ;-)
> I realized that local development was difficult because I was coupling my code to the delivery.
Of interest, I've spent some free time crunching on CNCF survey data over the past few months. Some of the strongest correlations are between particular serverless offerings and particular delivery offerings. If you use Azure Functions then I know you are more likely to use Azure Devops than anything else. Same for Lambda + CodePipeline and Google Cloud Functions + Cloud Build.
I think his point was that you should be able to run and test the Lambda code independently of Lambda. After all the entry point is just a method with some parameters, you can replicate that locally.
Yes, this is a great way of doing things - I have no problems TDD'ing business logic hosted in Lambda, because the business logic is fully decoupled from the execution environment. SAM should be for high-fidelity E2E integration testing.
This principle works with front end development too. Crazy build times on large applications can be alleviated if your approach is to build logic in isolation, then do final testing and fitting in the destination.
It’s hard to do this when surrounding code doesn’t accommodate the approach, but it’s great way to design an application if you have the choice. I really love sandboxing features before moving them into the application. Everything from design to testing can be so much faster and fun without the distractions of the build system and the rest of the application.
I felt your pain immediately and decided to write my own mini-framework to accomplish this.
What I have now is a loosely coupled, serverless, frontend+backend monorepo that wraps AWS SAM and CloudFormation. At the end of the day it is just a handful of scripts and some foundational conventions.
I just (this morning!) started to put together notes and docs for myself on how I can generalize and open source this to make it available for others.
The stack is vue/python/s3/lambda/dynamodb/stripe, but the tooling I developed is generic enough to directly support any lambda runtime for any sub-namespace of your project, so it would support a react/rails application just as well.
As a systems developer, comments like yours make me amazed at the state of web development. From the outside looking in, it seems like 10% code and 90% monkeying around with tooling and frameworks and stacks.
I believe that changes the moment there's a solution that just makes sense and works well for most people: a gold standard that other solutions will try to build on and spice up, instead of reinventing.
A lot of these DX (developer experience) concerns are, imo, rooted in what the article describes as "Vendor Lock".
Sure, you can write a bunch of tools to work around the crufty, terrible development environment's shortcomings. But ultimately, you are just locking yourself further & further & further in to the hostile, hard to work with environment, bending yourself into the bizarre abnormal geometry the serverless environment has demanded of you.
To me, as a developer who values being able to understand & comprehend & try, I would prefer staying far far far away from any serverless environment that is vendor locked. I would be willing & interested to try serverless environments that give me, the developer, the traditional great & vast powers of running as root locally that I expect. Without a local dev environment, one both meets vendor lock-in & faces ongoing difficulties trying to understand what is happening, and with what performance profiles/costs. I'd rather not invest my creativity & effort in trying to eke more & more signals out of the vendor's black box. Especially if trouble is knocking, then I would very much like to be able to fall back on the amazing toolkits I know & love.
AWS's whole pitch has been cutting out server-huggers & engineers and relying on AWS, since day 1, often to wonderful effect, with far far better software programmability than our old crufty ways.
But Lambda gets to the point where there is no local parity: it's detached, no longer an easier, managed (remotely operated) parallel to what we know & do, but a system entirely unto itself, playing by different rules. One must trust the cloud-native experience it brings entirely, versus the historical past where the cloud offered native local parallels.
I never got the hang of CloudFormation. I suppose it is nice from a visual (drag and drop) point of view, but I couldn't use it in production and moved on to manage my architecture with Terraform.
It sounds like you're describing the Cloudformation template visualiser/editor in the AWS Console, which I have never heard of anyone using as the primary interface for their Cloudformation templates.
Personally for simple projects I've had pretty good experiences writing a Yaml-based template directly, and for more complex projects I use Troposphere to generate Cloudformation template Yaml in Python.
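For anyone who hasn't seen Troposphere, the usage is roughly this; a trivial sketch that emits a template containing a single S3 bucket:

```python
# stack.py -- generate CloudFormation YAML from Python with Troposphere
from troposphere import Template
from troposphere.s3 import Bucket

t = Template()
t.add_resource(Bucket("ArtifactsBucket"))

# Print the generated template; pipe it into whatever deploys your stacks.
print(t.to_yaml())
```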
This is a really funny thing, since for the last ~10 years I've been hearing how we're deliberately making IaC/CM tools "not a programming language, because reasons" (and thus have to do horrible hacks to support trivial things like loops and conditions), and now suddenly we're building libraries in programming languages that convert the code into a non-programming-language description, which is then interpreted by a program into several other intermediate representations and finally emits trivial API commands. I guess the next step will be to write a declarative language on top of CDK or Pulumi that compiles into Python, which will generate the CF/TF files.
I manage a handful of projects with Terraform and it works well in many situations. It has improved a lot recently but for a long time I really hated the syntax. I still do to some extent but have learned to cope with it most of the time.
If you are working on a project where all of your infrastructure will live on AWS I would definitely urge you to give it a second look. The amount of infrastructure I manage right now with a single .yaml file is really killer.
Yes, it (Python) was chosen because we could leverage existing internal code that was written in Python and it happens to be my strongest language.
If I could do it all over, I would still choose Python. That being said, I have been working professionally (building apps like this) for almost 14 years so my willingness to bite off a homebrew Python framework endeavor as I did here is a lot different than someone just getting into the field.
Django: avoid unless you have a highly compelling (read: $$$$) reason to learn and use this tool. I cannot think of one, honestly.
Flask: fantastic, but be conscientious about your project structure early on and try to keep business logic out of your handler functions (the ones that you decorate with @app...); a quick sketch of that is below.
Sophisticated or more sugary Node.js backends are not something I have ever explored, aside from the tried-n-true express.js. I tend to leverage Python for all of my backend tasks because I haven't found a compelling reason not to.
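The "keep business logic out of your handlers" advice for Flask boils down to routes that only parse the request and delegate; a minimal sketch (names are illustrative):

```python
# app.py -- thin Flask route; the real work lives in a plain, framework-free function
from flask import Flask, jsonify, request

from business import process_order  # plain function, no Flask imports

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    data = request.get_json()
    result = process_order(data["order_id"], data["amount"])
    return jsonify(result), 201
```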
Django is decent for POCs that need some level of security since you get authentication out of the box with no external database configuration necessary due to sqlite. Sometimes you have an endpoint that needs that due to resource usage, but the number of users is so low that setting up a complicated auth system isn’t worth it.
Minimalist frameworks are great for either very small (since they don’t need much of anything) or very large projects (since they will need a bunch of customization regardless).
In that regard, I think Django is kind of like the Wordpress of Python.
That is such a tough question to answer carte blanche.
All-in-all, Django is not bad software. I have a bad taste in my mouth though because as I learned and developed new approaches to solving problems in my career I feel like Django got in the way of that.
For instance, there are some really killer ways you can model certain problems in a database by using things like single table inheritance or polymorphism. These are sorta possible in Django's ORM, but you are usually going against the grain and bending it to do things it wasn't really designed to do. Some might look at me and go: ok dude, well don't do that! But there are plenty of times where it makes sense to deviate from convention.
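For what it's worth, the closest Django itself gets is multi-table inheritance, which works but leans on an implicit one-to-one join behind the scenes; a minimal sketch (the classic Place/Restaurant example from the Django docs):

```python
# models.py -- Django multi-table inheritance: Restaurant gets an implicit
# OneToOneField to Place, so querying Place won't surface Restaurant fields
# without an explicit downcast/join
from django.db import models

class Place(models.Model):
    name = models.CharField(max_length=100)

class Restaurant(Place):
    serves_pizza = models.BooleanField(default=False)
```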
That is just one example, but I feel like I hit those road blocks all the time with Django. The benefit of Django is it is pre-assembled and you can basically hit the ground running immediately. The alternative is to use a microframework like Flask which is very lightweight and requires you to make conscious choices about integrating your data layer and other components.
For some this is a real burden - because you are overwhelmed by choice as far as how you lay out your codebase as well as the specific libraries and tools you use.
After your 20th API or website backend you will start to have some strong preferences about how you want to build things, and that is why I tend to go for the compose-tiny-pieces approach versus the ready-to-run Django approach.
It's really a trade-off. If you are content with the Django ORM and everything else that is presented, it is not so bad. If you know better, you know better. Only time and experience will get you there.
That's great, cheers for that. It's helpful to know that your concerns are mainly to do with taking an opinionated vs non-opinionated approach - that's a framework for thinking about the choice between Django and (e.g.) Flask that many people (including myself) can hang their hat on.
On the flip side, not being able to use Django is one of the reasons against serverless for me. There's immense value in having a library for anything you might think of, installable and integratable in minutes.
You have to roll your own way too often in Flask et al, so much so that I don't see any reason to use Flask for anything other than ad-hoc servers with only a few endpoints.
Django gets you a lot if you have a traditional app with a traditional RDBMS and a traditional set of web servers. It’s too opinionated to easily map into AWS serverless.
Take a look at the [CDK](https://aws.amazon.com/cdk/) if you haven't already. It lets you define your infrastructure using TypeScript, which then compiles to CloudFormation. You can easily mix infrastructure and Lambda code in the same project if all you're doing is writing some NodeJS glue Lambdas which sounds like what you're looking for.
There are a couple of sharp edges still, but in general it just 'makes sense'. If you don't like TypeScript there are also bindings for Python and Java, among others, although TypeScript is really the preferred language.
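Since the Python bindings got a mention: a minimal CDK v2 stack with a single Lambda looks roughly like this in Python (stack name and asset path are placeholders):

```python
# app.py -- minimal AWS CDK v2 sketch: one stack containing one Lambda function
from aws_cdk import App, Stack, aws_lambda as _lambda
from constructs import Construct

class GlueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        _lambda.Function(
            self, "GlueFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # directory containing index.py
        )

app = App()
GlueStack(app, "GlueStack")
app.synth()  # emits the CloudFormation template into cdk.out/
```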
CDK made IaC accessible to me. I hated raw CloudFormation and never bothered with it for that reason. I had a crack at Terraform, but never got past the learning curve before my enthusiasm died.
Currently using some CDK in a production app and finally I found a way of doing IaC I actually enjoy.
You might really like Pulumi. I'm kind of on the opposite end, ops > swe, so tons of IaC, and I'm using Pulumi now as I become more SWE-focused: https://www.pulumi.com/ (I've no relation to them).
Basically the exact same approach as CDK. I really prefer this style over CloudFormation and Terraform. I think Pulumi emerging as another player in the space legitimizes the approach.
CDK is moving quite fast and not all parts are out of the experimental phase, so there are breaking changes shipped often. I think in a couple of years it will stabilize and mature and become a very productive way of working with infrastructure.
GCP has https://cloud.google.com/functions/docs/functions-framework but I will not use it. I have found the best solution is to abstract away the serverless interface and create a test harness that can test the business logic. This adds some extra complexity in the code, but iterations are fast and do not rely on the overly complex and bug prone "platforms" like SAM and Functions Framework.
This is precisely what I do when I write code destined to be an AWS Lambda Function. It really feels like the only sane way to do it. It also makes it easy to mock the incoming request/event for integration tests.
Developer experience for serverless is such a pain point, spot on. AWS SAM has tackled some of the IaC modeling problem (on top of CloudFormation which is a mature choice) and they've had a crack at the local iteration (invoke Lambda function or API Gateway locally with connectivity to cloud services).
It's a little incomplete, missing some of the AWS IAM automation that makes local development smooth, environment management for testing and promoting changes, and some sort of visualization to make architecture easier to design as a team.
I work for a platform company called Stackery which aims to provide an end-to-end workflow and DX for serverless & CloudFormation. Thanks for comments like these that help identify pain points that need attention.
Yeah, I took a look at using a serverless framework for a hobby project, and it was just a real pain to get started at all, let alone develop a whole application in.
I tried AWS, and then IBM's offering which is based on an open source (Apache OpenWhisk) project, thinking that it might be easier to work with, but that was also a pain.
I just lost interest as I was only checking it out. For something constantly marketed on the ease of not having to manage servers, it fell a long way short of "easy".
> Yeah, I took a look at using a serverless framework for a hobby project, and it was just a real pain to get started at all, let alone develop a whole application in.
Look into Firebase functions. Drop some JS in a folder, export them from an index.js file and you have yourself some endpoints.
The amount of work AWS has put in front of Lambdas confuses me. Firebase does it right. You can go from "never having written a REST endpoint" to "real code" in less than 20 minutes. New endpoints can be created as fast as you can export functions from an index.js file.
And if you need a dependency that has a sub-dependency with a sub-dependency that uses a native module, prepare for poorly defined -fun- hell getting it to work. A surprising number of standard JS libs do.
Being able to throw up a new REST endpoint in under 10 minutes with 0 config is really cool though.
And Firebase Functions are priced to work as daily drivers: they can front an entire application and not cost an insane amount of $, with per-ms pricing. Lambdas are a lot more complicated.
> Kind of surprised the article didn't mention lack of reasonable development environment.
I've been pretty happy with Cloudflare Workers.
You can easily define environments with variables via a toml file. The DX is great and iteration speed is very fast. When using `wrangler dev` your new version is ready in a second or two after saving.
I can report that Azure Function App development is at least pretty decent, as long as you have the paid Visual Studio IDE and Azure Function Tools (haven't tried the free version yet).
I tried AWS Lambdas a few years back and it felt way more primitive.
Azure Function App development experience is indeed pretty nice at least when using .NET Core. There are some issues, like loading secrets to the local dev environment from Key Vault has to be done manually and easy auth (App Service Authentication) does not work locally.
I've used Azure's serverless offering "Functions" quite a bit. The dev experience is pretty good, actually - it "just works" - start it and point your browser at the URL. And certainly no problems setting up env vars or anything basic like that.
My only nitpick, and only specifically relating to dotnet, is that config files and env vars differ between Functions and regular ASP.NET Core web apps. I think there is some work going on to fix that, but it's taking forever.
Couldn’t agree more, the dev experience was awful. You basically have to develop against public AWS services, my dev machine became a glorified terminal. They do seem to be iterating on the tooling quickly, but I wouldn’t use it again if I had a choice.
Edit: CloudFormation was also painful for me, the docs were sparse and there were very few examples that helped me out.
SAM templates are a subset of CloudFormation templates; that PDF could be three times as long and still not have the content I needed.
Yes there are examples, but there wasn’t one at the time that mapped to what I was trying to accomplish. Because, again, SAM templates are not one-for-one CloudFormation templates.
I found the community around SAM to be very limited. One of the many reasons I’ve moved to the Kubernetes ecosystem.
It definitely doesn’t have to be that way. I work on Firebase and I’ve spent most of the last year specifically working on our local development experience for our serverless backend products:
https://firebase.google.com/docs/emulator-suite
Still very actively working on this but our goal is that nobody ever has to test via deploy again.
Love Firebase, thanks for your work! The local emulator suite is such an important feature, and I'm keenly following your progress.
Slightly OT... perhaps cheeky... any idea why firebase doesn’t provide automated backups for firestore and storage out of the box? Seems like a no brainer and a valuable service people would pay for.
I'm currently working on a little project backed by Firebase. Really interesting. Good to hear you're doing this - at my day job one of our key factors in choosing a technology is whether we can spin it up in docker-compose in CI and do real-as-possible ephemeral integration tests.
My experience is with .NET Core and the development experience is awesome... Dropped a $250/mo cost down to ~$9/mo moving it from EC2 to Lambdas. Environment variables are loaded no differently between development and prod. Nothing is all over the place, as there's almost zero difference between building a service to run on Linux/Windows vs. a Lambda.
Keep in mind Firebase has a big caveat. Firebase is great... for what it does. However, there's no way to easily migrate the Firebase resources to the larger GCP ecosystem. Firebase does what it does, and if you need anything else, you're out of luck.
Firebase is magic... but I never recommend it for anyone, until there's some sort of migration path.
[Firebaser here] that’s not quite accurate. For cloud functions they’re literally the same. Your Firebase function is actually a GCP function that you can manage behind the scenes.
With Cloud Firestore (our serverless DB) that’s the case as well. And Firebase Auth can be seamlessly upgraded to Google Cloud Identity Platform with a click.
However you’re right that for many Firebase products (Real-time Database, Hosting) there’s no relation to Cloud Resources.
Deploying is super slow. Usually it takes a minute or two, which is already quite long, but sometimes something goes wrong and then you can't redeploy immediately. You have to wait a couple of minutes before being able to redeploy.
To be fair, Firebase recently released a local development tool which alleviates the need to deploy on every change, but I haven't used it yet.
I'm a big Firebase user with Firestore and it has been great... no, not perfect, and the "cold start" is probably the worst issue. However, deployments are easy, the GUI tool keeps getting better (like the extension packages), and the authentication system is quick to implement.
I found Amplify excellent to get up and running quickly. I’d highly recommend it for anyone without a well-oiled CICD setup who wants to quickly get a website up to test out an idea.
Unfortunately, I quickly hit the limits of its configurability (particularly with Cloudfront) and had to move off it within a few months.
> At least on AWS, the "SAM" experience has been probably the worst development experience I've ever had in ~20 years of web development.
> It's so slow (iteration speed) and you need to jump through a billion hoops of complexity all over the place. Even dealing with something as simple as loading environment variables for both local and "real" function invokes required way too much effort.
> Note: I'm not working with this tech by choice. It's for a bit of client work. I think their use case for Serverless makes sense (calling something very infrequently that glues together a few AWS resources).
Is the experience better on other platforms?