I've been using it recently to try to run a large serverless app offline for dev purposes. It kind of works, but the experience is OK...not great.
They seem to be piling all their energy into creating mocks for new (paid) services, when it might be worth consolidating, as the original mocks have a lot of issues.
- Documentation is non-existent; expect to trawl through GitHub issues to work out how something works, as it's quite opaque (e.g. API Gateway invocation)
- The CloudFormation implementation is completely broken (no intrinsic functions), so unless your stack is simple you're pretty much required to use an AWS-API-based devops tool instead (e.g. Terraform)
- The APIs are not fully complete, which means Terraform either breaks on redeploy when it tries to get a resource's status or, in the best case, triggers a redeploy of certain resources every time (the most exotic thing we're using is SNS)
- The test suite is...light; I've seen a few things go through their CI and break it
- There's some inconsistent behavior - you'll set an env var, find it's not implemented for a certain case, and be left scratching your head
- Expect to have to make pull requests yourself to fix things
This isn't a whinge - I understand it's partially open source and you can just fix the issues yourself when they come up, like we are doing. But just a heads up for anyone who may naively look at it and think it's a silver bullet...you'll have to go through all these steps.
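As an example of the API Gateway opacity: as far as we could piece together from GitHub issues, LocalStack invokes REST APIs through a special `_user_request_` path segment on its edge port (4566 in recent versions; older releases used one port per service). A sketch of building that URL - verify the format against your LocalStack version:

```python
def localstack_invoke_url(api_id: str, stage: str, path: str,
                          host: str = "http://localhost:4566") -> str:
    """Build the URL LocalStack uses to invoke an API Gateway REST API.

    The `_user_request_` segment is LocalStack-specific; on real AWS the
    API would instead be served from an execute-api.* domain.
    """
    return f"{host}/restapis/{api_id}/{stage}/_user_request_/{path.lstrip('/')}"

# e.g. localstack_invoke_url("abc123", "dev", "/users")
#  -> http://localhost:4566/restapis/abc123/dev/_user_request_/users
```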
According to the docs, it has local fault injection (i.e. you can tell it to respond with "resources unavailable / exhausted"), which makes it "even better than the real thing" for development. (But I have never used it, so I can't comment on how well that works.)
I usually prefer to develop against as live a setup as makes sense, because test environments often miss something critical. However, it is often hard to verify error-condition behaviour (on both test and live systems) - to debug the response to S3 errors or resource exhaustion, you actually need them to happen.
Looks like this project does provide for testing these conditions. Neat.
I figured I would chime in. I've used LocalStack to "mock" AWS in CI for my two most recent employers, and it worked without a hitch. Mocked DynamoDB, SQS, Kinesis and SNS without an issue. There was a single gotcha about the way FIFO queues needed to be named, and it also didn't accept a default parameter for something relating to SQS, but other than that it was smooth sailing.
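For reference, the FIFO naming gotcha is an AWS rule rather than a LocalStack one: FIFO queue names must end with the `.fifo` suffix. A tiny helper along these lines (the function name is my own) avoids tripping over it:

```python
def fifo_queue_name(base: str) -> str:
    """AWS requires FIFO queue names to end in ".fifo"; the CreateQueue
    call must also set the FifoQueue attribute to "true"."""
    return base if base.endswith(".fifo") else f"{base}.fifo"

# e.g. sqs.create_queue(QueueName=fifo_queue_name("orders"),
#                       Attributes={"FifoQueue": "true"})
```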
As an occasional user, the biggest gripe I have is that the usual credentials-only setup is not supported - in our current code base, the connection to AWS is made by specifying the region and credentials and leaving the actual endpoint discovery to happen automagically. When using LocalStack, you instead have to explicitly specify the endpoint of each service you want to connect to.
What this means is that if I want to test something locally, it's not as simple as spinning up the Docker container - I also have to make a code modification for each service I want to interact with.
It's not a big deal - the benefits of being able to test locally outweigh the minor inconvenience - but it still bugs me every time.
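One way to avoid that per-service code modification is to derive the endpoint from an environment variable, so the same client factory works against both real AWS and LocalStack. A sketch - `LOCALSTACK_URL` is my own convention here, not a variable boto3 reads:

```python
import os

def aws_client_kwargs(default_region: str = "us-east-1") -> dict:
    """Build kwargs for boto3.client()/boto3.resource().

    With LOCALSTACK_URL unset, boto3 performs its normal endpoint
    discovery from region + credentials. When it is set (e.g. to
    http://localhost:4566), all traffic is routed to LocalStack instead.
    """
    kwargs = {"region_name": os.environ.get("AWS_DEFAULT_REGION", default_region)}
    endpoint = os.environ.get("LOCALSTACK_URL")
    if endpoint:
        kwargs["endpoint_url"] = endpoint
    return kwargs

# Usage: boto3.client("sqs", **aws_client_kwargs())
```

That keeps the LocalStack knowledge out of the application code proper; spinning up the container plus exporting one variable is then enough.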
I used LocalStack a while ago to test DynamoDB queries, then ended up switching over to DynamoDB Local. But I often come back to see how it fares with some of the lambdas I have, and most of the time there is something missing in the implementation. Still, it's great to see this being developed, and I hope some of these gaps get closed soon. I wonder how the Pro version compares to the free one.
It is not a fully functional local AWS stack. LocalStack uses moto to mock requests to AWS endpoints, and moto does not support 100% of AWS's endpoints.
Agreed. It's more like an 85% functioning local version of the 80/20 of AWS. It doesn't cover all services but it is fairly comprehensive and does cover almost all the major ones.
Overall it's a really cool project. I had some networking issues running it inside Docker, though. When I would publish a message to an SNS topic, it would come back with an exception, triggering my exception handler - yet it actually did deliver the message to the topic. I couldn't work it out and left it. I might revisit it later, but that was an annoyance for me.
I tried using it for some integration testing of my AWS services (RDS, API Gateway, EC2, etc.). I decided that was too much work, and since it'd force me to dockerize much of my application anyway, I might as well use FOSS solutions that I can already run in Docker. Easier to test, and cheaper.
I've always wanted to try LocalStack, but I worry that I'll get 90% there and then find out it doesn't support some key piece. AWS is a huge surface area.
Or I'll be using it fine but then I want to use shiny AWS feature X.
Terraform is in the same boat; though I've found the community keeps the functionality very up-to-date.
I tried it a year or so ago when I was first starting AWS serverless dev. IIRC it supported very few services. Most of our lambdas were behind API Gateway and a Cognito authorizer, but LocalStack didn't have a Cognito user pool. We cobbled something together with nginx to replace CloudFront for a while.
I don't remember quite what the last straw was, but there were big gaps in what we needed. Mind you, I keep finding I'm just one more AWS service away from what I need with the real thing, so I'm not sure that game ever ends.
Not sure if you're suggesting using LocalStack for production, but if you are, I'm not sure that's its intended purpose.
This is the first time I've ever heard of LocalStack, but just judging by their website, they don't seem to be trying to replace AWS for production. They are just trying to offer devs a local testing environment.
Absolutely. I'm not sure about the poster's intent, but their concerns still apply to using it for developer testing.
If your application uses features that don't work in LocalStack, it can be very tricky to start using LocalStack to test that application. Conversely, if you are already using LocalStack to test/develop your application, that may discourage you from using AWS features which might be a good fit, but aren't supported by LocalStack.
Also, I would certainly recommend against solely testing with LocalStack, since you may run into situations where it behaves slightly differently than AWS, and if you develop targeting LocalStack's behavior, you could end up with bugs when AWS behaves differently in production.
I'd really strongly recommend not using this in most scenarios. LocalStack sits in an uncomfortable no man's land between a mock and the real thing. Most of the functions are stubbed via boto/moto, some services run off separate Docker images, and some of moto's missing functionality is filled in.
I can't really think of a situation where this is better than mocking, because of the subtle differences that are introduced here.
I wonder what common approach people use to test AWS-coupled features locally without LocalStack (or to test features not supported by LS). Do you have a dedicated AWS environment for this use case, or are we not supposed to run them locally (e.g. the implementation is mocked locally, and AWS-coupled features are tested in a remote test env)?
I use this for DynamoDB and SQS mocking -- it's so convenient. Dockerized, too, so you're one `docker-compose.yml` away from using it in CI.
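For anyone curious, roughly what such a compose file looks like - the `SERVICES` list and port mapping follow LocalStack's documented conventions, but check them against your LocalStack version:

```yaml
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"             # edge port; all services are routed through it
    environment:
      - SERVICES=dynamodb,sqs   # limit startup to what the tests actually need
```

Restricting `SERVICES` keeps container startup fast in CI.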
The development is not all in-house (and I count this as a definite strength of the project). It brings together a number of other AWS mocking libraries.
I've had this on my radar for a while, and I'm now on a project where it could have a role. Does anyone have any recommendations or strategies to follow?
Yeah, LocalStack seems to be a similar idea, with more support for the "new stack" (e.g. lambdas) and a bigger focus on testing/mocking AWS during dev rather than replacing AWS for production workloads.