Hacker News
Locust – Open-source Load Testing Tool (locust.io)
107 points by dedalus on Jan 6, 2021 | 21 comments



Yep, I like Locust, though it's been a while since I've needed it.

Its main advantage over simpler tools like ab or httperf is that it can simulate almost any kind of user interaction. Whereas other tools tend to just spam static, predefined HTTP requests at a server (which is great when that's all you need), Locust is great at simulating complex user behavior, such as users clicking through different pages of your site, or clients making common sequences of API requests, perhaps to a variety of servers at different URLs.

In my case, I was load testing a RESTful DNS management API over HTTP and, as part of the same performance test, simulating DNS traffic to the nameservers managed by that API. Locust is horizontally scalable and incredibly flexible, since you can write arbitrary Python code to make it do whatever you want.
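For anyone who hasn't used it: a Locust scenario is just a Python class whose methods are tasks, and each simulated user repeatedly runs one task picked from a weighted list (e.g. @task(3) vs. @task(1)). Here's a stdlib-only sketch of that weighted task-picking idea — illustrative only, not the actual Locust API (the task names and weights are made up):

```python
import random

# Stdlib-only sketch of the idea behind a Locust user class (not the
# real Locust API): each simulated user repeatedly performs one task
# chosen from a weighted list, the way @task(3) makes a task three
# times as likely as @task(1) in a real locustfile.
def view_items():
    return "GET /items"

def create_item():
    return "POST /items"

WEIGHTED_TASKS = [(view_items, 3), (create_item, 1)]

def simulate_user(n_actions, seed=42):
    rng = random.Random(seed)
    # Locust similarly expands weights into a flat task list.
    tasks = [fn for fn, weight in WEIGHTED_TASKS for _ in range(weight)]
    return [rng.choice(tasks)() for _ in range(n_actions)]
```

In a real locustfile you'd subclass HttpUser, decorate the methods with @task, and issue requests through self.client — but the point is that the scenario is ordinary Python, so arbitrary logic (like the DNS traffic above) fits right in.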


> In my case, I was load testing a RESTful DNS management API over HTTP and, as part of the same performance test, simulating DNS traffic to the nameservers managed by that API. Locust is horizontally scalable and incredibly flexible, since you can write arbitrary Python code to make it do whatever you want.

This sounds really interesting and like good engineering (tm), would you mind sharing which company this was for? Maintaining good blackbox e2e tests and the infrastructure is one thing, finding time (and space in the roadmap) to do that and load test on top of that is a great engineering indicator IMO.

I wonder if it would be possible to translate regular recorded user activity (assuming the API was instrumented), find some way to randomize & generate it (a la quickcheck), and re-feed it into the system. There are some sticky points, like pre-existing state (if you take the recording of any given day), but you could probably discover/evaluate that from the API's requirements (if you saw a `GET /resources/1` which succeeded, then obviously a `POST /resources` must have occurred beforehand).
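That dependency-discovery idea can be sketched in a few lines. This is purely illustrative — the function, the recorded requests, and the single inferred rule are all made up for the example: shuffle recorded calls into a new scenario, but keep any `GET /collection/<id>` after the `POST /collection` that must have created it:

```python
import random

# Illustrative sketch (all names invented for this example): randomize a
# recorded sequence of API calls into a new load-test scenario, while
# preserving one inferred ordering rule -- a successful GET on
# /collection/<id> implies a POST to /collection happened beforehand.
def randomize_scenario(recorded, seed=0):
    rng = random.Random(seed)
    order = list(recorded)
    rng.shuffle(order)

    posted = set()      # collections already "created" in this scenario
    result, waiting = [], []
    for method, path in order:
        collection = path.rsplit("/", 1)[0]
        needs_post = method == "GET" and ("POST", collection) in recorded
        if method == "POST":
            posted.add(path)
            result.append((method, path))
            # release any reads that were waiting on this collection
            ready = [r for r in waiting if r[1].rsplit("/", 1)[0] in posted]
            waiting = [r for r in waiting if r not in ready]
            result.extend(ready)
        elif needs_post and collection not in posted:
            waiting.append((method, path))
        else:
            result.append((method, path))
    return result + waiting
```

A real version would have to infer many more rules (auth, deletes, idempotency), but the shape — shuffle, then fix up inferred happens-before constraints — stays the same.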


I’m not sure if this is what you’re looking for, but my colleagues and I created a tool (https://github.com/zalando-incubator/transformer) to convert actions recorded in a browser (in HAR format) to Locust scenarios. Recorded state (like cookies) can be programmatically removed by user-defined plugins.


Whoa this looks awesome, thanks -- more awesome tech out of Zalando (I'm familiar with patroni/postgres-operator)!


> This sounds really interesting and like good engineering (tm), would you mind sharing which company this was for? Maintaining good blackbox e2e tests and the infrastructure is one thing, finding time (and space in the roadmap) to do that and load test on top of that is a great engineering indicator IMO.

You’re really hearing about a unique case. The engineering org there had basically no consistency across teams/projects. Every product team was independent and responsible for their own services. So different teams deployed their services to different infrastructure with different programming languages, different frameworks, different databases, different monitoring and logging solutions, and so on.

I was in the QA org and was embedded as an SDET on a particularly well-functioning product team and had a lot of independence to solve problems as I saw fit with support from great forward-thinking devs. I got their e2e/integration tests to a point where it was super easy for devs to add at least some new tests themselves, which freed up some of my time to slowly build out a performance test.

But, very few other teams were getting to the point of doing any non-trivial load testing. I tried evangelizing Locust a little bit, but there wasn’t enough demand for it without a turnkey solution - something a bit hard to make without much consistency across teams. Not to mention the culture and processes weren’t there to consolidate on a single solution anyhow.

So, the grass is always greener. That company’s engineering org has since been decimated by layoffs and outsourcing, unfortunately.


oof, thanks for sharing -- I was hoping it wasn't the case, but it looks like this:

> I was in the QA org and was embedded as an SDET on a particularly well-functioning product team and had a lot of independence to solve problems as I saw fit with support from great forward-thinking devs. I got their e2e/integration tests to a point where it was super easy for devs to add at least some new tests themselves, which freed up some of my time to slowly build out a performance test.

is hard to find sustainably happening in the wild.


We've been successfully using Locust for automated performance and load tests for a couple of years now. Locust is very flexible, the community is great, and Python-based test implementations are a big win for engaging our quality engineers in test design and implementation.

Since our platform (Appian[0]) has a pretty heavy/complex client, we built a thin abstraction library [1] to empower both our quality engineers as well as some of our more advanced customers to write simple automated performance tests.

[0] https://www.appian.com

[1] https://pypi.org/project/appian-locust


Tag1 Consulting has been working on a Locust-inspired implementation in Rust for some time now, called Goose [1]. They blog about their work regularly. [2] Has anyone given it a try? I'm not affiliated with them, but I appreciate the open source effort.

[1] https://github.com/tag1consulting/goose

[2] https://www.tag1consulting.com/blog/real-life-goose-load-tes...


FYI - the screenshot on the homepage is outdated, and "Slaves" has been renamed to "Workers" according to the changelog.


Another open source tool that does something similar is https://k6.io/ which I have been using recently. Tests are written in JS and you can define thresholds such as % errors, P95 and P99 response time for specific request components e.g. blocking, receiving, waiting, etc. There is also a SaaS product connected to it for doing tests from locations around the world.


I'm one of the people behind k6 - thanks for mentioning it :)

As for Locust, I've been testing and reviewing a whole slew of load testing tools, and blogging about them, and Locust is definitely a favourite of mine. It used to perform badly a couple of years ago, but with the new FastHttpLocust class its performance is quite OK. It's also the only tool that allows you to write test code in regular Python (there is another tool - the Grinder - that also lets you write in Python, but it is Java/Jython, which means no PyPI libraries). If you're into Python you should definitely try out Locust.


There was a “load testing is hard” kind of discussion about 3 days ago.

It might be worth linking here.




Another load testing conversation from 2017 https://news.ycombinator.com/item?id=15733910


I have used Gatling in the past, which seems quite similar. Gatling is written in Scala, so the scripts are statically typed and compiled, but other than that the two seem to do the same thing.

Does someone have a comparison between the two?


This has been my workhorse when I've needed to load test: https://github.com/processone/tsung. It's still maintained.


Tsung is very solid, and high-performing. It lacks real scripting though - you can't write test cases as code.


Been using it for a while! Really easy to get going, and it does everything I need.


I didn’t like it. We had to do too much work to reconfigure between runs. We went back to ab.


What kind of reconfiguration between runs?



