How to Generate Millions of HTTP Requests (2012) (dak1n1.com)
88 points by vinnyglennon on April 29, 2015 | 25 comments



Another tool I have been using on my personal project is wrk [1]. It's pretty simple to use and can generate thousands of connections with multiple threads.

[1]: https://github.com/wg/wrk


There's a fork from Gil Tene (Azul) that fixes wrk's coordinated omission problem, which he explains in the readme of that repo [1].

[1] https://github.com/giltene/wrk2
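
For readers unfamiliar with the term: coordinated omission is when a load generator times each request from the moment it was actually sent, rather than from when it should have been sent to sustain the target rate, so a stalled server quietly hides the queueing delay it causes. A minimal sketch of the corrected measurement in Go (the URL, rate, and request count are placeholders; this illustrates the idea, not wrk2's actual implementation):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        const url = "http://localhost:8080/" // hypothetical target
        interval := time.Second / 100        // hold a constant 100 req/s

        start := time.Now()
        for i := 0; i < 1000; i++ {
            // Every request has a scheduled send time on a constant-rate timeline.
            scheduled := start.Add(time.Duration(i) * interval)
            time.Sleep(time.Until(scheduled))

            resp, err := http.Get(url)
            if err != nil {
                continue
            }
            resp.Body.Close()

            // Time from the *scheduled* send, not the actual one, so a stalled
            // server is also charged for the requests it forced us to delay.
            fmt.Println(time.Since(scheduled))
        }
    }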


There are plenty of HTTP benchmarking applications for Unix-like OSes, but I still haven't found a good one for Windows. Has anybody come across one?


I am under the impression that it's possible to compile Wrk for Windows [1], although I have not personally done so.

Incidentally, I haven't read the linked article beyond skimming the first few paragraphs, but the request rates he was able to generate with various tools seem remarkably low. ApacheBench is the least performant load generator since it's single-threaded, yet it can still generate tens of thousands of requests per second (about 25,000) on an i7 desktop machine. wrk can generate significantly more since it uses all CPU cores of the load-generation machine.

[1] https://github.com/wg/wrk/issues/49


Check out Gatling [1], which is built on Scala, Akka, and Netty and, last time I checked, works on Windows.

The only thing missing is an out-of-the-box solution for distributed load generation, which I believe is being developed. For now, you can use a 'scale-out' approach [2], which lets you combine the data from multiple Gatling instances into a single report, but as a post-processing step.

[1] http://gatling.io
[2] http://gatling.io/docs/2.1.5/cookbook/scaling_out.html


A paid service, but the best I've seen is https://blitz.io. You can choose region(s), the number of requests, multiple endpoints, and lots of curl-like options.


I used to be a big fan of Blitz.io and used it a lot for benchmarking my client's services. However, they changed their payment model so that the only option is a monthly subscription, which doesn't fit my use case.

Does anyone know of an alternative with a pay-per-use option? In the meantime I've reverted to Bees with Machine Guns.


https://loadimpact.com/ seems to have pay-per-test as an alternative to subscriptions.


Here's one to spin up some temporary EC2 instances: https://github.com/newsapps/beeswithmachineguns


> One important thing to keep in mind when load-testing is that there are only so many socket connections you can have in Linux. This is a hard-coded kernel limitation, known as the Ephemeral Ports Issue. You can extend it (to some extent) in /etc/sysctl.conf; but basically, a Linux machine can only have about 64,000 sockets open at once. So when load testing, we have to make the most of those sockets by making as many requests as possible over a single connection. In addition to that, we'll need more than one machine to do the load generation. Otherwise, the load generators will run out of available sockets and fail to generate enough load.

If the number of ports is the bottleneck, you can configure more IPs on a single host, as sketched below.
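
A rough sketch of that approach in Go, assuming the load generator actually has the extra addresses configured (the IPs and the target URL here are made up):

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    // clientBoundTo returns an http.Client whose connections originate from
    // the given local IP, so each configured address contributes its own
    // ~64k ephemeral ports.
    func clientBoundTo(ip string) *http.Client {
        dialer := &net.Dialer{LocalAddr: &net.TCPAddr{IP: net.ParseIP(ip)}}
        return &http.Client{
            Transport: &http.Transport{DialContext: dialer.DialContext},
        }
    }

    func main() {
        // Hypothetical secondary addresses configured on this host.
        for _, ip := range []string{"10.0.0.2", "10.0.0.3"} {
            resp, err := clientBoundTo(ip).Get("http://target.example/")
            if err != nil {
                fmt.Println(ip, err)
                continue
            }
            resp.Body.Close()
            fmt.Println(ip, resp.Status)
        }
    }

(For the record, the /etc/sysctl.conf knob the article alludes to is net.ipv4.ip_local_port_range, which controls how many ephemeral ports each local address gets.)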


ipv6


Disclaimer: I’m talking about my startup :)

StormForger (https://stormforger.com) is cloud-based load testing as a service.

If you'd like to create test cases using a JavaScript DSL, run load tests and performance analyses, and let us take care of all the provisioning and data-analysis work, feel free to sign up for our private beta. Just drop us a line if you have further technical questions.

Since big numbers are the topic of this thread, you may want to read this one as well: https://news.ycombinator.com/item?id=7920930


One tool I've used a lot recently is vegeta [1]. It can be used both as a CLI and as a Go library. You can generate HTML (with nice plotting), CSV, or JSON reports and launch distributed attacks.

[1] https://github.com/tsenart/vegeta
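
For what it's worth, library usage looks roughly like this per the project readme; treat it as a sketch, since the lib API has changed across versions (URL, rate, and duration are placeholders):

    package main

    import (
        "fmt"
        "time"

        vegeta "github.com/tsenart/vegeta/lib"
    )

    func main() {
        rate := vegeta.Rate{Freq: 100, Per: time.Second} // 100 req/s
        targeter := vegeta.NewStaticTargeter(vegeta.Target{
            Method: "GET",
            URL:    "http://localhost:8080/", // hypothetical target
        })
        attacker := vegeta.NewAttacker()

        var metrics vegeta.Metrics
        for res := range attacker.Attack(targeter, rate, 10*time.Second, "demo") {
            metrics.Add(res)
        }
        metrics.Close()

        fmt.Printf("99th percentile latency: %s\n", metrics.Latencies.P99)
    }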


Here's another #golang HTTP benchmark tool that's pretty cool. It's much like ab/httperf, with the benefit that it prints a histogram of the request latencies at the end, which is very useful.
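
Since the tool isn't linked, here's a minimal illustration of the idea in Go: a fixed pool of workers fires requests, latencies are collected, and coarse buckets are printed at the end (the target URL and bucket bounds are arbitrary):

    package main

    import (
        "fmt"
        "net/http"
        "sync"
        "time"
    )

    func main() {
        const (
            url         = "http://localhost:8080/" // hypothetical target
            total       = 1000
            concurrency = 50
        )

        var (
            mu        sync.Mutex
            latencies []time.Duration
            wg        sync.WaitGroup
        )

        // Fixed worker pool pulls jobs off a channel and records latencies.
        jobs := make(chan struct{})
        for w := 0; w < concurrency; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for range jobs {
                    start := time.Now()
                    resp, err := http.Get(url)
                    if err != nil {
                        continue
                    }
                    resp.Body.Close()
                    mu.Lock()
                    latencies = append(latencies, time.Since(start))
                    mu.Unlock()
                }
            }()
        }
        for i := 0; i < total; i++ {
            jobs <- struct{}{}
        }
        close(jobs)
        wg.Wait()

        // Coarse histogram of response times, printed ab/httperf-style.
        bounds := []time.Duration{time.Millisecond, 10 * time.Millisecond,
            100 * time.Millisecond, time.Second}
        counts := make([]int, len(bounds)+1)
        for _, l := range latencies {
            i := 0
            for i < len(bounds) && l > bounds[i] {
                i++
            }
            counts[i]++
        }
        for i, b := range bounds {
            fmt.Printf("<= %v: %d\n", b, counts[i])
        }
        fmt.Printf(" > %v: %d\n", bounds[len(bounds)-1], counts[len(counts)-1])
    }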



I used Siege a few years ago for some benchmarking and was not impressed. It's a pity because I really like the interface, but Siege is technically lacking.

There is a warning in the configuration file against using keep-alive with no explanation as to why. That makes it useless when you need to benchmark persistent connections.

Each Siege "user" (i.e. client) gets its own thread, which means large numbers of users consume a lot of resources. Depending on your CPU, you will also hit a point where more time is spent switching between threads than actually sending requests. I wasn't able to exceed 400 users on my 2011 test machine. That makes it useless when you need to benchmark large numbers of connections.


I've been a fan of siege, but recently I had to test an app under very high load and had big trouble getting siege to do it. The default Ubuntu install has some hard limits on file descriptors, and when I compiled it from source it seemed to segfault all the time. I ended up switching over to httperf.


Has anyone used Faban (http://faban.org/)?


Apparently, if you want millions of requests per second, your app has to be built on Erlang ;)


+1 for JMeter; you can create complex HTTP requests to a site, simulating user behavior.


If Python isn't a problem, I would suggest using locust.io.


On an unrelated note, that website looks so 2011!


loader.io (disclaimer: it's a product of a division of the company I work for).


Good series of posts but can we get a [2012] tag on this?


Sure; done.



