
I've used it a bit in production. Our use case avoided a lot of the potential issues you mentioned, so it may not be entirely helpful:

* serialization: the input was passed as a JSON string argument. The output was a file uploaded to S3, so just the URL was returned, again as a JSON string (see the sketch after this list).

* global variables: the program was quite self-contained: there was an initial state setup that was not mutated afterwards, so RQ's fork-exec model (the default) worked well enough.
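To make that concrete, here's a minimal sketch of the shape of it (the function names, bucket, key, and the build_report helper are hypothetical, not the actual code):

    # tasks.py -- RQ job functions need to live in a module the workers can import
    import json

    import boto3  # assumed client for the S3 upload


    def generate_report(payload_json):
        """Take input as a JSON string, upload the output file to S3,
        and return just the URL, again as a JSON string."""
        payload = json.loads(payload_json)
        local_path = "/tmp/report.csv"                   # hypothetical output file
        build_report(payload, local_path)                # hypothetical work function
        bucket, key = "my-bucket", "reports/report.csv"  # assumed names
        boto3.client("s3").upload_file(local_path, bucket, key)
        return json.dumps({"url": f"https://{bucket}.s3.amazonaws.com/{key}"})

Enqueueing it is then just:

    import json

    from redis import Redis
    from rq import Queue

    q = Queue(connection=Redis())
    job = q.enqueue("tasks.generate_report", json.dumps({"customer_id": 42}))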

Sorry, I don't have much to say about performance and scaling. It was quite fine for our needs, and we could scale horizontally up to a certain point by just starting extra worker processes, and beyond that with more VMs. Since they all listened on the same queue, it worked fine. (The number of items in our queue never really hit any of Redis' limitations either.)
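A rough sketch of what "just starting extra processes" looks like (queue name assumed): each copy of a script like this attaches another worker to the same queue, and jobs get picked up by whichever worker is free.

    # worker.py -- start as many copies of this as you need;
    # they all pull from the same queue, so each extra process adds throughput
    from redis import Redis
    from rq import Queue, Worker

    redis = Redis()                              # same Redis instance for every worker
    queue = Queue("reports", connection=redis)   # assumed queue name

    if __name__ == "__main__":
        Worker([queue], connection=redis).work()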

RQ lets you customize the worker model, so you could, for instance, use threads instead of processes.
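For example, RQ ships a SimpleWorker that runs jobs in the worker's own process instead of forking; a threaded worker would be a custom Worker subclass in the same spirit. A minimal sketch (queue name assumed):

    # simple_worker.py -- run jobs without fork-exec
    from redis import Redis
    from rq import Queue
    from rq.worker import SimpleWorker  # built-in non-forking worker

    redis = Redis()
    SimpleWorker([Queue("reports", connection=redis)], connection=redis).work()

If I remember right, you can also pick the worker class from the `rq worker` CLI via its --worker-class option.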

Regarding monitoring: there's RQ Dashboard[1], which gives a nice web interface to view jobs and failures, and to restart them.

1: https://python-rq.org/docs/monitoring/




Thank you very much for the detailed response :) I really appreciate the thoroughness; it convinced me to use RQ.


+1 datapoint in favor of rq: https://twitter.com/sdan_io/status/1285687026386444288 was built with it.




