Having a Redis job queue is extremely standard, especially in web app development, regardless of language. For one thing, if the web server crashes for any reason, the jobs continue processing, and you also have a log of jobs in case any of them fail.
Are people using it for reliability, though? Are they running Redis in a mode where it persists a journal? If not, then if Redis crashes for any reason, you're in the same situation.
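For what it's worth, "persists a journal" in Redis terms means enabling the append-only file. A minimal redis.conf fragment (with `everysec`, the usual middle-ground fsync policy) would look something like:

```conf
# redis.conf — journal every write to an append-only file so queued
# jobs can be replayed on restart after a Redis crash
appendonly yes
appendfsync everysec   # fsync the AOF roughly once per second
```

Without something like this, the default RDB snapshotting can lose whatever was enqueued since the last snapshot.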
And, like, Mastodon apparently uses a queue to do things like send new-user registration emails. Why not just send the email from the new-user request handler? Then if there's an error, you can tell the user in the response, instead of saying "okay, you should get an email" and then having it go into the ether. I was under the impression this had something to do with not wanting to tie up the HTTP worker, because you want it to quickly get back to serving HTTP requests, but if it can process requests concurrently, there's no issue.
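The "just send it inline" idea can be sketched in a few lines. This is hypothetical, not Mastodon's actual code: `send_welcome_email` stands in for a real SMTP call, and the handler shape is made up for illustration.

```python
def send_welcome_email(address: str) -> None:
    # Stand-in for a real SMTP call; raises on failure.
    if "@" not in address:
        raise ValueError(f"bad address: {address}")

def register_user(address: str) -> dict:
    # ... create the user record here ...
    try:
        send_welcome_email(address)
    except Exception as exc:
        # The user hears about the failure right away in the response,
        # instead of waiting for an email that silently went into the ether.
        return {"status": 500, "error": f"could not send email: {exc}"}
    return {"status": 200, "message": "check your inbox"}
```

The trade-off is that the request now blocks on the mail server, which is fine only if the handler really can serve other requests concurrently in the meantime.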
Similarly, they have an ingest queue for other federated servers sending them updates. But if things are fast, why wouldn't they just process the updates in the HTTP handler? You don't need a reliable queue, because if you crash, the other side won't get an HTTP response and will know to retry.
It may just be out of habit rather than any reasoning specific to the language. Things like sending emails, or doing anything beyond simple database operations, make sense to do in a queue. For instance, I've worked at multiple places where we did this with Celery in Python or BullMQ in JavaScript. At some of them we also had a log that persisted for a certain amount of time, so we could rerun e.g. emails that never got sent.