around there & upwards. You've listed one of the big problems with rabbit @ volume - inevitably you are going to have consumers go down or run too slowly. At a high enough volume you're heading for a crash/partition quickly if you can't respond fast enough (where "fast enough" is a time window that shrinks as queue volume grows). It's a crappy failure mode to have a sword hanging over you like that.
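For what it's worth, you can blunt that failure mode a bit broker-side by capping queue depth so a stalled consumer causes publisher nacks instead of eating the box. A minimal sketch with pika, assuming a local broker; the "events" queue name and the limit are hypothetical/illustrative:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Cap the queue at 1M messages and reject new publishes once it's full,
# instead of letting it grow until the broker's memory alarm fires.
ch.queue_declare(
    queue="events",  # hypothetical queue name
    durable=True,
    arguments={
        "x-max-length": 1_000_000,
        "x-overflow": "reject-publish",
    },
)
conn.close()
```

That trades data loss at the edge (publishers must handle the nack) for broker survival, so it's a mitigation rather than a fix.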
Other log-based messaging technologies like Kinesis, Kafka, etc. do not care if a consumer goes down - the broker just retains the log and consumers track their own offsets - and are thus much safer.
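To make the contrast concrete: a log-based consumer only advances an offset, so if it dies the broker keeps the data and the consumer resumes where it left off on restart. A minimal sketch with confluent_kafka, assuming a local broker; the topic, group id, and process() handler are hypothetical:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "billing-workers",    # hypothetical consumer group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,      # commit only after processing succeeds
})
consumer.subscribe(["events"])        # hypothetical topic


def process(payload: bytes) -> None:
    """Hypothetical handler - stands in for whatever the consumer does."""
    print(payload)


try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            continue                  # real code would log/handle this
        process(msg.value())
        consumer.commit(message=msg)  # offset advances only after success;
                                      # a crash here just means a replay
finally:
    consumer.close()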
In my case it would have been an issue regardless of what the queue/pubsub tech was (talking on-prem, not GCP Pub/Sub or AWS, which would just chug along effortlessly, not care, and take our money), since the entire consumer stack was toast and dumping unprocessed data was a no-no. The real issue there was my manager not having a spine, coupled with not letting my team do its job autonomously. Even with the wonky setup we had, it would have been dead simple to chain additional clusters - stupid, but easy with the automation we had built.
However, adding another Kafka or Pulsar node would have been much easier.