
Right now it doesn't store any consumer offsets, and you can get either at-most-once or at-least-once guarantees.

But I found the idea of multiple consumer groups per queue very interesting. Basically, you would still be able to fetch queue messages as you do now, with dequeued items deleted, but you could also use something like 'get queue_name:consumer_name', which would internally create a consumer group with a stored offset and serve messages from that offset. On a reliable-read failure, each consumer group would keep its own queue of failed deliveries, check that queue first, and serve those failed items before anything else. If the source queue's head has advanced past the consumer group's offset, the group's offset would simply restart from the source queue head.

This way you can get Kafka-like multiple consumer groups per queue as an additional feature.
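A minimal sketch of the scheme described above, in Python. All names here (`push`, `get`, `nack`, `trim`) are hypothetical and not the project's actual API; the point is the consumer-group bookkeeping: each group stores an offset plus its own failed-delivery queue, failed items are served first, and an offset that falls behind the source queue head snaps forward to the head.

```python
from collections import deque

class Queue:
    """Hypothetical queue with Kafka-like consumer groups (sketch only)."""

    def __init__(self):
        self.items = {}   # offset -> message
        self.head = 0     # smallest offset still stored in the source queue
        self.next = 0     # offset assigned to the next appended message
        self.groups = {}  # consumer_name -> {"offset": int, "failed": deque}

    def push(self, msg):
        self.items[self.next] = msg
        self.next += 1

    def trim(self, new_head):
        # Source queue head advances, e.g. after plain destructive dequeues.
        for off in range(self.head, new_head):
            self.items.pop(off, None)
        self.head = new_head

    def get(self, consumer):
        # 'get queue_name:consumer_name' creates the group on first use.
        g = self.groups.setdefault(consumer,
                                   {"offset": self.head, "failed": deque()})
        # Serve this group's failed deliveries first.
        if g["failed"]:
            return g["failed"].popleft()
        # If the source head moved past this group's offset, restart from head.
        if self.head > g["offset"]:
            g["offset"] = self.head
        if g["offset"] >= self.next:
            return None  # nothing new for this group
        msg = self.items[g["offset"]]
        g["offset"] += 1
        return msg

    def nack(self, consumer, msg):
        # Reliable-read failure: remember the message for redelivery.
        self.groups[consumer]["failed"].append(msg)
```

Note that two groups reading the same queue advance independently, which is what gives the Kafka-like fan-out on top of the existing destructive dequeue.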
