
Great write-up. I think you have clearly laid out the things that a newcomer to the space should know about queues. Bookmarked!

Talking to a bunch of engineering teams I found that some use cases for queues are very generic (almost identical use case and implementation across teams). Specifically, webhook handling is something that keeps coming up. We've been working for a few months on a queue that's specifically for the ingestion and delivery of webhooks. Do you see a future for use-case-specific queueing systems instead of defaulting to a general purpose queue?

In our case we abstract away the actual queue implementation and behave more like you would expect a standard webhook endpoint to.

For reference, it's https://hookdeck.io




The way I’ve handled incoming webhooks is to run a Lambda+API Gateway to put them into an SQS queue, and the inverse, SQS to Lambda, for sending webhooks out.
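Roughly, the inbound half is just this (a minimal Python sketch; the env var and handler names are illustrative, not our exact setup):

    import json
    import os
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = os.environ["WEBHOOK_QUEUE_URL"]  # illustrative name

    def handler(event, context):
        # API Gateway (proxy integration) hands us the raw webhook body
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
        # Respond quickly; real processing happens off the queue
        return {"statusCode": 202, "body": json.dumps({"queued": True})}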

This was possible only because AWS provides these services, of course. If you’re offering an infinitely scalable HTTP endpoint to soak up webhooks and allow me to query them at my leisure, or put them into a queue for me, that would be useful.

I haven’t looked into hookdeck in detail yet, will post again once I do.


That's essentially it.

We've heard from teams having issues dealing with large, uncontrollable spikes from their webhook providers, and we can smooth that out. There are additional benefits that can be introduced before it gets to your own infra, such as verifying signatures, filtering events, etc.

API Gateway + SQS + Lambda is definitely a common and good approach. My understanding is that you often start running into other problems. Hitting DB connection limits from serverless invocations is a recurring one! I'm hoping we can make the troubleshooting / replayability easier as well.

Thanks for sharing your approach and opinion! Hoping to hear more!


Are you aware of any nice articles or open source projects for setting up the ability to dispatch webhook events? I am currently thinking of using GCP PubSub for it and having it consumed by a consumer (a GCP Cloud Function?) which makes the network request, and re-queues the message to the topic on a non-200 response. If it keeps failing 10 times it will get sent to a dead-letter queue.


Not off the top of my head, no. Your plan sounds good except for the "and requeues it back" part. Ideally you should just ignore failure (don't acknowledge/delete) and have the queue control plane decide when and how to retry—unless you have special logic (exponential backoff?) around that. If you do need to re-queue, just make sure you re-queue before you delete/ack the current message, otherwise you might lose jobs.
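For example, as a Pub/Sub-triggered Cloud Function (a rough Python sketch; TARGET_URL is a placeholder, and it assumes the subscription has retries and a dead-letter topic configured), failing the invocation is enough:

    import base64
    import os
    import requests

    TARGET_URL = os.environ["TARGET_URL"]  # placeholder: endpoint you deliver to

    def dispatch_webhook(event, context):
        # Background Cloud Function triggered by Pub/Sub; data is base64-encoded
        payload = base64.b64decode(event["data"])
        resp = requests.post(TARGET_URL, data=payload, timeout=10)
        # Raising on a non-2xx response fails the invocation, so (with retries
        # enabled) Pub/Sub redelivers and, after the configured max delivery
        # attempts, routes the message to the dead-letter topic.
        resp.raise_for_status()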


Is that something you would be interested in a hosted solution for? We've built the infrastructure to deal with incoming webhooks, but we've been exploring the idea of also leveraging it for dispatching. Turns out the same infrastructure works both ways!



