Hello all! I built Posthook as a simple solution for web applications to schedule one-off tasks. It lets you schedule a request back to your application for a certain time, with an optional JSON payload.
It can be an alternative to running your own systems like Sidekiq, Celery, or Quartz and the operational overhead that comes along with them. Cron jobs and cloud provider tools like CloudWatch Events are also used for job scheduling but lack observability and may force you to frequently make expensive queries to your data store just to see if there is any work to do.
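To make the idea concrete, here is a minimal sketch of what scheduling a one-off callback looks like. The field names (`path`, `postAt`, `data`) and the route are assumptions for illustration; the real Posthook endpoint and request shape may differ.

```python
import json
from datetime import datetime, timedelta, timezone

def build_schedule_request(path, when, data):
    """Return a JSON body asking the scheduler to POST back to `path`
    at time `when`, echoing `data` back to us. Field names here are
    hypothetical, not confirmed Posthook API."""
    return json.dumps({
        "path": path,                # route in your app to call back
        "postAt": when.isoformat(),  # when to fire the hook
        "data": data,                # opaque payload returned as-is
    })

body = build_schedule_request(
    "/webhooks/event-reminder",
    datetime.now(timezone.utc) + timedelta(hours=1),
    {"event_id": "evt_123"},
)
```

Your app would then expose `/webhooks/event-reminder` and do the actual work when the scheduled request arrives.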
Neat! We've been doing something like this at work, and another simple solution (curious if your Google cloud infrastructure is built on something similar) is Azure Service Bus queues with a function app trigger.
Unlike AWS SQS, Azure Service Bus queues can have items scheduled to be enqueued at an arbitrary time; see https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure..... SQS can only delay a message by up to 15 minutes, so you have to implement your own workaround to schedule anything further out.
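One common workaround for the SQS cap is to chain delays: each hop re-enqueues the message with the remaining delay until the target time is reached. A minimal sketch of just the delay arithmetic (the actual re-enqueue via boto3 is omitted):

```python
MAX_SQS_DELAY = 15 * 60  # SQS caps DelaySeconds at 900 seconds

def chained_delays(total_seconds):
    """Split a long delay into <=15-minute hops. A consumer would
    re-enqueue the message with the next hop's delay each time it
    receives it, doing the real work only on the final hop."""
    hops = []
    remaining = total_seconds
    while remaining > 0:
        hop = min(remaining, MAX_SQS_DELAY)
        hops.append(hop)
        remaining -= hop
    return hops
```

For example, a two-hour delay becomes eight 15-minute hops, which also means eight extra deliveries per scheduled task, which is part of why a purpose-built scheduler is attractive.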
And also unlike SQS, they can trigger a function directly, so you don't have to write the listening logic yourself, or even the transactional message handling: if the function fails, the message is automatically re-queued.
Wouldn't it be better to have your own pub/sub mechanism instead of relying on a third party like GCE for a critical component? Take, for example, https://github.com/grpc/grpc/issues/13327. It took them a fair amount of time to fix that issue.
I'm unclear on the part about expensive queries to the data store. What cron/Celery/Quartz job would require a check against the data store that wouldn't also be needed with Posthook? If the work depends on a decision made with data from my data store, that doesn't change based on what I use as a timer for the task. I'm not sure I see a clear value-add here.
With a recurring job running, say, every minute, the query it triggers to send out event reminders is something like "get me all the events starting in one hour." With Posthook, you start jobs only when needed, and the query becomes something like "do I still need to send out a reminder for this event id."
Hey cgenuity,
Great to see the simplicity. I am working on a very similar product, launched a couple of days ago on HN (https://send.rest). Best of luck.
I have added a couple of extra features in send.rest:
1) Calling external APIs (think: pull data from Facebook while running this task in the future, then post the data to my API or send an SMS)
2) Reports and retries (try 3 times, or send me an SMS when it's failing)
3) Recurrence (call this task every Monday at 9:00 pm)
4) SMS and email support built in (send an SMS or email, and pick any service as your backend)
I guess without some extra features it will be difficult to get market acceptance.
I've found that when scheduling tasks for the future, it's important to do a final check before fulfilling the action, whether that's sending an email, a push notification, etc. You don't want to send a reminder for an event that has been deleted, for example. That is why I have decided to keep the scope small and let developers make the final decision there.
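A minimal sketch of that final-check pattern in a hook handler (names and store shape are hypothetical, not Posthook's code):

```python
def handle_reminder_hook(payload, events, send_reminder):
    """Handle a scheduled callback: re-check current state before
    acting, since the event may have been deleted or the reminder
    already sent in the time since the hook was scheduled."""
    event = events.get(payload["event_id"])
    if event is None or event.get("reminder_sent"):
        return "skipped"  # stale hook: acknowledge it, do nothing
    send_reminder(event)
    event["reminder_sent"] = True
    return "sent"
```

The key point is that the scheduler only says "now is the time"; the application owns the decision of whether the action is still valid.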
Reports and retries are definitely things that I think add value though, and I have plans to expand on that.
I also think there might be a market for this kind of product. Great to see more people thinking along the same lines. I got some paid signups last week too.
You might be right about email/push, but it certainly comes in handy when minimal config is required.
It does not support recurring scheduling at the moment.
Right now the retry logic is just one retry 5 seconds after the first failure, at which point the hook gets set to a failed status and failure notifications are sent out. Retries are tricky: depending on how the job is implemented, they can cause more harm than good. So I plan to refine that based on customer feedback.
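The policy described above (one retry after a fixed delay, then mark the hook failed) can be sketched like this; it is an illustration of the stated behavior, not Posthook's actual code:

```python
import time

def run_hook_with_retry(deliver, retry_delay=5.0):
    """Attempt delivery twice: the original attempt plus one retry
    after `retry_delay` seconds. On a second failure the hook is
    considered failed and the caller would send failure notifications."""
    for attempt in (1, 2):
        try:
            deliver()
            return "success"
        except Exception:
            if attempt == 1:
                time.sleep(retry_delay)
    return "failed"
```

Note the caveat from the comment: if `deliver` is not idempotent on the receiving side, even this single retry can cause duplicate work, which is why the policy is deliberately conservative.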
[3] https://aws.amazon.com/message/5467D2/ ... Basically, DynamoDB is a fundamental service for AWS and had implemented some new streams features. These all appeared to be working, but the clusters were running closer to capacity than intended, and when one went down it caused a cascading failure.
If the service outage is on Posthook's side, they would be retried.
If the outage is on the customer's side, all hooks attempted during the outage are marked as failed. I plan to add a feature that lets the developer re-fire all failed hooks in a given time period.
I just posted about a scalable service that does support recurring scheduling (every 15 minutes, as fast as every second, or as long as every n months). Show HN here: https://news.ycombinator.com/item?id=17353486
Questions and feedback are greatly appreciated :)