
Hello all! I built Posthook as a simple solution for web applications to schedule one-off tasks. It lets you schedule a request back to your application for a certain time, with an optional JSON payload.
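To make that concrete, scheduling a hook boils down to one authenticated POST to the Posthook API. The field names below (`path`, `postAt`, `data`) are from memory and may not match the current docs exactly, so treat this as a sketch:

```python
import json
from datetime import datetime, timedelta, timezone

def build_hook_request(path, post_at, data=None):
    """Build the JSON body for scheduling a hook.

    Field names are illustrative; check the Posthook docs for
    the exact schema before relying on them.
    """
    return {
        "path": path,  # the route on your app that Posthook will POST back to
        "postAt": post_at.replace(microsecond=0).isoformat(),
        "data": data or {},  # optional payload echoed back in the callback
    }

body = build_hook_request(
    "/hooks/event-reminder",
    datetime.now(timezone.utc) + timedelta(hours=1),
    {"event_id": 42},
)
print(json.dumps(body))
```

You would send this body, along with your API key in a header, to Posthook's scheduling endpoint, then handle the POST it sends back to `/hooks/event-reminder` when the time arrives.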

It can be an alternative to running your own systems like Sidekiq, Celery, or Quartz, along with the operational overhead that comes with them. Cron jobs and cloud provider tools like CloudWatch Events are also used for job scheduling, but they lack observability and may force you to frequently make expensive queries to your data store just to see if there is any work to do.

Questions and feedback are greatly appreciated :)




Neat! We've been doing something like this at work, and another simple solution (curious if your Google Cloud infrastructure is built on something similar) is Azure Service Bus queues with a function app trigger.

Unlike AWS SQS, Azure Service Bus queues can have items scheduled to be enqueued at an arbitrary time; see https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure..... SQS can only delay up to 15 minutes, so you have to implement your own hack to schedule anything later than that.
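The usual hack is to re-enqueue the message with the maximum delay until the target time is within a single hop. A minimal sketch of just the delay arithmetic (each value would be passed as `DelaySeconds` to SQS's `SendMessage`):

```python
MAX_SQS_DELAY = 900  # SQS caps DelaySeconds at 15 minutes (900 seconds)

def delay_hops(seconds_until_due):
    """Split a far-future deadline into a chain of <=15-minute delays.

    The consumer re-enqueues the message with the next hop's delay
    until the final hop, at which point it does the real work.
    """
    hops = []
    remaining = max(0, int(seconds_until_due))
    while remaining > 0:
        hop = min(remaining, MAX_SQS_DELAY)
        hops.append(hop)
        remaining -= hop
    return hops
```

For example, `delay_hops(2000)` yields `[900, 900, 200]`: two maximum-length re-enqueues, then a final 200-second delay before processing.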

And also unlike SQS, they can trigger a function, so you don't have to deal with the listening logic yourself, or even the transactional message handling: if the function fails, the message is automatically re-queued.


AWS just announced "ECS Daemon Scheduling": https://aws.amazon.com/about-aws/whats-new/2018/06/amazon-ec... It sounds promising.


Thanks! I do heavily use Google Cloud Pub/Sub, but it does not support scheduling items, so I only enqueue items when they are due for processing.


Wouldn't it be better to have your own pub/sub mechanism instead of relying on a third party like GCE for a critical component? Take, for example, https://github.com/grpc/grpc/issues/13327. It took them a fair amount of time to fix that issue.


Interesting... AWS leaked that subscriptions to SQS are in the works, but it's good to see that Azure has that already.


> AWS leaked that subscriptions to SQS are in the works

Interesting, where did you hear that?


I'm unclear about the part about expensive queries to the data store. What cron/Celery/Quartz job would require a check against the data store that wouldn't also need to be done with Posthook? It seems like if work depends on a decision made with data from my data store, that doesn't change based on what I use as a timer for the task. I'm not sure that I see a clear value-add here.


With a recurring job, say every minute, the query triggered by the job to send out event reminders would be something like "get me all the events starting in one hour." With Posthook, you are able to start jobs only when needed, and the query changes to something like "do I still need to send out a reminder for this event id?"
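To make the contrast concrete, here are the two query shapes side by side (table and column names are hypothetical):

```python
# Cron-style: a poller runs every minute and scans for any due work,
# whether or not anything is actually due.
POLL_QUERY = """
SELECT id FROM events
WHERE starts_at BETWEEN now() + interval '59 minutes'
                    AND now() + interval '60 minutes'
  AND reminder_sent = false
"""

# Posthook-style: the callback already carries the event id, so the
# handler only confirms that this one reminder is still needed.
POINT_QUERY = """
SELECT reminder_sent FROM events WHERE id = %(event_id)s
"""
```

The first query runs unconditionally every minute; the second runs only when a hook actually fires, and is a cheap primary-key lookup.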


Hey cgenuity, great to see the simplicity! I am working on a very similar product, launched a couple of days ago on HN (https://send.rest). Best of luck.

But I have added a couple of extra features in send.rest:

1) Calling external APIs (think: pull data from Facebook while running this task in the future, then post the data to my API or send an SMS)

2) Reports and retries (try 3 times, or send me an SMS that it's failing)

3) Recurrence (call this task every Monday at 9:00 PM)

4) SMS and email come built in (send an SMS or email, and pick any service as your backend)

I guess that without some extra features it will be difficult to get market acceptance.


Thank you, good luck to you as well!

I've found that when scheduling tasks for the future, it's important to do a final check before fulfilling the action, whether that's sending an email, a push notification, etc. You don't want to send a reminder for an event that has been deleted, for example. That is why I have decided to keep the scope small and let developers make the final decision there.
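That final check can be a thin guard at the top of the callback handler. In this sketch, `get_event` and `send_reminder` are stand-ins for your own data-store and notification code:

```python
def handle_reminder_hook(payload, get_event, send_reminder):
    """Handle the scheduled POST from the scheduler.

    payload is the JSON attached when the hook was scheduled;
    get_event and send_reminder are injected so this sketch stays
    independent of any particular data store or mail service.
    """
    event = get_event(payload["event_id"])
    # Final check: the event may have been deleted or cancelled
    # since the hook was scheduled an hour (or a month) ago.
    if event is None or event.get("cancelled"):
        return "skipped"
    send_reminder(event)
    return "sent"

# Example with in-memory stand-ins:
events = {42: {"id": 42, "cancelled": False}}
sent = []
result = handle_reminder_hook({"event_id": 42}, events.get, sent.append)
print(result)  # -> sent
```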

Reports and retries are definitely things that I think add value though, and I have plans to expand on that.


I also think there might be a market for this kind of product. Great to see more people thinking along the same lines. I got some paid signups last week as well.

You might be right about email/push, but it certainly comes in handy if minimal configuration is required.


There are already mature products in this market, e.g. RunMyJobs: https://rmj.redwood.com/

It started out as a distributed cron over 20 years ago and has been available as a SaaS product for a few years now.


With send.rest, and using a hook, can I specify a POST request with POST data and custom request headers?


send.rest supports HAR 1.2 calls, so you can technically call any API.


Thank you for this. I've been on the hunt for this exact service for a while now, but a lot of the options were lacking. Excited to give this a try.


Thanks! Happy to provide value, feel free to use the live chat or the support email for any specific questions or requests you might have.


Does Posthook support recurring scheduling (i.e. every 15 minutes)?

Also, is there retry logic? I.E. 3 retries, delay 60 seconds between retries?


It does not support recurring scheduling at the moment.

Right now the retry logic is just one retry 5 seconds after the first failure, at which point the hook is set to a failed status and failure notifications are sent out. Retries are tricky because, depending on how the job is implemented, they can cause more harm than good, so I plan to refine that based on customer feedback.


> Retries are tricky because depending on how the job is implemented they can cause more harm than good.

Ah, yes, there's nothing like bringing capacity back online only to have it crushed by all your customers retrying at the same time.

AWS got bitten hard by this [3], and there's a blog post [1] about it, which is linked from the docs for their client software [2].

[1] https://aws.amazon.com/blogs/architecture/exponential-backof...

[2] https://docs.aws.amazon.com/general/latest/gr/api-retries.ht...

[3] https://aws.amazon.com/message/5467D2/ ... basically, DynamoDB is a fundamental service for AWS and had implemented some new streams features. This all appeared to be working, but they were running closer to capacity than intended, and when a cluster went down it caused a cascading failure.

And see this post, which links to that RCA: https://blog.scalyr.com/2015/09/irreversible-failures-lesson...
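The fix described in that AWS blog post is exponential backoff with jitter. A minimal sketch of the "full jitter" variant, where each retry sleeps a random amount up to an exponentially growing cap:

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Full-jitter backoff: sleep a random amount between 0 and
    min(cap, base * 2**attempt), so retries from many clients
    spread out instead of arriving in synchronized waves.
    """
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

# First few attempts: upper bounds of 0.5s, 1s, 2s, 4s, 8s ...
for attempt in range(5):
    print(round(backoff_delay(attempt), 2))
```

The randomness is the important part: plain exponential backoff still lets all clients retry at the same instants, which is exactly the stampede the parent describes.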


Seems like allowing the user to specify the retry logic (if any) covers the "tricky" pieces: let the user define the number of retries and the delay between them.


Agreed, thank you :). Added to the board.


What happens when you have a service outage? Do you directly mark the hooks as failed, or do you retry once after your service has been restored?


If the service outage is on Posthook's side, they would be retried.

If the outage is on the customer's side, all hooks that were attempted during the outage would be marked as failed. I plan on adding a feature that will allow the developer to re-fire all failed hooks in a given time period.


I just posted a scalable service that does support recurring scheduling (every 15 minutes, as fast as every second, or as infrequently as every n months). Show HN here: https://news.ycombinator.com/item?id=17353486


I use cron-job.org for recurring hooks. It's not as developer-friendly as Posthook, but it gets the job done.


What is the absolute easiest way to do this on your own machine?


systemd timers, most likely. You already have systemd, and unit configurations are small.

For dynamic stuff, using normal systemd template units should work.
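For a concrete one-off example, a pair of user units like these (names, paths, and the command are all illustrative) schedules a single future run; `systemctl --user enable --now reminder.timer` arms it:

```ini
# ~/.config/systemd/user/reminder.timer
[Unit]
Description=One-off reminder

[Timer]
OnCalendar=2018-07-01 09:00:00
# Fire on the next boot if the machine was off at the scheduled time.
Persistent=true

[Install]
WantedBy=timers.target

# ~/.config/systemd/user/reminder.service
[Unit]
Description=Send the reminder

[Service]
Type=oneshot
ExecStart=/usr/local/bin/send-reminder
```

For truly ad-hoc scheduling, `systemd-run --user --on-calendar="2018-07-01 09:00" /usr/local/bin/send-reminder` creates an equivalent transient timer without writing any unit files.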



