
Well, if you have external API calls in your Django app and you are running sync (which I would absolutely advise; with async it is really easy to get unpredictable performance that is sometimes hard to track down), having the ability to run some views async is really crucial.

Otherwise your application might be humming along smoothly one moment and come to a sudden, complete standstill (or see performance plummet) the next, when a random external API endpoint starts to time out. Yes, I have been bitten by this :-)

To fix this while running sync I have dedicated separate application processes for the views that make external calls, but this makes the routing complex. Alternatively you can juggle timeouts on the external API calls, but this is hard to get right and you constantly have to check whether calls are timing out only because the external endpoint happens to be a bit slower at that moment.

So I think this solves a very real-world challenge.
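
To make this concrete, here is a minimal sketch of the kind of async view I mean, assuming Django 3.1+ async views and the httpx client (the URL and timeout are placeholders):

    # Async view that awaits an external API call, so a slow endpoint
    # only suspends this coroutine instead of tying up a sync worker.
    import httpx
    from django.http import JsonResponse

    async def payment_status(request):
        async with httpx.AsyncClient(timeout=5.0) as client:
            try:
                resp = await client.get("https://api.example.com/status")
                data = resp.json()
            except httpx.HTTPError:
                # Degrade gracefully instead of hanging on the upstream call.
                data = {"status": "unknown"}
        return JsonResponse(data)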




> plummets when a random external API endpoint starts to time out

You should add something like https://pypi.org/project/circuitbreaker/

Continuously failing external requests should not make each one of your responses slow.
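
Roughly how the decorator is used, going by the project's README (the endpoint, thresholds and fallback below are made up):

    import requests
    from circuitbreaker import circuit, CircuitBreakerError

    @circuit(failure_threshold=5, recovery_timeout=30,
             expected_exception=requests.RequestException)
    def fetch_profile(user_id):
        # After 5 consecutive failures the breaker opens and this call
        # fails fast for 30 seconds instead of waiting out the timeout.
        resp = requests.get(
            f"https://api.example.com/profiles/{user_id}", timeout=2)
        resp.raise_for_status()
        return resp.json()

    def get_profile_or_fallback(user_id):
        try:
            return fetch_profile(user_id)
        except (CircuitBreakerError, requests.RequestException):
            return None  # serve a cached/empty value instead of blocking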


Interesting, will certainly try it out, thanks!

> Continuously failing external requests should not make each one of your responses slow.

It is not really a matter of the responses becoming slow. The problem is that if you are running sync with, say, 6 application server processes and you get just 6 hits on an endpoint in your app that is hung up on an external API call, your application stops processing requests altogether.
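
A toy reproduction of what I mean (project and URL names are made up), run with six sync workers, e.g. gunicorn myproject.wsgi --workers 6:

    import requests
    from django.http import HttpResponse

    def report(request):
        # No timeout: if the external endpoint hangs, this worker is stuck
        # here and cannot serve any other request in the meantime.
        resp = requests.get("https://flaky-partner.example.com/report")
        return HttpResponse(resp.text)

    # Six concurrent requests to this view while the partner API hangs
    # occupy all six workers, and every other URL in the app stops responding.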


Exactly this. I see the whole Django async stuff as far more relevant for applications with lots of traffic or high request rates, where you are already running on beefy infrastructure with a ton of workers and any small improvement in performance translates into huge real-world cost savings. Your standard blog, not so much.


Isn't that why gunicorn(+gevent) was implemented, doing the switching behind the scenes without waiting for that API call to finish? Is there a good reason I should manually "await" network calls from now on?


Yes, gevent does also fix this problem, but it gives you a lot of new problems when running all requests async. In my experience mostly with views that (for some specific calls, e.g. for a specific customer) keep the CPU tied up, for example by serializing a lot of data. Random other requests will be stuck waiting and seem slow, and it is much harder to find out which view is the actual problem.

I have deployed applications both under gevent and sync workers in gunicorn and would personally never use gevent again, especially in bigger projects. It makes the behavior of the application unpredictable.


How does async/await solve this? I would have thought it has exactly the same problem?


It does have the same problem. Standard library CPU-heavy functions are generally not async-friendly. You'll still be stuck blocking while that 500 kB JSON payload serializes.
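
A sketch of the limitation, assuming Django async views on Python 3.9+ (load_rows is a hypothetical data fetch): awaiting only yields control at await points, so a CPU-heavy json.dumps still blocks the event loop for its full duration. Pushing it onto a thread keeps the loop responsive, though the GIL means it is no cure for CPU-bound work.

    import asyncio
    import json
    from django.http import HttpResponse

    async def export(request):
        rows = await load_rows(request)   # hypothetical async data fetch

        # json.dumps(rows) here would block the whole event loop while
        # the large payload serializes; to_thread moves it off the loop.
        body = await asyncio.to_thread(json.dumps, rows)
        return HttpResponse(body, content_type="application/json")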


I've been looking at whether this would be appropriate for something like server-side Mixpanel event tracking, or for sending transactional emails or text notifications through a third-party service like Mailgun or Twilio.

From what I can tell it is not intended for that purpose, and outright will not work.



