"As a Django developer, there wasn’t a straightforward and obvious way to just do things in the background on a page request. People suggested I try Celery, but I didn’t like that option at all. A distributed task queue? What? I just want to do something in the background without making the user wait; I don’t need some super comprehensive ultimate computing machine. The whole notion that I would need to set up and configure one of these supported brokers made my spidey sense tingle"
It's not really that hard. I just latch on to a broker that I'm already using elsewhere in my stack (Redis). Celery makes it super simple to run a command in the background.
> I just want to do something in the background without making the user wait
What's more, the strategy of just spawning a thread to do async processing doesn't scale. Once you hit the limit of requests that a machine can process, you'll need a distributed work queue anyway. Or do Go goroutines run through some sort of managed execution queue?
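For concreteness, the thread-spawning approach being dismissed here can be sketched like this in plain Python (the "welcome email" task and handler names are hypothetical, not from the thread):

```python
import threading

results = []

def send_welcome_email(address):
    # Stand-in for slow background work (hypothetical helper):
    # just record that it ran.
    results.append(address)

def handle_request(address):
    # Fire-and-forget: kick off the work in a thread and return to
    # the user immediately, without waiting for it to finish.
    worker = threading.Thread(target=send_welcome_email, args=(address,))
    worker.start()
    return worker, "202 Accepted"
```

The limitation the comment points at: these threads live and die with the one web-server process, so once that machine is saturated there is nothing to hand the work off to.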
Celery is one of the easiest things to set up. It really doesn't get much simpler. Your configuration lives in your Django project as Python source code, and `celeryd` is no-fuss.
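A minimal sketch of what that in-project configuration can look like, assuming Redis as the broker (the project name, broker URL, and task are illustrative, not from the thread):

```python
# celery.py inside the Django project (names are illustrative)
from celery import Celery

app = Celery("myproject", broker="redis://localhost:6379/0")

@app.task
def send_welcome_email(address):
    # Runs in a worker process started by celeryd/`celery worker`,
    # not in the request/response cycle.
    ...

# In a view, calling send_welcome_email.delay("user@example.com")
# enqueues the task and returns immediately.
```

This is the whole setup story being defended here: one Python module plus a running worker, with the broker being something (Redis) already in the stack.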
Well, the complaint isn't entirely that it's hard, per se, but that there are many different ways to do things concurrently in Python, and that presents too many choices to a novice programmer. In Go, you just say `go myfunction()` and you're done. There's a lot of value in that.
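To illustrate the "too many choices" point: Python's standard library alone offers `threading`, `multiprocessing`, `concurrent.futures`, and `asyncio`. The closest stdlib analogue to `go myfunction()` is probably a `concurrent.futures` submit (function name mirrors the Go snippet and is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def my_function(x):
    # Some background computation.
    return x * 2

# Closest stdlib analogue to `go myfunction()`: submit and move on.
executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(my_function, 21)
# ... request handling continues; the result is available later:
print(future.result())  # 42
```

Each of those modules has different trade-offs (GIL, process startup cost, event loops), which is exactly the decision burden a single `go` keyword avoids.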
Note that Celery isn't just about what you describe here. A distributed task queue is beneficial in Go (or node.js) too. Async is one thing, distribution is another. Also, web servers are often volatile environments, which matters for tasks that must complete.
Until, that is, you tackle a problem for which one machine isn't big enough, at which time you need to do all of the "real" solutions anyway, and `go myfunction()` buys you...nothing.
There's way too much focus today on new languages that are designed to work on just one machine, and scale there – as if vertical scaling were the true problem we all face, when in fact it's not. /sigh
He's talking about a web application doing a task asynchronously. There is no reason why it would be hard to scale Go web servers across multiple machines. Yes, you could use a distributed broker system to do asynchronous tasks, and some language platforms leave you no other choice, but there is no reason to think his use case actually requires anything more than running an asynchronous task.