If you are I/O-bound, then asyncio or good old-fashioned gevent will do great.
If you are CPU-bound, then use multiprocessing.
If you need to accept "jobs" from elsewhere, use RabbitMQ (with or without Celery).
If you have a mixed CPU/I/O workload that fits the worker pattern, then you would do all three. At the top level you have a RabbitMQ consumer fetching jobs from a remote queue and putting them into a multiprocessing queue, which is processed by roughly N = cpu_count() processes. Each of those processes then uses asyncio/gevent to do its work (see the sketch below).
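Here's a minimal sketch of that layering, under some assumptions: the pika client for RabbitMQ, a made-up queue name "jobs" on a localhost broker, and asyncio.sleep standing in for real I/O work.

```python
import asyncio
import json
import multiprocessing as mp

import pika  # assumed RabbitMQ client; Celery would sit one level above this


async def handle_job(job):
    # Stand-in for real I/O-bound work (HTTP call, DB query, ...).
    await asyncio.sleep(0.1)
    print("done:", job)


async def worker_loop(q):
    # Each worker process runs its own event loop and fans jobs out
    # to asyncio tasks, so slow I/O doesn't serialize the process.
    loop = asyncio.get_running_loop()
    pending = set()
    while True:
        job = await loop.run_in_executor(None, q.get)  # blocking get, off the loop
        if job is None:  # sentinel: shut down
            break
        task = asyncio.create_task(handle_job(job))
        pending.add(task)
        task.add_done_callback(pending.discard)
    await asyncio.gather(*pending)


def worker(q):
    # One OS process per core; each owns its own interpreter and GIL.
    asyncio.run(worker_loop(q))


def main():
    q = mp.Queue(maxsize=mp.cpu_count() * 2)
    for _ in range(mp.cpu_count()):
        mp.Process(target=worker, args=(q,), daemon=True).start()

    # Top level: a blocking RabbitMQ consumer feeding the local queue.
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="jobs", durable=True)

    def on_message(ch, method, properties, body):
        q.put(json.loads(body))  # hand off to a worker process
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="jobs", on_message_callback=on_message)
    channel.start_consuming()


if __name__ == "__main__":
    main()
```

The bounded multiprocessing queue gives you backpressure against the broker, and you could swap asyncio for gevent inside the workers if that's your stack.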
I saw this at one company with a RabbitMQ/Celery setup: every time a new software engineer came in, they would complain about RabbitMQ and ask why the company used it. The infrastructure had been running without hiccups for years. At one point the company let go of many experienced engineers, and this time one developer found an issue in the code and blamed it on Rabbit, claiming it was locking the queue. There were no senior developers left to contest it, so he convinced a manager to swap it out for Redis. The rewrite took him about two months. Surprise: the same issue existed on Redis. The Redis solution works fine, but has its own limitations...
So do you recommend a single Python process running asyncio, multiprocessing, or both? Or do people normally split these things amongst several Python processes?
My understanding is that multiprocessing spawns separate interpreters, each with its own GIL, whereas anything that stays within a single Python process still contends on one GIL.
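Concretely, here's a minimal sketch of what I mean (the burn function is made up as a stand-in for CPU-bound work):

```python
import multiprocessing as mp


def burn(n):
    # Pure-Python CPU work; threads in one process would serialize on the GIL.
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    # Each pool worker is a separate OS process with its own interpreter
    # and its own GIL, so this actually runs in parallel across cores.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        print(pool.map(burn, [5_000_000] * mp.cpu_count()))
```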
I am in general quite comfortable with the actor model, and I would ideally use Erlang/Elixir here, but I can't for various reasons.