A lot of the debate and discussion here seems to come from the fact that the example program demonstrates concurrency across requests (each concurrent request is handled by a different worker), but no concurrency within a request: the code serving each request is essentially one straight line of execution, which pauses while it waits for a DB query to return.
A more interesting example would be a request that requires multiple blocking operations (database queries, syscalls, etc.). You could do something like:
    # Non-concurrent approach
    def handle_request(request):
        a = get_row_1()
        b = get_row_2()
        c = get_row_3()
        return render_json(a, b, c)
    # asyncio approach
    async def handle_request(request):
        a, b, c = await asyncio.gather(
            get_row_1(),
            get_row_2(),
            get_row_3())
        return render_json(a, b, c)
    # Naive threading approach
    def handle_request(request):
        # Pass the callable and its arguments separately;
        # target=get_row_1(a_q) would call it immediately in this thread.
        a_q = queue.SimpleQueue()
        t1 = threading.Thread(target=get_row_1, args=(a_q,))
        t1.start()
        b_q = queue.SimpleQueue()
        t2 = threading.Thread(target=get_row_2, args=(b_q,))
        t2.start()
        c_q = queue.SimpleQueue()
        t3 = threading.Thread(target=get_row_3, args=(c_q,))
        t3.start()
        t1.join()
        t2.join()
        t3.join()
        return render_json(a_q.get(), b_q.get(), c_q.get())
    # concurrent.futures with a ThreadPoolExecutor
    def handle_request(request, thread_pool):
        # submit() takes the callable itself, not its result
        a = thread_pool.submit(get_row_1)
        b = thread_pool.submit(get_row_2)
        c = thread_pool.submit(get_row_3)
        return render_json(a.result(), b.result(), c.result())
These examples demonstrate what people find appealing about asyncio, and would also tell you more about how the choice of concurrency strategy affects response time for each request.
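To make the response-time difference concrete, here is a minimal, self-contained sketch: the `get_row` coroutine is a hypothetical stand-in for the `get_row_*` calls above, simulating a blocking operation with a 0.1 s `asyncio.sleep`. Run sequentially the waits add up; run through `asyncio.gather` they overlap.

```python
import asyncio
import time

# Hypothetical stand-in for get_row_1/2/3: simulates a 0.1 s blocking
# operation (DB query, syscall, etc.) and returns its argument.
async def get_row(n):
    await asyncio.sleep(0.1)
    return n

async def sequential():
    # Each await finishes before the next starts: ~0.3 s total
    return [await get_row(1), await get_row(2), await get_row(3)]

async def concurrent():
    # All three waits overlap: ~0.1 s total
    return list(await asyncio.gather(get_row(1), get_row(2), get_row(3)))

def timed(coro_fn):
    start = time.perf_counter()
    result = asyncio.run(coro_fn())
    return result, time.perf_counter() - start

rows_seq, t_seq = timed(sequential)
rows_con, t_con = timed(concurrent)
```

Both calls return the same rows; only the elapsed time differs, which is exactly the per-request latency win the asyncio version buys you.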