
My assumption is that it comes down to which use cases actually benefit from it.

More concurrency is not always ideal, especially if you're not in an environment that guarantees you won't have negative impacts or runaway processes (BEAM languages, etc.).

Rails projects are typically so hardwired to use a background queue like Sidekiq that it becomes very natural to delegate most use cases to the queue.




> More concurrency is not always ideal

Is this due to increased memory usage? Does the same apply to Sidekiq if it was powered by Fibers?


Really depends on the job. But generally, yes, the same applies to Sidekiq. I think there is a queue for Ruby called Sneakers that uses Fibers?

If you're making API calls out to external systems, you can use all of the concurrency that you want because the outside systems are doing all of the work.
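
For instance, a minimal sketch with plain threads and the standard library (the URLs are placeholders):

    require "net/http"

    urls = %w[https://example.com https://example.org https://example.net]

    # Each thread spends nearly all of its time blocked on network IO,
    # which releases Ruby's GVL, so the requests overlap.
    threads = urls.map do |url|
      Thread.new { Net::HTTP.get_response(URI(url)).code }
    end

    puts threads.map(&:value).inspect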

If you're making queries to your database, then depending on the efficiency of each query, you could stress your database without any real improvement in overall response time.

If you're doing memory-intensive work on the same system, it can create problems for the server and for garbage collection.
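
A rough illustration (the thread count and allocation size are arbitrary): concurrent jobs that each build a large structure multiply the peak heap, and major GC pauses hit the whole process.

    # Peak heap grows with the number of concurrent jobs.
    before = GC.stat(:major_gc_count)
    threads = 4.times.map do
      Thread.new { Array.new(2_000_000) { |i| i.to_s } }
    end
    threads.each(&:join)
    puts "major GCs during the work: #{GC.stat(:major_gc_count) - before}"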

If you're doing CPU-intensive work, you risk starving other concurrent tasks of the CPU (infinite loops, etc.).
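
With cooperatively scheduled Fibers the starvation is easy to see (a toy sketch):

    # A fiber only gives up the CPU when it yields; a tight CPU-bound
    # loop that never calls Fiber.yield starves everything else on the
    # same thread.
    busy  = Fiber.new { 50_000_000.times { |i| i * i } } # no Fiber.yield
    quick = Fiber.new { puts "quick task" }

    busy.resume   # "quick task" can't run until this whole loop finishes
    quick.resume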

Something like the BEAM is set up for this. Each process has its own heap that's immediately reclaimed when the process ends, without a global garbage collector. The BEAM scheduler actively intervenes to switch which process is executing after a certain amount of CPU time, meaning that an infinite loop or other intensive process wouldn't negatively impact anything else...only itself. It's one of the reasons it typically doesn't perform as well in a straight-line benchmark, too.

Even on the BEAM you still have to be cautious of stressing the DB, but really you have to worry about that no matter what type of system you're on.


But wouldn't using a connection pool solve this problem of "stressing out the database"? I assumed a single connection from the pool would be considered "occupied" until we hear back from the database.

Or are you saying that processing lots of requests/tasks in Rails while waiting for the database would quickly eat up all the CPU? It seems like a good thing - "resource utilization" = servers should do things whenever possible rather than just waiting. Although now that I think about it, you'd only want maximum resource utilization if your database is on a separate server.
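
For reference, a minimal sketch (assuming a Rails app where ActiveRecord is already configured): the pool caps how many connections this one process can check out, but not the cost of whatever each connection runs.

    # With a pool of 5, a sixth concurrent checkout blocks until a
    # connection is returned (and raises
    # ActiveRecord::ConnectionTimeoutError after the checkout timeout).
    ActiveRecord::Base.connection_pool.with_connection do |conn|
      conn.execute("SELECT 1")
    end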


With the database it depends on specific queries. Ideally, you can hammer it and it will be fine.

If you have inefficient queries, N+1 problems, competing locks, full table scans, temp tables being generated, etc., then more concurrency will amplify the problem. That's all I meant.
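
A classic case, with hypothetical Post/Comment models:

    # N+1: one query for the posts, then one more per post.
    Post.limit(50).each do |post|
      post.comments.to_a
    end

    # Eager loading collapses that to two queries, so extra concurrency
    # multiplies a much smaller per-job cost.
    Post.includes(:comments).limit(50).each do |post|
      post.comments.to_a # already loaded, no extra query
    end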



