Be careful where you read "best practice" from. Some folks like to brag about their new high-scalability architecture, but it's way over-engineered for most software.
The pattern I described of using SQLite tables to mimic a message queue will be very easy to switch to PostgreSQL or MySQL. If you hit serious scale you can switch to something like Kafka or Kinesis. That'll be a bit more involved, but by then your revenue and team will hopefully have scaled to match the compute needs. When you switch to Kafka, you can use Spark or Storm instead of cron. If you switch to Kinesis, you might choose AWS Lambda. So you're not writing yourself into a corner.
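For concreteness, here's a minimal sketch of that SQLite-as-queue pattern in Python. The table and column names (`jobs`, `payload`, `status`) are my own illustrative choices, not from the parent comment: producers insert rows, and a cron-driven worker claims pending rows in a transaction. Nearly the same SQL runs on PostgreSQL or MySQL, which is what makes the later migration cheap.

```python
import sqlite3

# Hypothetical "queue" table: producers INSERT rows, a cron-driven worker
# claims and processes pending rows. Names are illustrative only.
conn = sqlite3.connect("queue.db", isolation_level=None)  # autocommit; explicit transactions below
conn.execute("""
    CREATE TABLE IF NOT EXISTS jobs (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        payload TEXT NOT NULL,
        status  TEXT NOT NULL DEFAULT 'pending'   -- 'pending' -> 'done'
    )
""")

def enqueue(payload: str) -> None:
    conn.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))

def process_pending(handler) -> None:
    # Claim a batch inside one write transaction so the next cron run
    # (or a concurrent worker) doesn't double-process the same rows.
    conn.execute("BEGIN IMMEDIATE")
    rows = conn.execute(
        "SELECT id, payload FROM jobs WHERE status = 'pending' LIMIT 100"
    ).fetchall()
    for job_id, payload in rows:
        handler(payload)
        conn.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
    conn.execute("COMMIT")

# Example cron entry point:
# enqueue("send-welcome-email:42")
# process_pending(print)
```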
I've always wondered about this: doesn't the OS read buffer pretty much keep the whole thing in memory anyway, if you have memory to spare? Does it make that much of a difference?