Even if you're not all-in on AWS, you can use Parameter Store for next to nothing. The latency may be a bit higher, but in most cases that shouldn't matter, since secrets usually have a nonzero TTL associated with them and can be cached for that long.
If you're not using AWS, using Parameter Store (or Credstash) becomes a turtles-all-the-way-down problem: you need AWS credentials to fetch your secrets, and those credentials have to be provisioned somehow. At that point, whatever you provisioned the AWS credentials into might as well store your other secrets, too.
If you are using AWS, EC2 itself acts as the trusted third party: an IAM instance role (or a task role, for containers) grants the appropriate permissions to whatever is executing, so no long-lived credentials ever get copied onto the box.
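To make that concrete, here's a minimal sketch of the pattern using boto3 against SSM Parameter Store. The parameter name, region, and TTL are placeholders, not anything prescribed above; the point is that on EC2 the SDK picks up instance-role credentials on its own, and a small cache makes the extra latency irrelevant.

```python
import time
import boto3

# Hypothetical parameter name and region; adjust to your own setup.
PARAM_NAME = "/myapp/prod/db_password"
CACHE_TTL_SECONDS = 300  # treat the fetched secret as fresh for five minutes

_cache = {}  # parameter name -> (value, fetched_at)

# On EC2, boto3 resolves credentials from the instance profile automatically.
# Off AWS you'd have to provision credentials yourself, which is the turtles
# problem described above.
ssm = boto3.client("ssm", region_name="us-east-1")

def get_secret(name: str) -> str:
    """Fetch a SecureString parameter, caching it for CACHE_TTL_SECONDS."""
    cached = _cache.get(name)
    if cached and time.time() - cached[1] < CACHE_TTL_SECONDS:
        return cached[0]
    response = ssm.get_parameter(Name=name, WithDecryption=True)
    value = response["Parameter"]["Value"]
    _cache[name] = (value, time.time())
    return value

if __name__ == "__main__":
    print(get_secret(PARAM_NAME))
```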
If you do nothing, you end up with secrets manually provisioned by logging into a machine and splatting out a configuration file, which is still a more secure solution than a turtles-problem parameter store.
I'm assuming readers are just starting out, where auto-scaling and bloggable container architectures make very little sense. Configuration management is a prerequisite for those. That said, having configuration management in place also makes the please-invest-in-us container ecosystem much less attractive, given that you can easily fall back to singleton-container-on-an-instance patterns (which is a great deployment pattern regardless of your approach).
To that end, yes, their servers are probably hand-rolled or provisioned with minimal scripting or a light Ansible setup, so manual secret deployment is to be expected.
If somebody is attempting to auto-scale without having this solved already, they are making a mistake.