RDS has severe performance limitations: you can't provision more than 30K IOPS, which is roughly half the performance of a low-end consumer SSD and about 1/20 that of a decent PCIe SSD. You're way better off running the DB on decent dedicated hardware.
You can get 500K random reads per second and 100K random writes per second using RDS Aurora.
If you truly need more than 30K IOPS, I would recommend leveraging read replicas, a Redis cache, and other solutions before just "throwing money at the problem" and purchasing a million IOPS.
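To make the cache suggestion concrete, here's a minimal sketch of the cache-aside pattern in Python. A plain dict stands in for Redis, and `db.__getitem__` stands in for a Postgres query; the key names are made up for illustration. In a real deployment you'd use redis-py with a TTL on the cached entries.

```python
def cached_read(key, cache, db_fetch):
    """Cache-aside: serve from cache, fall back to the DB on a miss."""
    if key in cache:
        return cache[key], True   # cache hit: no IOPS spent on the DB
    value = db_fetch(key)         # miss: one read against the database
    cache[key] = value            # populate so repeat reads stay off the DB
    return value, False

# Demo: a dict standing in for the DB, another for the Redis cache.
db = {"user:1": "alice"}
cache = {}
v1, hit1 = cached_read("user:1", cache, db.__getitem__)  # miss, hits the DB
v2, hit2 = cached_read("user:1", cache, db.__getitem__)  # served from cache
```

Every repeat read served from the cache is an IOPS you don't have to buy from RDS.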
You can't just buy a single enterprise-grade NVMe SSD and call it a day. Are you planning on buying enough to populate at least 2-3 servers with multiple devices, then setting up some type of synchronous replication across them? What type of software layer are you going to use to provide high availability for your data? DRBD? How are you going to manage all of the different failure modes (failed SSD, network partition, split brain, etc.)? How are you going to test it?
I'm afraid you are seriously underestimating the operational capabilities required to successfully operate a highly-available, distributed, SSD storage layer.
Nope, but if I pay a few K for a service, I expect it to scale to performance at least comparable to a very low-end device. Why would I use DRBD in a Postgres cluster? I'm not underestimating anything; I'm simply pointing out that RDS is an overpriced and crappy service. Proper setup and operation of a Postgres cluster is a manageable task. What you totally cannot manage on AWS are the risks of a single tenant using 30% of the resources, or lengthy multi-AZ outages caused by bugs in an extremely complex control layer, etc.
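For reference, Postgres handles synchronous replication natively (no DRBD required). A minimal primary-side config sketch, with illustrative standby names:

```
# postgresql.conf on the primary -- hypothetical names, minimal sketch
synchronous_commit = on                                   # commits wait for standby confirmation
synchronous_standby_names = 'ANY 1 (pg_standby_a, pg_standby_b)'
# each standby connects via primary_conninfo with an application_name
# matching one of the names listed above
```

With `ANY 1`, a commit waits for at least one of the two standbys to confirm the flush, so a single standby failure doesn't block writes.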