When going to two nodes you probably need to handle split brain somehow, otherwise you end up with a database state that's hard to merge. So you'd better get three, so that two can reach consensus, or at least an external arbitration node that decides who is up. At that point you have lots of complexity ... while for HN being down for a bit isn't much of a (business) loss. For other sites the maths is probably different. (I assume they keep off-site backups and could recover from those fairly quickly.)
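A minimal sketch of why an odd node count (or an external arbiter) helps: each node only keeps acting as primary while it can see a strict majority of the cluster, so a partitioned minority steps down instead of diverging. The node names and the `can_reach` check here are hypothetical, not any particular system's API.

    # Quorum sketch: a node only stays primary while it can see a strict
    # majority of the cluster, so a partitioned minority stops taking writes
    # instead of producing state that later has to be merged.
    from typing import Callable, Iterable

    def has_quorum(self_id: str,
                   peers: Iterable[str],
                   can_reach: Callable[[str], bool]) -> bool:
        """Return True if this node plus its reachable peers form a majority."""
        peers = list(peers)
        members = [self_id, *peers]
        reachable = 1 + sum(1 for p in peers if can_reach(p))  # count ourselves
        return reachable > len(members) // 2

    # With three nodes, cutting one off leaves the other two with 2 of 3 votes,
    # so exactly one side keeps accepting writes.
    if __name__ == "__main__":
        peers = ["node-b", "node-c"]        # hypothetical names
        reachable_now = {"node-b"}          # pretend node-c is partitioned away
        print(has_quorum("node-a", peers, lambda p: p in reachable_now))  # True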
I haven't run a ton of complicated DR architectures, but how complicated is the controller in just hot+cold?
E.g. some periodic replication + an external down detector + a break-before-make failover that brings up the cold, accepting that any unreplicated state is trashed and keeping the hot inactive until manual reactivation
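Roughly what that controller could look like, as a sketch: probe the hot node, and after a few consecutive failures fence it first (break) and only then promote the cold standby (make), with re-activation of the old hot left as a manual step. The address, the TCP health probe, and the `fence_hot`/`promote_cold` hooks are assumptions standing in for whatever fencing and promotion commands a real setup would use.

    # Sketch of a hot+cold, break-before-make failover controller.
    # Assumption: the hot node is "down" after N failed TCP probes, and
    # fence_hot()/promote_cold() wrap your actual fencing/promotion commands.
    import socket
    import time

    HOT = ("hot.example.internal", 5432)   # hypothetical address
    FAILURES_BEFORE_FAILOVER = 3
    PROBE_INTERVAL_S = 10

    def probe(addr: tuple, timeout: float = 2.0) -> bool:
        """Basic liveness check: can we open a TCP connection to the hot node?"""
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    def fence_hot() -> None:
        # Break first: make sure the old hot can no longer take writes
        # (power it off, drop its VIP, revoke its credentials, ...).
        ...

    def promote_cold() -> None:
        # Make second: bring the standby up on the last replicated state,
        # accepting that anything not yet replicated is lost.
        ...

    def run() -> None:
        failures = 0
        while True:
            failures = 0 if probe(HOT) else failures + 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                fence_hot()
                promote_cold()
                break  # old hot stays inactive until someone re-enables it by hand
            time.sleep(PROBE_INTERVAL_S)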
Well, there you have to keep two systems maintained, plus keep synchronisation/replication working. And you need to keep a system running that decides whether to fail over. This triples the work. At least.
A wise colleague recently explained to me that if you build things HA from the start, it's only a little more than 2x. If you try to make an _existing_ system HA, it's 3x at best. HN is not a paid service; they can be down for a few hours per year, no problem. We're not all going to walk away in disgust.