No, it is not a hard problem; it is an impossible, unsolvable problem. Software cannot alter its behavior to match your desires if you do not tell it what you desire. Clustering is not some simple monolithic thing where you just "enable clustering" and that's that. There are billions of possible clustering setups. Setting it up is already as simple as it can get: you have to pick which of the billions of setups you want.
>> Software cannot alter its behavior to match your desires if you do not tell it what you desire
Why not have a special type of transaction that explicitly defines consistency expectations across a cluster? Make the default mirror all data across all nodes and require a lock across the entire cluster when inserting, updating, or deleting. You would then be free to relax those expectations and introduce sharding to improve performance as needed.
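A minimal sketch of what that might look like from the client side, in Python. The `Consistency` enum and `Cluster` class are invented for illustration; this is an in-memory toy, not a real replication protocol:

```python
from enum import Enum

class Consistency(Enum):
    # Default: mirror every write to all nodes before acknowledging.
    MIRROR_ALL = "mirror_all"
    # Relaxed: acknowledge once a majority of nodes has the write.
    QUORUM = "quorum"
    # Fastest: acknowledge once the local node alone has the write.
    LOCAL = "local"

class Cluster:
    """Toy in-memory cluster illustrating per-transaction consistency."""

    def __init__(self, node_names):
        self.nodes = {name: {} for name in node_names}

    def write(self, key, value, consistency=Consistency.MIRROR_ALL):
        """Apply a write and return how many nodes acknowledged it."""
        if consistency is Consistency.MIRROR_ALL:
            # The "cluster-wide lock" default: every node gets the write.
            for store in self.nodes.values():
                store[key] = value
            return len(self.nodes)
        if consistency is Consistency.QUORUM:
            quorum = len(self.nodes) // 2 + 1
            for store in list(self.nodes.values())[:quorum]:
                store[key] = value
            return quorum
        # LOCAL: write to a single node only.
        next(iter(self.nodes.values()))[key] = value
        return 1

cluster = Cluster(["a", "b", "c"])
print(cluster.write("user:1", "alice"))                  # 3 (all nodes)
print(cluster.write("cache:1", "x", Consistency.LOCAL))  # 1 (one node)
```

PostgreSQL does expose a per-transaction knob in this spirit: with synchronous standbys configured, `SET synchronous_commit = 'remote_apply'` makes a transaction wait for standbys to apply it, while `'local'` or `'off'` trades durability guarantees for speed.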
Yes, there are billions of different setups, but there are a few basic ones, and you could start by solving one or two of those before solving the generic case. For example, one could provide just a cluster setup for data replication: no fancy distributed data models, just copy the data around in the cluster; if a new node joins, copy the data to it; if something fails, handle the failure. PostgreSQL claims to have multi-master setups, so this is really about the node handling and copying the data around.
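The join/fail handling described above fits in a few lines of Python. The `ReplicaSet` class and its methods are invented for illustration; every node simply holds a full copy of the data:

```python
class ReplicaSet:
    """Toy full-replication cluster: every node holds a full data copy."""

    def __init__(self):
        self.nodes = {}  # node name -> that node's full copy of the data

    def write(self, key, value):
        # Mirror every write to every live node.
        for store in self.nodes.values():
            store[key] = value

    def join(self, name):
        # A joining node receives a full copy from any existing node
        # (or starts empty if it is the first node).
        source = next(iter(self.nodes.values()), {})
        self.nodes[name] = dict(source)

    def fail(self, name):
        # On failure, drop the node; the data survives on the other copies.
        self.nodes.pop(name, None)

rs = ReplicaSet()
rs.join("n1")
rs.write("k", "v")
rs.join("n2")   # n2 starts with a full copy, including "k"
rs.fail("n1")   # data is still available on n2
print(rs.nodes["n2"]["k"])  # v
```

The real work a production system does is exactly what this toy waves away: copying the data snapshot consistently while writes continue, and detecting failure reliably rather than being told about it.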
After that, you could introduce locally distributed cluster setups. Later on you could introduce geographically distributed cluster setups. But just because the latter is very complex does not mean that you can't start with the basic setup.