If you are using random nonces, segregating the nonce space to reserve a dedicated 32-bit block counter actually yields worse security bounds. The whole point of that segregation is to avoid collisions when using a deterministic nonce such as a counter. Unless your messages approach the size limit, randomizing the whole 128-bit nonce provides much better bounds than standard GCM - effectively the same as randomized CTR mode (i.e. nearer to a 2^48 message limit).
So yes, using GCM with a 128-bit random nonce is already good enough for most of these cases.
However IMO all of this is a distraction anyway. One of the most devastating real-world attacks involving nonce reuse was KRACK, and that involved a protocol error allowing the attacker to force nonce reuse. No amount of extra-large random nonces would have saved you from that. (And using random nonces in such a protocol significantly bloats the wire format.)
What we really need to do is move away from hugely fragile polynomial MACs. For 90%+ of use cases a more robust PRF is perfectly performant enough - e.g. note that the impact of KRACK was less severe against CCM than GCM. Heck, even CTR/CBC+HMAC is perfectly fast enough for many use cases. Stop with the premature optimisation already.