In fairness to Windows and Java, they weren't wrong. UTF-16 didn't exist yet; UCS-2 was the accepted standard, because the plan for Unicode was to encompass languages in current use, not emoji and historical languages. That changed, and we're stuck with the legacy.
Even without emoji, mashing Chinese, Japanese, and Korean together to fit in 21k characters was never going to work. It’s a bit like asking Danes to stop spelling their names correctly because we can't afford the extra code point for “å”.
They didn't think they knew enough about CJKV to make that call themselves, so they asked pre-eminent scholars from major Chinese, Japanese, and Korean universities, who replied that 21k would be enough.
The reason they thought Chinese could fit in so few characters was that at the time, the CPC supported academics who wanted to reform written Chinese towards fewer characters. Not long after, attitudes towards traditional scholarship changed, and the policy now is to promote maintaining more of the traditional characters.
The reason they thought Han unification would work was that they asked during a short period of rapprochement in Sino-Japanese relations. At the time it was good politics for Chinese scholars to co-operate with Japanese ones; that changed very soon afterwards.
The current policy seems to be to add characters to Unicode even if nobody uses them any more. Is that a complete flip from what it used to be? Because I feel like that's the only way character reform would have mattered.
Yes. The current policy is made possible by the adoption of UTF-16 and UTF-8, which expanded the number of representable code points from 64k (UCS-2's limit) to just over 1.1M, of which only about 10% are assigned.
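A minimal sketch of those numbers in Python (the two example characters are arbitrary; any code point above U+FFFF shows the same effect):

```python
# UCS-2 could only address 2**16 = 65,536 code points;
# UTF-16 surrogate pairs raise that to 0x110000 = 1,114,112.
assert 2**16 == 65_536
assert 0x110000 == 1_114_112

bmp_char = "å"      # U+00E5, inside the old 64k Basic Multilingual Plane
astral_char = "𝄞"   # U+1D11E (musical G clef), outside the BMP

# In UTF-16 the BMP character takes one 16-bit code unit, while the astral
# character takes two (a surrogate pair) -- which is why such characters
# count as length 2 in UTF-16-based APIs like Java's String.length().
assert len(bmp_char.encode("utf-16-le")) == 2     # one 16-bit unit
assert len(astral_char.encode("utf-16-le")) == 4  # surrogate pair
```

The surrogate-pair mechanism is what lets UTF-16 stay compatible with old UCS-2 data while still reaching the full code space.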
That solves the problem for one computer, but I can't see how it would solve it for networks and databases, given a computing environment that will continue to evolve and foil compatibility.
But a time protocol that receives adoption as wide as TCP/IP's might suffice.
How much does that actually help? Some things are much easier when system calls return sane values, but the standard library of a programming language needs to work as best it can on many platforms.
It turns out that abstractions for time are really hard to get right [1].
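One classic pitfall, sketched in Python's standard library rather than JSR-310 since the problem is language-independent (the date and zone are just an illustration: clocks in Europe/Copenhagen jumped from 02:00 to 03:00 on 2021-03-28):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

tz = ZoneInfo("Europe/Copenhagen")
before = datetime(2021, 3, 28, 1, 0, tzinfo=tz)  # 01:00 CET (+01:00)
later = before + timedelta(hours=3)              # 04:00 CEST (+02:00)

# Subtracting two datetimes that share a tzinfo compares wall-clock time...
assert later - before == timedelta(hours=3)

# ...but only two hours of real time elapsed, because of the DST jump.
utc = timezone.utc
assert later.astimezone(utc) - before.astimezone(utc) == timedelta(hours=2)
```

"Three hours later" and "three hours of elapsed time" are different questions, and an API has to let you ask each one deliberately; conflating them is exactly the kind of mistake JSR-310 was designed to make harder.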
[1] https://jcp.org/en/jsr/detail?id=310