
The author of Joda-Time actually thinks that even Joda-Time didn't get it quite right, and believes the java.time libraries in Java 8 and above (aka JSR-310[1]) are better than Joda-Time: https://blog.joda.org/2009/11/why-jsr-310-isn-joda-time_4941...

It turns out that abstractions for time are really hard to get right.

[1] https://jcp.org/en/jsr/detail?id=310
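
For a concrete feel of the JSR-310 style, here's a minimal sketch (all standard java.time API, nothing hypothetical): values are immutable, and time-zone conversions are explicit.

    import java.time.ZonedDateTime;
    import java.time.ZoneId;
    import java.time.format.DateTimeFormatter;

    public class Jsr310Demo {
        public static void main(String[] args) {
            // java.time values are immutable; every "mutation" returns a new object.
            ZonedDateTime tokyo = ZonedDateTime.now(ZoneId.of("Asia/Tokyo"));

            // Same instant, different wall-clock time.
            ZonedDateTime newYork = tokyo.withZoneSameInstant(ZoneId.of("America/New_York"));

            System.out.println(tokyo.format(DateTimeFormatter.ISO_ZONED_DATE_TIME));
            System.out.println(newYork.format(DateTimeFormatter.ISO_ZONED_DATE_TIME));
        }
    }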




Props to him for not having an ego about it. The Joda-Time project even recommends using JSR-310 for new development.


The author of Joda was the primary person responsible for the new Java Date and Time API (JSR-310).


Date & Time need to be baked into the operating system so it only has to be gotten right once, and then every programming system benefits.


So long as they actually get it "right". Compare Windows' APIs, which originally took UCS-2, then UTF-16, when now we would all rather be using UTF-8.


Note that UTF-8 hadn't actually been invented at the time UCS-2 was implemented in Windows: https://unascribed.com/b/2019-08-02-the-tragedy-of-ucs2.html


In fairness to Windows and Java, they weren't wrong. There was no UTF-16; rather, UCS-2 was the accepted standard, because the plan for Unicode was to encompass languages in use, not emoji and historical languages. That changed, and we're stuck with that legacy.
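
That legacy is easy to demonstrate in Java, which made the same UCS-2-era bet and still exposes UTF-16 code units through String. A small sketch (the emoji is just an arbitrary non-BMP character):

    import java.nio.charset.StandardCharsets;

    public class Utf16Legacy {
        public static void main(String[] args) {
            String s = "\uD83D\uDE00"; // U+1F600 GRINNING FACE, outside the BMP

            System.out.println(s.length());                      // 2 -- UTF-16 code units
            System.out.println(s.codePointCount(0, s.length())); // 1 -- actual code point
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 4 bytes in UTF-8
        }
    }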


Even without emoji, mashing up Chinese, Japanese, and Korean to fit in 21k was never going to happen. It’s sort of like asking Danes to stop spelling their names correctly because we can't afford the extra codepoint for “å”.

https://en.wikipedia.org/wiki/Han_unification#Rationale_and_...


What I don't understand is how they miscounted so badly. Even ignoring Han unification, Chinese by itself uses more than 65k characters.

Or was there an intent to not encode some of these rarer characters? I haven't been able to find any info.


They didn't think they knew enough about CJKV to make that call, so they asked pre-eminent scholars from major Chinese, Japanese, and Korean universities, who replied that 21k was going to be enough.

The reason they thought they could fit Chinese into so few characters was that, at the time, the CPC supported academics who wanted to reform Chinese towards fewer characters. Not long after, views about traditional scholarship changed, and now the goal is to promote maintaining more of the traditional characters.

The reason they thought Han unification would work was that they asked during a short period of rapprochement in Sino-Japanese relations, when it was good politics for Chinese scholars to co-operate with Japanese ones. Very soon after, this changed.


The current policy seems to be to add characters to Unicode even if nobody uses them any more. Is that a complete flip from what it used to be? Because I feel like that's the only way character reform would have mattered.


Yes. The current policy is made possible by the adoption of UTF-16 and UTF-8, which expanded the number of representable code points from 64k to over 1M, of which only about 10% are in use.
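
The arithmetic behind that expansion, sketched in Java for concreteness (the ~10% figure is from the comment above; the rest is just UTF-16 code-space math):

    public class CodeSpace {
        public static void main(String[] args) {
            int bmp = 1 << 16;        // 65,536 code points: all UCS-2 could address
            int planes = 17;          // BMP + 16 supplementary planes
            int total = planes * bmp; // 1,114,112 code points
            int surrogates = 0x800;   // 2,048 reserved so UTF-16 pairs can work

            System.out.println(total);              // 1114112
            System.out.println(total - surrogates); // 1112064 usable scalar values

            // A UTF-16 surrogate pair recovers a supplementary code point:
            char hi = '\uD83D', lo = '\uDE00';
            int cp = 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00);
            System.out.printf("U+%X%n", cp);        // U+1F600
        }
    }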


That solves the problem for one computer, but I can't see how it would solve it for networks and databases, given a computing environment that will continue to evolve and foil compatibility.

But a time protocol that receives adoption as wide as TCP/IP's might suffice.


How much does that actually help? Some things are much easier when system calls return sane values, but the standard library of a programming language needs to work as best it can on many platforms.



