
A good summary, but for one important detail: in UTF-16, some code points (lying on the so-called "astral planes", i.e. not on the Basic Multilingual Plane) take 32 bits.

Emoji, for example, lie on the first of those higher planes: πŸ’πŸŽ„πŸ°πŸš΄. Firefox and Safari display them properly, Chrome doesn't; no idea about IE and Opera.

UCS-2 is a strict 16-bit encoding (a subset of UTF-16), and it cannot represent all characters.

It is the encoding used by JavaScript, which can be problematic when characters outside the BMP are used. For example, `"πŸ™πŸšπŸ›πŸœπŸπŸžπŸŸ".length` is 14 even though there are only seven characters, and you could slice that string in the middle of a character.
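To make that concrete, a minimal sketch of how this plays out in practice (the particular emoji doesn't matter, any non-BMP character behaves the same way):

    var s = "πŸ’";                 // one code point, but two 16-bit code units
    console.log(s.length);        // 2
    console.log(s.slice(0, 1));   // a lone surrogate, not a displayable character

`charAt`, `charCodeAt`, indexing, `slice` and so on all work on those 16-bit units, not on whole characters.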




This is a bit pedantic, but JS implementations don't necessarily use UCS-2 as the internal encoding. The issue is that the spec requires characters to be exposed to programs as 16-bit values. See here: http://mathiasbynens.be/notes/javascript-encoding
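For example, whatever the engine stores internally, a program only ever sees the 16-bit units (a sketch; U+1F680 is just a convenient non-BMP code point):

    var rocket = "\uD83D\uDE80";           // U+1F680 written as a surrogate pair
    rocket.charCodeAt(0).toString(16);     // "d83d" -- high surrogate
    rocket.charCodeAt(1).toString(16);     // "de80" -- low surrogate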


From my understanding, it's not that Chrome doesn't display emoji because of poor Unicode handling; it's a difference in the (font?) renderer used.

See http://apple.stackexchange.com/questions/41228/why-do-emoji-... and https://code.google.com/p/chromium/issues/detail?id=90177


It should be noted that at the time JavaScript was developed, there was only the basic multilingual plane - the extension was only introduced with Unicode 2.0 in 1996.


I wonder how hard it'd be to get JavaScript/ECMAScript onto a better encoding... Do we actually have a "better" encoding?


Depends what you mean by "better". UTF-8 generally ends up using fewer bytes to represent the same string than UTF-16, unless you're using certain characters a lot (e.g. for Asian languages), so it's a candidate, but it's not like you could just flip a switch and make all JavaScript use UTF-8.


I think the size issue is a red herring. UTF-8 wins some, UTF-16 wins others, but either encoding is acceptable. There is no clear winner here so we should look at other properties.

UTF-8 is more reliable, because mishandling variable-length characters is more obvious. In UTF-16 it's easy to write something that works with the BMP and call it good enough. Even worse, you may not even know it fails above the BMP, because those characters are so rare you might never test with them. But in UTF-8, if you screw up multi-byte characters, any non-ASCII character will trigger the bug, and you will fix your code more quickly.
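A sketch of the classic mistake (treating each code unit as one character); the bytes below are just "cafΓ©" in the two encodings:

    // "cafΓ©" as raw UTF-8 bytes, decoded one byte per character:
    String.fromCharCode(0x63, 0x61, 0x66, 0xC3, 0xA9);   // "cafÃ©" -- wrong, and visibly so

    // "cafΓ©" as UTF-16 code units, decoded one unit per character:
    String.fromCharCode(0x63, 0x61, 0x66, 0xE9);          // "cafΓ©" -- looks fine...
    // ...and keeps looking fine until a character outside the BMP shows up.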

Also, UTF-8 does not suffer from endianness issues like UTF-16 does. Few people use the BOM and no one likes it. And most importantly, UTF-8 is compatible with ASCII.


There is absolutely no situation in which UTF-16 wins over UTF-8, because of the surrogate pairs required. That makes both encodings variable length.

UTF-32 is probably what you're thinking of.


I know that both encodings are variable-length. That is the issue I am trying to address.

My point is that in UTF-16 it's too easy to ignore surrogate pairs. Lots of UTF-16 software fails to handle variable-length characters because they are so rare. But in UTF-8 you can't ignore multi-byte characters without obvious bugs. These bugs are noticed and fixed more quickly than UTF-16 surrogate pair bugs. This makes UTF-8 more reliable.

I am not sure why you think I am advocating UTF-16. I said almost nothing good about it.


Bugs in UTF-8 handling of multibyte sequences need not be obvious. Google "CAPEC-80."

UTF-16 has an advantage in that there are fewer failure modes, and fewer ways for a string to be invalid.

edit: As for surrogate pairs, this is an issue, but I think it's overstated. A naΓ―ve program may accidentally split a UTF-16 surrogate pair, but that same program is just as liable to accidentally split a decomposed character sequence in UTF-8. You have to deal with those issues regardless of encoding.
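For instance (a sketch; this "Γ©" is the decomposed form, a plain "e" followed by U+0301 COMBINING ACUTE ACCENT, and the same thing happens whatever the underlying encoding is):

    var s = "e\u0301";   // decomposed "Γ©": base letter + combining accent
    s.length;            // 2, even though it renders as a single character
    s.slice(0, 1);       // "e" -- the accent is silently dropped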


> A naΓ―ve program may accidentally split a UTF-16 surrogate pair, but that same program is just as liable to accidentally split a decomposed character sequence in UTF-8. You have to deal with those issues regardless of encoding.

The point is that using UTF-8 makes these issues more obvious. Most programmers these days think to test with non-ASCII characters. Fewer think to test with astral characters.


Anything in the range U+0800 to U+FFFF takes three bytes per character in UTF-8 and two in UTF-16 (http://en.wikipedia.org/wiki/Comparison_of_Unicode_encodings...):

"Therefore if there are more characters in the range U+0000 to U+007F than there are in the range U+0800 to U+FFFF then UTF-8 is more efficient, while if there are fewer then UTF-16 is more efficient. "

That same page also states: "A surprising result is that real-world documents written in languages that use characters only in the high range are still often shorter in UTF-8, due to the extensive use of spaces, digits, newlines, html markup, and embedded English words", but I think the [citation needed] is rightfully added there (it may be close in many texts, though).
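A quick way to see the per-character difference in Node (a sketch; Buffer.byteLength is Node-specific, and its 'ucs2' encoding really means UTF-16LE):

    var s = "ζ—₯本θͺžγ?γƒ†γ‚­γ‚Ήγƒˆ";           // eight characters, all in U+0800..U+FFFF
    Buffer.byteLength(s, 'utf8');   // 24 -- three bytes each
    Buffer.byteLength(s, 'ucs2');   // 16 -- two bytes each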


UTF-8 is variable length in that it can be anywhere from 1 to 4 bytes, while UTF-16 can either be 2 or 4. That makes a UTF-16 decoder/encoder half as complex as a UTF-8 one.
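For comparison, the length-detection step of each decoder might look roughly like this (a sketch only; a full decoder also has to validate continuation bytes, surrogate ordering, and so on):

    // How many code units does the sequence starting here occupy?
    function utf8SeqLen(leadByte) {
      if (leadByte < 0x80) return 1;            // 0xxxxxxx
      if ((leadByte & 0xE0) === 0xC0) return 2; // 110xxxxx
      if ((leadByte & 0xF0) === 0xE0) return 3; // 1110xxxx
      if ((leadByte & 0xF8) === 0xF0) return 4; // 11110xxx
      throw new Error("invalid UTF-8 lead byte");
    }

    function utf16SeqLen(leadUnit) {
      // one check: is this 16-bit unit a high surrogate?
      return (leadUnit >= 0xD800 && leadUnit <= 0xDBFF) ? 2 : 1;
    }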


Surrogate pairs are way more complex than anything in UTF-8.


> Even worse, you may not even know it fails above the BMP, because those characters are so rare you might never test with them.

I don't think this is too relevant because anyone who claims to know UTF-16 should know about the surrogates. And if you are handling mostly Asian text (which is where UTF-16 is more likely to be chosen), then those high characters become a lot more common.


UTF-8 has its own unique issues, like non-shortest forms and invalid code units, that you are even less likely to encounter in the wild. Bugs in handling of these have enabled security exploits in the past.
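The textbook case is the overlong encoding of "/" (the bytes 0xC0 0xAF, the CAPEC-80 example mentioned above): a decoder that computes the value without rejecting non-shortest forms will turn it back into a slash after any "../" filtering has already run. A sketch of the buggy behaviour, next to a strict decoder where TextDecoder is available:

    // Buggy: decodes a 2-byte sequence without checking for overlong forms.
    function naiveDecode2(b1, b2) {
      return String.fromCharCode(((b1 & 0x1F) << 6) | (b2 & 0x3F));
    }
    naiveDecode2(0xC0, 0xAF);   // "/" -- the overlong form slips through

    // Strict: a conforming decoder rejects the same bytes outright.
    new TextDecoder("utf-8", { fatal: true })
      .decode(new Uint8Array([0xC0, 0xAF]));   // throws TypeError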


🚴 isn't displayed correctly in my Firefox, though all the other characters you mention are.

FF 26 on Win 7.


Emoji are even more fun than that. Some of them take two Unicode code points.
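For example, the flag emoji are each built from two "regional indicator" code points (a sketch; whether it renders as a flag at all depends on the font):

    var flag = "\uD83C\uDDFA\uD83C\uDDF8";   // U+1F1FA + U+1F1F8, i.e. "πŸ‡ΊπŸ‡Έ"
    flag.length;                             // 4 -- two code points, each a surrogate pair in JS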



