As somebody living in Europe, I think it's a perspective you can have only if you live mostly in an English speaking world.
Up to 2015, my laptop was plagued with Python software (and others) crashing because they didn't handle unicode properly. Reading from my "Vidéos" directory, entering my first name, dealing with the "février" month in date parsing...
For the USA, things just work most of the time. For us, it's a nightmare. It's even worse because most software is produced by English-speaking people who have your attitude, and are completely oblivious to the crashing bugs they introduce on a daily basis.
In Asia it's even more terrible.
And I've heard people say that you can perfectly well write unicode-aware software in Python 2. Yes, you can, just like you can write memory-safe code in C.
In fact, just teaching Python 2 is painful in Europe. A student writes their name in a comment? It crashes if you forget the encoding header. Using raw_input() with unicode to ask a question? Crashes. Reading bytes from an image file? You get back a string object. Got garbage bytes in a string object? They silently concatenate with a valid string and produce garbage.
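That last, silent-concatenation failure is exactly what Python 3's bytes/str split eliminates. A minimal sketch of the contrast (the sample bytes are made up):

```python
# In Python 3, bytes and str are distinct types, so mixing them raises
# a TypeError instead of silently concatenating into garbage.
header = b"\xff\xd8\xff"   # made-up raw bytes, e.g. read from an image file
caption = "février"        # text: str is unicode in Python 3

try:
    broken = caption + header   # Python 2 would concatenate silently
except TypeError:
    broken = None               # Python 3 refuses loudly

assert broken is None
```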
> As somebody living in Europe, I think it's a perspective you can have only if you live mostly in an English speaking world.
I live in Europe and I (mostly) agree that (most) code shouldn't (usually) contain any codepoint greater than 127.
It's a simple matter of reducing the surface area of possible problems. Code needs to work across machines and platforms, and it's basically guaranteed that someone, somewhere is going to screw up the encoding. I know it shouldn't happen, but it will happen, and ASCII mostly works around that problem.
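A minimal sketch of that screw-up, assuming the usual mix-up of UTF-8 bytes read back as Latin-1:

```python
# UTF-8 bytes mistakenly decoded as Latin-1: the accented character
# turns into mojibake, while the pure-ASCII part survives intact.
name = "Vidéos"
garbled = name.encode("utf-8").decode("latin-1")
assert garbled == "VidÃ©os"   # the é is mangled

# The all-ASCII spelling round-trips unchanged through the same mistake.
assert "Videos".encode("utf-8").decode("latin-1") == "Videos"
```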
Another issue is readability. I know ASCII by heart, but I can't memorize Unicode. If identifiers were allowed to contain random characters, it would make my job harder for basically no reason.
Furthermore, the entire ASCII charset (or at least the subset of ASCII that's relevant for source code) can easily be typed from a keyboard. Almost everyone I know uses the US layout for programming because European ones frankly just suck, and that means typing accents, diacritics, emoji or what have you is harder, for no real discernible reason.
String handling is a different issue, and I agree that native functions for a modern language should be able to handle utf8 without crashing. The above only applies to the literal content of source file.
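For instance, a quick sketch of what handling UTF-8 natively buys you in Python 3, where str operations work per code point rather than per byte:

```python
# str operations in Python 3 work on code points, not bytes.
s = "Joyeux Noël"
assert len(s) == 11                    # 11 code points...
assert len(s.encode("utf-8")) == 12    # ...but 12 bytes in UTF-8
assert s.upper() == "JOYEUX NOËL"      # case mapping knows about ë
```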
I understand not wanting utf8 in identifiers, since you type them often and want the lowest common keyboard-layout denominator, but comments and hardcoded strings? Certainly not.
Otherwise, Chinese coders can't put their names in comments? Or do they need to invent ASCII names for themselves, like we forced them to do in the past for immigration?
And no hardcoded value for any string in scripts, then? Names, cities, user messages and labels, all in config files, even for the smallest quick-and-dirty script, because you can't type "Joyeux Noël" in a string in your file? I can't hardcode my "Téléchargements" folder when doing some batch work in my shell?
Do I have to write all my monetary symbols with escape sequences? Keep a table of constants filled with EURO_SIGN = b'\xe2\x82\xac' instead of using "€"? Same for emoji?
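For what it's worth, all three spellings denote the same character; a quick sketch in Python 3:

```python
# Raw UTF-8 bytes, a codepoint escape, and the literal all denote
# the same character.
EURO_FROM_BYTES = b"\xe2\x82\xac".decode("utf-8")
EURO_FROM_ESCAPE = "\u20ac"
EURO_LITERAL = "€"
assert EURO_FROM_BYTES == EURO_FROM_ESCAPE == EURO_LITERAL
```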
I disagree, and I'm certainly glad that not only does Python 3 allow this, but it does so in a way that is actually stable and safe. I rarely have encoding issues nowadays, and when I do, it's always because something outside is doing something fishy.
If we're still talking about Python 3, you can also write,
EURO_SIGN = '\N{EURO SIGN}'
Which is ASCII, and very clear about which character the escape is. I wish more languages had \N. That said, I'm also fine with a coder choosing to write a literal "€" — that's just as clear, to me.
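A short sketch of the round trip, using the stdlib unicodedata module to recover the name at runtime:

```python
import unicodedata

# \N{...} resolves a character by its official Unicode name at
# compile time; unicodedata.name() is the runtime inverse.
EURO_SIGN = "\N{EURO SIGN}"
assert EURO_SIGN == "\u20ac"
assert unicodedata.name(EURO_SIGN) == "EURO SIGN"
```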
While I don't believe the \N escape is backwards-compatible with Python 2, explicitly declaring the file encoding is, and that way you can have a UTF-8 encoded source file with Python 2.
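A minimal sketch of such a file (the folder name is just an example); the u'' literal form is accepted by Python 2 and 3 alike:

```python
# -*- coding: utf-8 -*-
# With the encoding header above, this file is valid Python 2; the u''
# literal is also accepted by Python 3, so the same source runs on both.
FOLDER = u"Téléchargements"
assert len(FOLDER) == 15   # counted as text, not as raw bytes
```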
I'd also note the box drawing characters are extremely useful for diagrams in comments.
(And… it's 2021. If someone's tooling can't handle UTF-8, it's time for them to get with the program, not for us to waste time catering to them. There's enough real work to be done as it is…)
You can do Unicode in Python 2. You can do Unicode faster and easier in Python 3. But by gaining that ability, they set the existing community back by 10 years.