> Good thing Danish doesn't have A through F, or you wouldn't be able to stomach hexadecimal.
Hexadecimal requires being able to identify 'A', 'B' etc. as 'A', 'B' etc. It happens that they're assigned a different value/function than for their alphabetic use. The point is that there isn't another numerical symbol that is easily confusable with 'A'. (Of course there's l33t '4', but the two symbols are pretty distinct in most typefaces.)
tl;dr: being usable to represent a numeric value isn't the same thing as being easily confused with a different symbol that represents a different numeric value.
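To make that concrete, here's a minimal sketch of my own (Python, not anything from the thread): once a glyph has been recognized as 'A', mapping it to the value ten is pure convention.

    # The hard part is recognizing the glyph; the value assignment is convention.
    HEX_DIGIT_VALUES = {c: v for v, c in enumerate("0123456789ABCDEF")}

    def hex_value(digit: str) -> int:
        """Map a single recognized hex-digit character to its numeric value."""
        return HEX_DIGIT_VALUES[digit.upper()]

    assert hex_value("A") == 10   # same glyph as the letter, different role
    assert int("FF", 16) == 255   # Python's built-in applies the same mapping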
Of course, but I'm assuming anyone using hexadecimal understands what the glyphs represent when they're used as part of a hexadecimal number representation.
But the issue under discussion is being able to just do basic discrimination between characters – a necessary first step before associating them with their relevant value (e.g. associating hexadecimal 'A' with the value ten).
So the issue with, say, ∅ vs Ø, ø (where the first is a 'slashed zero' and the second two are the Scandinavian letter glyphs) is not the (in)ability to assign the correct value in the given context (e.g. alphabetic vs hexadecimal) but the ability to discriminate which symbol it is to begin with.
Now you're talking about Unicode issues, basically.
"zero" and "oh" are in fact the same symbol, just like A is the first letter of the Roman alphabet, and the number ten in hex, and just like a slashed circle is the empty set, the Scandinavian letter, or the slashed zero in computing.
In Unicode, we have multiple versions of some symbols which are dedicated to those specific uses; there is a dedicated empty set, a dedicated Scandinavian Ø and so on. ASCII already splits the O symbol into a dedicated digit 0 and letter O. Those being different codes allows font designers to play games with styling them differently.
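For instance (my illustration, not part of the original comment), the distinct code points are easy to list:

    import unicodedata

    # Visually similar round/slashed shapes, but each has its own code point,
    # which is what lets font designers style them differently.
    for ch in "0O∅Øø":
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

    # U+0030  DIGIT ZERO
    # U+004F  LATIN CAPITAL LETTER O
    # U+2205  EMPTY SET
    # U+00D8  LATIN CAPITAL LETTER O WITH STROKE
    # U+00F8  LATIN SMALL LETTER O WITH STROKE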
OP doesn't like the fact that in some fonts, the slashed computing zero has a stroke that extends past the boundaries, because then they experience an Ø (zero) vs Ø (letter) crisis. I say, it's exactly the same crisis as O (zero) vs O (letter); just suck it up like the rest of the world.
OP claims that it's "no longer a zero"; but according to whom? There is no standard that the slashed zero of computing must not have a slash which extends outside of the oval. The slashed zero is an ad hoc concoction in computing that fontographers style in their own way. The font I'm typing in now renders ASCII zero as having a dot in the middle. Maybe there is a script somewhere in the world where that looks like a letter; oops!
I agree there could be such a rule; whether a stroke touches or crosses another one is significant. These are totally different characters, after all: 由, 田.
In western writing, we don't fuss quite that much. For instance the horizontal stroke of A sometimes extends outside of the frame. The top may be squared off or round.
What do you mean? 0 'zero' and O 'oh' are not the same character, either historically or in terms of their encoding. The two have completely different origins: the Roman letter O 'oh' comes from Greek omicron, itself from Phoenician ʿayin, while the numeric character 0 'zero' comes originally from a dot notation in India (so dotted zeros, which you mention using and I also prefer, are nicely appropriate), which eventually 'widened' to a hollow shape, was adopted by the Arabs and transmitted by them to Europe. So this isn't at all an ASCII innovation: the symbols were already distinct long before they were encoded.
So 0 'zero' and O 'oh' are not the same symbol. A is really the same symbol, whether in alphabetic use or hexadecimal, but 'means' different things in different contexts. (Even the alphabetic 'meanings' of A/a are not very consistent in English, in terms of representing sounds, and are influenced by context, so that a in cat and a in father don't 'mean' the same thing, in terms of representing sounds.)
To be fair, OP probably isn't actually using the Scandinavian Ø in contexts where it would be confused with ∅ (slashed zero), though in theory I suppose some language might support it as part of variable names etc.
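Some languages do accept it; a quick sketch of my own, using Python 3 as one example (the letter Ø counts as a Unicode letter and so is allowed in identifiers, while the empty-set symbol U+2205 is not):

    # Ø (U+00D8) is in Unicode category Lu, so Python 3 accepts it in identifiers.
    Ø = 42
    print(Ø + 1)   # 43

    # ∅ = 42      # would be a SyntaxError: ∅ is a math symbol, not a letter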
But certainly I get irritated with fonts in which I (capital 'i') and l (lowercase 'L') look identical, as they occur in similar contexts and so increase cognitive load.
[Postscript edit: I was thinking that there is no visually similar character to a dotted zero, but then I remembered that the upper-case version of theta (θ) is Θ, which does look awfully like a dotted zero.]