ASCII doesn't make the U.S. special. ASCII is special because it's from the U.S.
Lots of people speak languages that trivially fit in 8 bits with no real "figuring out" to do. Before Unicode, we all had our different codepages or encodings. Including the U.S.
The U.S. is pretty central to computing. Because of that, and because ASCII only uses 7 bits, some other 8-bit cultures use it as a subset of their native 8-bit encodings. Even in the U.S., we use extensions to ASCII so we can represent text in languages that are close cousins of English. I doubt you actually use ASCII much. You've probably been using either ISO 8859-1 (aka Latin-1), which is a superset of ASCII, or Windows-1252, which is a superset of Latin-1: http://msdn.microsoft.com/en-us/library/cc194884.aspx
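To make the subset relationship concrete, here's a minimal Python sketch (the string and the set of encodings are just illustrative): an ASCII-only string comes out byte-identical under all of these encodings, and they only diverge outside the 7-bit range.

```python
# ASCII-only text encodes to identical bytes under all four encodings,
# because each one keeps the 7-bit ASCII range intact.
ascii_text = "Hello, world!"
encodings = ["ascii", "latin-1", "cp1252", "utf-8"]
byte_forms = {enc: ascii_text.encode(enc) for enc in encodings}
assert len(set(byte_forms.values())) == 1  # all four agree byte-for-byte

# A character outside ASCII is where they diverge: Latin-1 and
# Windows-1252 still agree on é (0xE9), but plain ASCII can't encode it.
accented = "café"
print(accented.encode("latin-1"))  # b'caf\xe9'
print(accented.encode("cp1252"))   # b'caf\xe9'
try:
    accented.encode("ascii")
except UnicodeEncodeError as err:
    print(err)  # 'ascii' codec can't encode character '\xe9' ...
```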
This mess of incompatible codepages and culture-specific encodings is one of the main problems Unicode was invented to solve. It also happens to help languages that need more than 8 bits.
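A short Python sketch of what that mess looked like in practice, with three common codepages picked as examples: the byte is the same, but which character it represents depends entirely on which encoding you guess.

```python
# One byte, three readings. Which character 0xE9 "is" depends
# entirely on which codepage you assume.
raw = b"\xe9"
for codepage in ["latin-1", "cp1251", "koi8-r"]:
    print(codepage, "->", raw.decode(codepage))
# latin-1 -> é   (Western European)
# cp1251  -> й   (Cyrillic, Windows)
# koi8-r  -> И   (Cyrillic, Unix)
```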
Many languages fit into 8 bits, but English is particularly simple in its alphabet. Even many of the European languages that can fit in 8 bits have things like accented characters that complicate things somewhat.
Of course this isn't to say English is simple overall. Just that its complexities lie elsewhere, and its simplicities lie in an area that made it particularly easy for early computer systems to process.
> Even many of the European languages that can fit in 8 bits have things like accented characters that complicate things somewhat.
I don't see your point here, with respect to English orthography making computer implementation easier. How exactly does not needing representations for accented characters make anything easier?
If it were just some additional characters like ñ (which is considered a letter in its own right, not an accented n), then it wouldn't be a big deal. But e and é are the same letter with different accents, which adds a subtlety that English simply doesn't have. Given a small enough number of accented characters you can punt on that and call each one its own character, but English is objectively simpler, since the only real distinction it makes between letters is caps or not-caps. (I was just watching the Mother Of All Demos, though, and everything was in caps, with an overline to mark capital letters. So even normal English lettering was too complicated for a while.)
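Here's a minimal Python sketch of that subtlety: the same é can be stored as one precomposed code point or as a base letter plus a combining accent, and the two only compare equal after normalization.

```python
import unicodedata

# é can be one code point (precomposed U+00E9) or two (e + combining
# acute U+0301). The strings render identically but compare unequal
# until they are normalized to a common form.
precomposed = "\u00e9"   # 'é' as a single code point
decomposed = "e\u0301"   # 'e' followed by a combining acute accent
print(precomposed == decomposed)                                # False
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```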
English has fewer characters (no separate code point is needed for each accented form, which might otherwise push past 8 bits) and no variable-width characters. Capitalization rules are also trivial, as the sketch below shows.
Not that I'm claiming English is unique here, just convenient, and many languages can't claim that.
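To show just how trivial, a small Python sketch (the function name is mine): ASCII upper- and lowercase letters differ only in bit 5 (0x20), so case conversion is a single bit operation. No locale tables, no special cases.

```python
# ASCII case conversion as a single bit flip: clearing bit 5 (0x20)
# of a lowercase letter yields its uppercase counterpart.
def ascii_upper(text: str) -> str:
    return "".join(
        chr(ord(c) & ~0x20) if "a" <= c <= "z" else c
        for c in text
    )

print(ascii_upper("Hello, world!"))  # HELLO, WORLD!
print(ord("a") ^ ord("A"))           # 32, i.e. 0x20
```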