Depends on what you define as "file I/O", though. NTFS filenames are UTF-16 (or rather UCS-2). As for file contents, there isn't really a standard, but FWIW, for a long time most Windows apps - Notepad being the canonical example - would save anything as UTF-16 when asked to save it as "Unicode".
I'm talking about the default behavior of Microsoft's C runtime (MSVCRT.DLL) that everyone is/was using.
UTF-16 text files are rather rare, as is using Notepad's UTF-16 options. The only semi-common use I know of is *.reg files saved from regedit. One issue with UTF-16 is that it has two different serializations (BE and LE), and hence generally requires a BOM to disambiguate.
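For illustration, a minimal sketch of that disambiguation - a hypothetical helper (not from any of the tools mentioned above) that just inspects the first two bytes:

```c
#include <stdio.h>

/* Return a label for the UTF-16 byte order indicated by a leading BOM,
   rewinding the stream if none is found. Illustrative helper only. */
static const char *utf16_bom(FILE *f)
{
    unsigned char b[2];
    if (fread(b, 1, 2, f) == 2) {
        if (b[0] == 0xFF && b[1] == 0xFE) return "UTF-16LE"; /* bytes FF FE */
        if (b[0] == 0xFE && b[1] == 0xFF) return "UTF-16BE"; /* bytes FE FF */
    }
    rewind(f);  /* no BOM: the byte order has to be guessed or assumed */
    return "unknown";
}
```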
Then you're talking about the C stdlib, which, yeah, is meant to use the locale-specific encoding on any platform, so it's not really a Windows thing specifically. But even then, someone could use the CRT but call _wfopen() rather than fopen() etc. - this was actually not uncommon for Windows software, precisely because it let you handle Unicode without having to work with the Win32 API directly.
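A minimal sketch of that pattern, assuming the MSVC CRT on Windows (the filename is made up):

```c
/* _wfopen() takes a UTF-16 filename, so a program can create files with
   non-ASCII names through the CRT alone, without calling CreateFileW or
   other Win32 APIs directly. Windows/MSVC CRT only. */
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* "\u00FC" is 'ü' - a non-ASCII character in the path */
    FILE *f = _wfopen(L"\u00FCber.txt", L"w");  /* wide (UTF-16) path and mode */
    if (f) {
        fputs("hello\n", f);
        fclose(f);
    }
    return 0;
}
```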
Microsoft's implementation of fopen() also supports "ccs=..." to open text files with a Unicode encoding, and interestingly "ccs=UNICODE" will get you UTF-16LE, not UTF-8 (though you can do "ccs=UTF-8"). .NET has the same naming quirk: Encoding.Unicode is UTF-16, although there at least UTF-8 is the default for all text I/O classes like StreamReader if you don't specify an encoding. Still, many people didn't know better, so some early .NET software used UTF-16 for text I/O for no reason other than its developers believing that Encoding.Unicode was obviously what they were supposed to use to "support Unicode", and passing it explicitly everywhere.
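A small sketch of that ccs= behavior, assuming the MSVC CRT (file names made up; streams opened with ccs= are in Unicode translation mode, hence the wide-character writes):

```c
/* MSVC-specific fopen() mode extension, Windows only.
   "ccs=UNICODE" writes UTF-16LE with a BOM; UTF-8 has to be asked for by name. */
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    FILE *f16 = fopen("unicode.txt", "w, ccs=UNICODE");  /* actually UTF-16LE */
    if (f16) {
        fwprintf(f16, L"saved as UTF-16LE\n");
        fclose(f16);
    }

    FILE *f8 = fopen("utf8.txt", "w, ccs=UTF-8");         /* UTF-8 on request */
    if (f8) {
        fwprintf(f8, L"saved as UTF-8\n");
        fclose(f8);
    }
    return 0;
}
```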
You can import/export only to Google Authenticator, and you must have two phones.
You cannot back up the QR codes, because taking screenshots is blocked for security reasons.
You cannot migrate to another application.
Hey, thanks for taking the time to find the resource.
I have used this before; it works to get past the errors, but it doesn't actually solve the issue GraphQL tries to solve. It just hides it, so you have to deal with it months later.
- Consumers don't know what the JSON looks like unless they test the query or are explicitly told what it returns. This means the schema definition doesn't capture the problem that GraphQL tries to solve: "describe the data that you want".
- Also, some languages other than JavaScript don't have GraphqlJsonScalar.
I think that supporting dictionaries/maps/tables as part of the GraphQL language spec could have been possible, since the key and value types are static. They are also iterable, so it should be fairly straightforward for a consumer to deal with the data returned.
SDF (or MSDF) isn't the future; it's already the "good enough" classic.
> This works, but performance tanks hard, as we solve every bezier curve segment per pixel
This is "the future" or even present as used in Slug and DirectWrite with great performance
https://sluglibrary.com/
https://learn.microsoft.com/en-us/windows/win32/directwrite/...
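For contrast with the per-pixel curve solving quoted above, here's a rough sketch of why the classic SDF path is cheap: one distance-field sample plus a smoothstep around the 0.5 iso-level. Written in plain C for illustration (in practice this lives in a pixel shader); the function name and the [0,1] distance convention are assumptions, not anyone's actual API.

```c
/* Clamp helper. */
static float clampf(float x, float lo, float hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Map a sampled distance value (0.5 = glyph edge) to a coverage/alpha value,
   smoothing over a transition band of width `aa` in distance units. */
float sdf_coverage(float dist, float aa)
{
    float t = clampf((dist - 0.5f) / aa + 0.5f, 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);  /* smoothstep(0, 1, t) */
}
```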