Being very familiar with Asm and how small programs can be, and moderately familiar with TLS and some crypto, I'm reasonably confident that 64k of code and 64k of data is sufficient for a client TLS 1.2+1.3 implementation with enough ciphersuites to access 99% of all websites. Each connection will need at least a 16k input record buffer along with the cipher states for both directions, although you don't need to always send full-sized records, so the output buffer can be smaller. I haven't looked into the specs in detail to see whether a smaller input buffer would be possible even for 16k records, if you're willing to sacrifice some security in the form of delayed record integrity checking (i.e. you can stream the data of a record through without having read the whole thing, but if the record has been modified in transit, you'll only get the error once you've already consumed the corrupted data, since the authenticity comparison can't finish until you reach the end of the record).
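For illustration, here's a rough C sketch of that delayed-integrity streaming idea. The cipher and MAC below are trivial placeholders (a real implementation would use the negotiated AEAD, e.g. AES-GCM or ChaCha20-Poly1305); the point is only the data flow: plaintext chunks reach the application before the record's tag has been checked, so tampering is detected only at the end of the record.

    /* Sketch of streaming a record's payload in small chunks instead of
     * buffering the full 16K record, deferring the integrity check to the
     * end.  The crypto is a stand-in (XOR keystream + additive checksum),
     * NOT a real AEAD. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define CHUNK  512    /* working buffer, far smaller than 16K */
    #define TAGLEN 8      /* placeholder tag size                 */

    /* placeholder keystream byte: real code would use the record cipher */
    static uint8_t keystream(uint32_t pos) { return (uint8_t)(pos * 151u + 7u); }

    typedef struct {
        uint64_t mac;     /* placeholder running MAC accumulator    */
        uint32_t pos;     /* bytes processed so far in this record  */
    } rec_state;

    /* Decrypt and deliver one chunk; integrity is NOT yet known. */
    static void stream_chunk(rec_state *st, const uint8_t *ct, size_t n,
                             void (*deliver)(const uint8_t *, size_t))
    {
        uint8_t pt[CHUNK];
        if (n > CHUNK) n = CHUNK;
        for (size_t i = 0; i < n; i++) {
            pt[i] = ct[i] ^ keystream(st->pos);
            st->mac = st->mac * 1099511628211ull + ct[i]; /* accumulate over ciphertext */
            st->pos++;
        }
        deliver(pt, n);   /* application sees data before the tag check */
    }

    /* End of record: compare the accumulated MAC with the transmitted tag. */
    static int finish_record(const rec_state *st, const uint8_t tag[TAGLEN])
    {
        uint8_t calc[TAGLEN];
        uint64_t m = st->mac;
        for (int i = 0; i < TAGLEN; i++) { calc[i] = (uint8_t)m; m >>= 8; }
        return memcmp(calc, tag, TAGLEN) == 0;  /* 0 = record was tampered with */
    }

Whether that trade-off is acceptable depends on whether the application can tolerate having consumed data that is only later declared invalid.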
EdDSA as used in TLS 1.3 imposes a requirement that could be problematic for approaches which try to avoid large buffers:
> A bigger issue with the use of Ed25519 or Ed448 in X.509 certificates is that these signature algorithms use an extra protection against collision attacks on hash functions: the initial hashing is done not on the signed data alone, but on the concatenation of the encoding of the public key and the signed data. This is called “PureEdDSA” in RFC 8032 terminology, and it makes the signature function more resilient against collision attacks, if the hash function turns out to be flaky in that respect (like MD5 or SHA-1). However, it means that the public key must be known before starting to process the signed data. In an X.509 certificate path provided on a stream in usual SSL/TLS order (end-entity first, then CA), the public key is made available only after the whole signed certificate has been received. Verifying a certificate path that involves use of EdDSA keys by CA thus requires buffering a complete certificate in RAM, something which has so far been carefully avoided by BearSSL.
Note that the end-entity certificate for google.com (albeit not using EdDSA) is nearly 4K, largely because of the huge list of Subject Alternative Names, and I imagine certificates could easily be larger in many other cases. You need to buffer the entirety of each certificate in the chain plus the initial part of the issuer's certificate, up to the subject public key (that is, the public key of the issuer of the previous certificate).
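To make the memory picture concrete, here's a hedged sketch of that buffering discipline, assuming a leaf-first chain and PureEdDSA signatures. The sizes and the recv_*/find_spki/eddsa_verify helpers are hypothetical placeholders, not a real record layer or DER parser; the point is that peak usage is one whole certificate plus the issuer's prefix up to its SubjectPublicKeyInfo.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_CERT   6144   /* assumed ceiling for one DER certificate         */
    #define MAX_PREFIX 1024   /* assumed ceiling for issuer bytes up to the SPKI */

    typedef struct {
        uint8_t buf[MAX_CERT + MAX_PREFIX]; /* peak: all of cert N + prefix of N+1 */
        size_t  cert_len;                   /* length of certificate N              */
        size_t  prefix_len;                 /* bytes of certificate N+1 held so far */
    } chain_buf;

    /* Hypothetical helpers -- stand-ins for the record layer, a DER parser
     * and an EdDSA verifier; none of these are real library calls. */
    size_t recv_cert(uint8_t *dst, size_t cap);           /* whole certificate   */
    size_t recv_until_spki(uint8_t *dst, size_t cap);     /* bytes up to the key */
    size_t recv_rest_of_cert(uint8_t *dst, size_t cap);   /* remaining bytes     */
    void   find_spki(const uint8_t *prefix, size_t len,
                     const uint8_t **key, size_t *key_len);
    int    eddsa_verify(const uint8_t *signed_data, size_t len,
                        const uint8_t *key, size_t key_len);

    int verify_chain(chain_buf *b, int chain_len)
    {
        b->cert_len = recv_cert(b->buf, MAX_CERT);         /* end-entity first */
        for (int i = 1; i < chain_len; i++) {
            /* Read the issuer only as far as its SubjectPublicKeyInfo. */
            b->prefix_len = recv_until_spki(b->buf + b->cert_len, MAX_PREFIX);
            const uint8_t *key; size_t key_len;
            find_spki(b->buf + b->cert_len, b->prefix_len, &key, &key_len);

            /* PureEdDSA hashes pubkey || signed data, so certificate N had
             * to stay buffered in full until the issuer's key showed up. */
            if (!eddsa_verify(b->buf, b->cert_len, key, key_len))
                return 0;

            /* Certificate N can now be dropped: slide the issuer's prefix
             * down and stream in the rest of it; it becomes the new cert N. */
            memmove(b->buf, b->buf + b->cert_len, b->prefix_len);
            b->cert_len = b->prefix_len +
                recv_rest_of_cert(b->buf + b->prefix_len, MAX_CERT - b->prefix_len);
        }
        return 1;  /* the last certificate still has to be checked against a trust anchor */
    }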
This assumes the certificate chain is received in the optimal order. I think TLS 1.3 actually dispenses with the requirement that a chain be delivered in any specific order, so in the worst case you may be stuck having to buffer almost the entire chain.
I can't remember if it actually ended up in TLS 1.3, but yes, the "ordered list" is really just a bunch of maybe-useful hints about why you might trust these keys. The end-entity certificate is useful for any general-purpose client, and then intermediates make sense if you didn't already know them (e.g. Firefox actually knows them all); from there it's basically a judgement call about what common clients might not know.
You could still stream this, at the cost of some wasted network bandwidth, which may be more affordable than e.g. "add RAM to a device which already has its maximum RAM". Perform the handshake once, learn the key you need, and disconnect. Then perform the handshake again, this time assuming key K; when you learn the actual key, start over if it isn't K, otherwise you've succeeded.
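Roughly, that retry loop might look like the following C sketch; the tls_handshake helper and the key-cache layout are hypothetical placeholders for whatever the library actually exposes.

    #include <string.h>

    #define KEY_LEN     32
    #define MAX_RETRIES 2

    static unsigned char cached_key[KEY_LEN];  /* key K remembered from a prior run */
    static int have_cached_key = 0;

    /* Hypothetical helper: run the handshake; if `assumed` is non-NULL, verify
     * on the fly against that key instead of buffering certificates.  Writes
     * the key the peer actually presented into `learned`; returns 1 if the
     * handshake succeeded with a matching key, 0 otherwise. */
    int tls_handshake(const unsigned char *assumed, unsigned char learned[KEY_LEN]);

    int connect_with_key_cache(void)
    {
        unsigned char learned[KEY_LEN];
        for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
            const unsigned char *assume = have_cached_key ? cached_key : NULL;
            if (tls_handshake(assume, learned))
                return 1;                      /* assumption held: success */
            /* Wrong (or no) assumption: remember what the peer presented,
             * disconnect, and start fresh -- one extra handshake, no extra RAM. */
            memcpy(cached_key, learned, KEY_LEN);
            have_cached_key = 1;
        }
        return 0;                              /* peer keeps changing keys */
    }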
That's pretty impressive. Might be nice to bundle it with a proxy using the same library that runs locally. I think people on the retro machines wouldn't mind having to use a proxy if it was running on the same machine.
Many people stuck with MS-DOS and Windows 3.1 because they booted and ran faster than modern Windows. Plus they rely on 16-bit software that cannot be converted and cannot run on 64-bit Windows.
George R.R. Martin is one who still uses WordStar on MS-DOS because it's stable, and he saves his files on a LAN Manager share. He cannot tolerate a blue screen of death or a Windows Update reboot fouling up his work. I think he may be better off using headless Linux and Emacs with its WordStar key-binding emulation https://ftp.gnu.org/old-gnu/Manuals/emacs/html_node/emacs_46...
I wonder if it is possible to make https://github.com/wqweto/VbAsyncSocket compile on Visual Basic 4; then you could target 16-bit. There is already a VB5 branch.
As a retro PC enthusiast I greatly appreciate this work! I noticed the author also created the 16-bit native Wordle port, Windle, which lives in my retro software archive.
Lately I've been messing around on a Pentium III class machine, which is more than enough to run a somewhat recent version of Firefox, and many sites continue to load and work well enough. But I'd like to dive into older hardware next, like the 486 DX2 the author is using. Then perhaps I can test this library firsthand on real silicon.
RSS, Discourse, Fediverse, Matrix... I think interacting with a lot of modern things from old computers becomes quite feasible when you're not relying on dynamic web-based UIs.