I have always wondered a bit what the purpose of expiration dates is, for certificates and GPG keys alike. Once they expire, it often enough creates problems, either because renewal has simply been forgotten or because of some technical issue, like in the Let's Encrypt / Android case.
If you have a security incident you can't wait for the expiration date anyway, you need to revoke. And hopefully users have a mechanism to note the revocation in time.
So expiration dates help to avoid using weak algorithms forever. But does that need to be done on a fixed date per certificate? Wouldn't a more gradual approach in the libraries work? If they are not updated, they get less secure as time goes by, and sooner or later compatibility issues will do the job. E.g. most Nokia Symbian phones don't do HTTPS any more, because all they can handle is SHA-1 certificate signatures. That was a gradual decline in functionality.
I agree with the sentiment that cert expiry is a frequent cause of outages. I think short-lived certs and automated cert management are a better fix for that issue than certs that never expire.
If you don't have an expiry then you'd need to keep certificate revocations around forever, so your CRL grows without bound. Currently the certificate revocation list can remove expired certificates as clients won't use them. There's also lots of clients that don't check CRLs. Those clients would be forever vulnerable to a stolen certificate if we removed expiry.
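The pruning argument can be sketched in a few lines; the serial numbers and dates here are made up purely for illustration:

```python
from datetime import datetime, timezone

# Hypothetical revoked-cert records: (serial, certificate notAfter).
# In a real CRL these would come from parsed X.509 data.
revoked = [
    ("01:ab", datetime(2019, 6, 1, tzinfo=timezone.utc)),
    ("02:cd", datetime(2030, 1, 1, tzinfo=timezone.utc)),
]

def prune_crl(entries, now):
    """Drop revocations whose certificates have already expired:
    clients reject expired certs anyway, so the CRL need not list them."""
    return [(serial, not_after) for serial, not_after in entries
            if not_after > now]

pruned = prune_crl(revoked, now=datetime(2024, 1, 1, tzinfo=timezone.utc))
print([serial for serial, _ in pruned])  # only the unexpired 2030 entry survives
```

Without a notAfter to compare against, every entry would stay on the list forever.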
Imagine if you inherited the infrastructure for a company where the previous admin created certs, lost track of them, and used questionable methods for securing them. I believe you'd struggle to find and revoke every previously issued certificate. The expiry ensures that gets cleaned up, eventually.
I don't have a good solution to the Nokia Symbian problem. I suspect if the device only supports SHA-1 then there are other unpatched security issues. Does anyone know if that's the case? If so, this seems more like a support and patching issue.
> I don't have a good solution to the Nokia Symbian problem.
I told that mostly as an anecdote. I don't think a solution is really expected. The devices I referred to (made 2005 - 2007) worked rather well for about 10 years. I guess that's a satisfactory achievement.
> There's also lots of clients that don't check CRL
This is in part due to the difficulty of doing so. On Linux at least there is generally not a central location for CRLs, or anything to keep them up to date. Each application is responsible for maintaining the CRLs it cares about itself. And then there is the fact that a full set of CRLs is pretty big. OCSP fixes some of that, at the cost of more latency during the TLS handshake. OCSP stapling is probably the best existing solution, but I don't think it is widespread enough that you can rely on it yet.
That is certainly an interesting project, I wish it success, and it may very well improve things in the future. However, it looks like currently it is only experimentally used in firefox and isn't really easy to use in other applications. And even if there was a readily available library for consuming crlite files, there is still the problem of keeping the local crlite filter up to date and if you want to use different trust anchors than firefox, you would need to have the infrastructure to generate your own crlite filters periodically.
OCSP stapling always felt a little hacky to me. It feels like just having super short-lived certificates but with extra steps. Why not just have super short-lived certificates instead?
Things are moving in the other direction. Browser vendors have forced certificates used for browsers to a max life of 398 days.
There are multiple reasons, but you seem focused on the certificate management side. One advantage of doing things more frequently is they are forced to become more routine or automated. So shorter expirations should actually make expiration failures less likely over time. Of course the transition sucks.
Apple eventually clarified, and other root trust stores followed, by making longer-lived new certificates a policy violation rather than just not working on a Mac / iPhone.
That's a meaningful distinction. Certificates which lack SCTs (the proof that they were shown to Certificate Transparency logs before you) don't work in some popular browsers, but those certificates are not policy violations, they just don't work in browsers so you probably should not use them on a web site. In a handful of cases such certificates exist for legacy reasons (e.g. industrial environment that doesn't know anything about the "Web"), in other cases they're minted but not intended to be seen yet, for example Google's front end facing systems can do this.
When Google accidentally made some of those certificates live with insufficient SCTs they just did not work in Chrome - which is embarrassing but it was not a policy violation, the subsequent root cause analysis was Google's choice not mandated by the other root trust stores and there was no threat that anybody's trust would get revoked.
That lifetime is for domain certificates. CA certificates, which are what matters for the Let's Encrypt issue, have much longer lifetimes, like 10 years.
In order to revoke a certificate, you first must at least suspect it has been compromised.
An expiration date can help limit the impact of compromises you don't suspect.
It's also important to see that the SSL/TLS system does not allow you to revoke a certificate with 100% certainty, the revocation mechanism is flawed - it's often practically impossible to reliably revoke a certificate after someone has compromised your private key. The only feasible workaround is early expiration to reduce the window of vulnerability.
* CRL is a list of revoked certificates; it must be downloaded and applied by all clients, which only happens once in a while as a new update, and many hate system updates. The huge lists of revoked certificates have size, performance, and scalability problems; they are currently considered obsolete and have mainly been replaced by OCSP.
* In OCSP, clients ask a CA's OCSP server to determine the validity of a certificate. It operates out-of-band from the main TLS connection, and networks and CA servers are not reliable, especially in the early days. Timed out OCSP requests were common. If you are a web browser, you don't want a failed OCSP request to block/DoS HTTPS, thus OCSP becomes essentially "optional". Browsers reject bad certificates when the connection goes through, but if it times out, nothing happens. An attacker can simply block the OCSP server to bypass the revocation check. There's also the problem of privacy - you have to send the name of every website you visit to third party servers.
* In OCSP Stapling, the webserver acts like a proxy - it hosts a cached copy of the OCSP response for itself (which cannot be forged, because it's signed by the CA) and sends it to clients in-band, via the normal TLS channel. Because the client no longer asks the CA's OCSP servers for a response, but instead simply asks the webserver itself, it eliminates the problem of third-party OCSP servers, solving the reliability and privacy issues. However, there's nothing to prevent an attacker from running a server with a compromised certificate with OCSP Stapling turned off.
* In OCSP Must-Staple, you can get a certificate from a CA that says "you must use OCSP Stapling, otherwise this certificate is null and void." Hence, if a browser sees a certificate with "OCSP Must-Staple" enabled, it must check whether the webserver supports OCSP Stapling, and via OCSP Stapling, the certificate's validity will be determined. If OCSP Stapling is not supported by the webserver, the connection is rejected. Thus, an attacker who uses a compromised certificate has to either disable OCSP Stapling and be rejected, or enable OCSP Stapling and get caught. Finally the problem of reliable revocation is solved. But this is an optional feature and is only used by a tiny percentage of sysadmins. Also, it's only supported by webservers and browsers; other TLS-based applications like VPN, FTPS, SMTP, IMAP, XMPP, and IRC servers/clients usually don't support OCSP Stapling at all.
Thus, it's often practically impossible to reliably revoke a certificate in SSL/TLS after someone has compromised your private key. The only feasible workaround is early expiration to reduce the window of vulnerability, an unrevokable certificate that expires within 90 days is less dangerous than a zombie certificate that lasts two years.
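The difference between plain OCSP's soft-fail and Must-Staple's hard-fail boils down to a tiny decision function. This is an illustrative sketch of the logic described above, not a real TLS stack:

```python
def accept_connection(ocsp_response, must_staple):
    """ocsp_response is "good", "revoked", or None (server blocked / timed out)."""
    if ocsp_response == "revoked":
        return False                 # a definitive "revoked" is always honored
    if ocsp_response is None:
        # Plain OCSP: browsers soft-fail, so an attacker who blocks the OCSP
        # server gets the connection through anyway.
        # Must-Staple: the certificate itself demands a stapled response,
        # so "no answer" becomes a hard failure.
        return not must_staple
    return True                      # a "good" response: connection proceeds

print(accept_connection(None, must_staple=False))  # soft-fail: accepted anyway
print(accept_connection(None, must_staple=True))   # hard-fail: rejected
```

The attacker's "block the OCSP server" trick only works in the first case.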
No, I don't think so. According to RFC 6066, Section 8 "Certificate Status Request" (a.k.a. OCSP Stapling)...
> In order to indicate their desire to receive certificate status information, clients MAY include an extension of type "status_request" in the (extended) client hello. Servers that receive a client hello containing the "status_request" extension MAY return a suitable certificate status response to the client along with their certificate.
And in a TLS handshake, the procedure is ClientHello, ServerHello, Server Certificate, CertificateRequest, Client Certificate. But "Certificate Status Request" is only specified in ClientHello, not CertificateRequest, so it's not possible for a server to request OCSP status from the client.
Hence, OCSP Stapling is strictly a service provided to the clients, not vice versa - unfortunately, reliable revocation of client certificates is still a problem. But I guess it's much easier to blacklist the offending client certificates on your own server; perhaps still problematic in a big organization, but certainly easier than blacklisting every leaked certificate on the Internet.
Disclaimer: I don't have any experience managing systems that use client verification. I did my best to RTFM and fact-check myself, but corrections are welcome.
This is indeed as far as I got as well. Maybe we could make it work if we made an SSL extension ourselves, but I'm not versed enough in them. It would also require controlling all SSL libraries in use.
For leaves and everything else below a root, the expiration (notAfter in X.509) tells us when the issuer ceases to be responsible for the contained assertion being true.
For example take a 90 day Let's Encrypt certificate. When it was issued ISRG had recently (likely seconds earlier) verified that the subscriber apparently had control over all the names listed.
Suppose a month later one of the names has been sold to you. You're entitled to have that certificate revoked. Since you control the name, you can just do the Let's Encrypt proof-of-control dance and get it revoked, issue a new one instead, whatever you want.
They issue millions of certificates every day, so they don't want to remain indefinitely responsible for the ones from last July, from 2018, and so on. Ninety days later, having expired, ISRG has no opinion about whether the holder of the certificate still controls those names, and if you call that revocation API it will refuse, there isn't anything to revoke because it has expired.
Now, if the certificate you're shown is 15 seconds expired, is it so much more likely that it's really because they don't control this name any more than it was a minute earlier? Not really, but we must draw a line in the sand somewhere, or else a 10 year old certificate for "www.microsoft.com" is still valid right?
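That line in the sand is literally just a timestamp comparison. A sketch using the Python stdlib's ssl helper, with a made-up notAfter value shaped like what ssl.SSLSocket.getpeercert() returns:

```python
import ssl
import time

# Hypothetical certificate info, in getpeercert()'s dict format.
cert = {"notAfter": "Dec 11 12:00:00 2024 GMT"}

def is_expired(cert, now=None):
    """The verifier's rule: past notAfter means rejected, full stop."""
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return (time.time() if now is None else now) > not_after

deadline = ssl.cert_time_to_seconds(cert["notAfter"])
print(is_expired(cert, now=deadline - 60))  # a minute before expiry: False
print(is_expired(cert, now=deadline + 15))  # 15 seconds after: True
```

Nothing about who controls the name changed in those 75 seconds; only the comparison result did.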
For the root the argument is a bit trickier, after all they could just issue themselves a new certificate with the same keys but a different expiry date - and some root CAs have done this in the past.
In this case (for roots), as we see with Android, not every system actually enforces expiry. Out of the box, OpenSSL doesn't, for example, I think. But the argument for why expiry matters for roots goes like this: if we are actively managing the trust store, clearly a trusted CA which wants us to continue trusting them should send us a new or updated root in plenty of time. If they forgot, we can hardly have confidence in their other processes. If they decided not to, we certainly should cease to trust their root when it expires, as this signifies they presumably do not intend to obey policy any more, for this root at least.
For example, if you have a Firefox that you just decide not to ever update, I believe it won't enforce expiry of roots. Newer Firefox builds will, like clockwork, remove expired roots entirely. But the old build will continue to trust roots that have notionally expired if you are still running it.
> If you have a security incident you can't wait for the expiration date anyway, you need to revoke. And hopefully users have a mechanism to note the revocation in time.
This is the key issue. For a long time there was no working revocation mechanism. With a combination of OCSP, OCSP stapling and the muststaple extension you can get there, but this isn't in widespread use yet. Revocation is a difficult problem.
You can remove expired certificates from the list of revoked certificates. Without an expiration time that list could grow without limit, making it impractical.
I know the concept is offensive to us who love optimization and efficiency, but would it actually be impractical, though? A root certificate is a couple of kilobytes. How much space would you need to store every single certificate in history for the next hundred years?
For root certs probably not too much space, but for all certs... that depends on the number of certs issued, and with more people and things using TLS it would probably be more than just linear growth. A hundred years also seems a bit extreme; just look back 20 years at the hardware of the time. Also think about how common MD5 and SHA-1 were then, both of which are now broken for a lot of use cases.
Let's Encrypt alone has issued about 1.5 billion certificates in five years of operation; just storing those in DER form with no indexing or other metadata would occupy more than a terabyte of storage. (This is also not counting CT precertificates, which are duplicative of the final certificates in their meaningful content but contain different signature data.)
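The back-of-envelope math behind "more than a terabyte", assuming a rough average of 1 KB per certificate in DER form (the average size is my assumption, not a measured figure):

```python
certs_issued = 1_500_000_000   # ~5 years of Let's Encrypt issuance
avg_der_bytes = 1_000          # assumed rough average leaf cert size in DER

total_bytes = certs_issued * avg_der_bytes
print(total_bytes / 1e12, "TB")  # ~1.5 TB before any indexing or metadata
```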
I can understand the purpose behind the expiration dates on the SSL certificates you get issued for your domain — domains change hands, things happen. But why do root certificates have to expire at all? Relying on the system to be updatable and requiring constant maintenance doesn't feel very sustainable, and generally causes all kinds of problems. Modern Android versions treating user-installed root CAs as second-class don't exactly instill confidence either.
I assume expiration protects against the case where a valid certificate is forgotten and a bad actor gets their hands on it and abuses it without the domain owner noticing. Similar to how some sites enforce session expiration.
Maybe because certificates are treated as an ephemeral resource that needs to be managed? If a business had "the certificate" for all its lifetime I imagine it would be easier to forget revocation when it's not needed anymore.
Yet consider if root certificates expired after at most one year. That by itself would long ago have forced, if not the phone vendors, then at least app developers to deal with the need to update the root certificate DB. And the Let's Encrypt issue would not exist.
I had a co-worker who was pushing for certs that were only valid for a day, or for hours or even minutes. It would solve the whole problem of revocation.
When I've implemented JWT in the past (yes, I know, "don't use JWT"), I've opted for public/private signatures. The public cert I would publish on some URL for all consumers of the JWT to use. The rotation of certs there was a matter of hours. It would hold 3-4 of the last certs (depending on JWT lifetime).
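The rotation scheme described above amounts to publishing the last few public keys under key IDs and letting consumers look up the right one by the token header's "kid". A stdlib-only sketch; the key IDs and placeholder PEM strings are invented for illustration:

```python
import json

# Hypothetical published key set: the current signing key plus the last
# few retired ones, kept around until tokens signed with them expire.
published_keys = {
    "2024-10": "<public key PEM>",   # current signing key
    "2024-09": "<public key PEM>",   # still valid for in-flight tokens
    "2024-08": "<public key PEM>",
}

def key_for_token(header_json):
    """Consumers read the token header's 'kid' and look it up in the
    published set (in practice fetched from a well-known JWK URL)."""
    kid = json.loads(header_json)["kid"]
    return published_keys.get(kid)   # None => key rotated out, reject token

print(key_for_token('{"alg": "RS256", "kid": "2024-09"}') is not None)  # True
print(key_for_token('{"alg": "RS256", "kid": "2024-01"}'))              # None
```

This mirrors the JWK set model: rotation is invisible to consumers as long as they refetch the key set periodically.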
What's wrong with using JWTs? What you have done should be industry standard and is enabled by the JWK spec. In fact several features of OpenID Connect are built upon signed JWTs from a well-known JWK store.
The problem I think is that JWT is just a small puzzle piece of a great artwork that is identity and auth. It is overly complex (not without reason) and people don't put in enough effort to learn the whole picture (and it just isn't thought in schools) and just want to use something quickly for their current problem. Which is why they pick JWT and don't understand the host of related technologies - OAuth2, OIDC, JWK, JWE - just to name a few.
I don't see a reason why I would ever want to use auth=none for anything as at that point JWT is just base64 encoded json but I may be too deep in the identity world by now.
You have to draw the line somewhere. Daily certs would probably increase the load for CAs by a lot. Also, energy consumption for certificate management would increase.
How compute intensive is a CA, anyway? I'd think that a few Raspberry Pi-ish devices would be enough to do it for an entire enterprise (or maybe even one, were it not for fault tolerance).
In general I would assume that when you shorten certificate validity from 90-ish days to one, the computational resources required for issuance will increase to about 90x of what they were before. So it would be a lot more expensive to run.
Also, unrelated to that argument but I think the most expensive part in a CA is not issuance but rather OCSP since an OCSP query could happen with every TLS connection (although stapling exists to solve that). OCSP would not be affected by a shortening of validity periods.
Certificate revocation has a completely different level of reliability from normal certificate management. There are theoretical flaws that cannot be fixed.
I just always assumed it was like the rest of SSL:
It’s well documented that NSA saboteurs infiltrated the standards board. They forced through a bunch of bad proposals with the intention of making it overly complicated. The idea was to encourage misconfiguration and implementation bugs.
He’s referring to the leaks about the NSA putting back doors into algorithms, which Snowden exposed. Those algorithms were suspect from the beginning and avoided. It’s possible others have gone undetected, but that’s pure speculation without any kind of proof at this time. It’s also wholly irrelevant to this discussion and just pure FUD. Certificate expiration is needed to make certificate revocation perform well: otherwise you need to keep the list of all certificates ever revoked, whereas with expiration you can skip checking expired certificates and, more importantly, downloaded revocation lists can prune certs that are expired anyway.
If anything, now that everything is connected to the internet you want shorter revocations (like days, weeks or months). That way the potential for abuse is shorter and the path for renewal is better trodden by organizations (ie less likely to forget about an expiring cert).
I wasn’t referring to the cryptosuite weakening. I was referring to unnecessary complexity in the SSL protocol itself, such as the whole certificate chain parsing mess, the countless opportunities to implement things vulnerable to downgrade attacks, and the overly-broad attack surface of the whole thing.
It made the rounds in the press years ago. Here’s a first person account for ipsec, which was one of the first hits I found when looking for information about the SSL weakening:
It’s describing the same tactics, but a different protocol. Honestly, just crack open the SSL spec. In hindsight, it’s pretty obvious it was intentionally over-complicated.
The Wireguard protocol attempts to fix these issues by hardcoding everything behind a protocol version number. It’s vastly easier to implement and configure properly.
"IdenTrust has agreed to issue a 3-year cross-sign for our ISRG Root X1 from their DST Root CA X3. The new cross-sign will be somewhat novel because it extends beyond the expiration of DST Root CA X3. This solution works because Android intentionally does not enforce the expiration dates of certificates used as trust anchors."
This "Android intentionally does not enforce the expiration dates of certificates used as trust anchors" - seems like another issue. And now LE is basically building features on an implementation flaw?
I agree with you conceptually, but it seems like everyone wants this and nobody doesn't want it, so as the lawyers sometimes say, "no harm, no foul".
In particular, the users of the Android devices want to continue to be able to access sites protected with Let's Encrypt, the site operators want to continue to allow this, Let's Encrypt wants to continue to allow this, and the auditors and root program operators decided that they don't consider it improper either.
You're right to describe it as an implementation flaw -- among other things, it removes the lever that root programs normally have to ensure continued compliance by CAs.
Expiration of certs is one thing, but expiration of cross-signatures in chains is another. If you trust the infra for root certs, it makes no sense to ever expire cross-signings as long as you have a signed timestamp to prove the signing happened before the (cross-)signing cert expired.
For example, Microsoft recognizes Authenticode-signed binaries even if the signing certificate has long since expired, provided the signature includes an optional timestamp from an also-recognized root cert (usually DigiCert or Symantec née Verisign), because you can verify the binary hasn’t since changed (the signature includes the sha-x hash, and the timestamp verifies that the signing took place before the certificate expired). It’s quite clever and provides a very good solution for running decades-old apps from now-defunct software houses that once took pains to sign their releases.
A certificate itself is such a “binary” that really just testifies that at date x, party y trusted z (unless a revocation was subsequently issued or the algorithm used is no longer considered secure).
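The timestamp rule above can be written down as a tiny predicate. An illustrative sketch of the logic as described, not Microsoft's actual verification code:

```python
from datetime import datetime, timezone

def signature_still_trusted(cert_not_after, countersign_time, now):
    """Authenticode-style rule (sketch): a signature outlives its signing
    certificate if a trusted timestamp proves the signing happened while
    the certificate was still valid."""
    if countersign_time is None:
        # No trusted timestamp: the signature dies with the certificate.
        return now <= cert_not_after
    return countersign_time <= cert_not_after

not_after = datetime(2015, 1, 1, tzinfo=timezone.utc)   # cert expired long ago
signed_at = datetime(2014, 6, 1, tzinfo=timezone.utc)   # timestamped signing
today = datetime(2024, 1, 1, tzinfo=timezone.utc)

print(signature_still_trusted(not_after, signed_at, today))  # True: still trusted
print(signature_still_trusted(not_after, None, today))       # False: expired
```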
Trusted certificates should be treated as a separate (system) package that can also be upgraded without the whole OS being upgraded. That's how they are treated on most of the Linux distros out there.
Android has opted instead for tightly coupling the certificates to the system itself. That's a very bad design decision that, either intentionally or unintentionally, makes a device useless 4 years after the latest system upgrade.
One more reason for either ditching Google's Android in favour of better supported and less abandonware-prone systems - Lineage is an excellent choice for those who don't want to give up the commodities of Android, but don't want to run the risk of throwing away their $1000 phone 4-5 years after the purchase just because Google decided not to push certificate updates to it anymore.
Android tablets are probably part of the problem. I still use my Sony Z3 Compact tablet. It is thin, waterproof, supports a SIM and SD card. I have never found a good replacement, though I'd like to. I mostly use it for reading ebooks and Pocket and RSS feeds so it handles the tasks well, but no upgrades, eventually it will need a new battery and now I see root certs are built in guaranteed obsolescence (like on the iPod touch). Unfortunately I can't find any high quality Android tablets as manufacturers have relegated Android tablets to the price sensitive consumer demographic. And iPad mini isn't waterproof.
It also works well with older Bluetooth BLE devices that won't connect with my newer Android phones.
Glad you've stated this because going forward, software generally gets deprecated more often than maintained (I'm sure there are a few rare cases). Our hardware is very intertwined with software, so we toss out perfectly good hardware because it isn't running the latest firmware. Can manufacturers allow late-stage open source rom conversions on smart devices and allow the community to keep supporting legacy systems?
Yup. My father has an old Galaxy Note 3 Neo (Dual Sim edition). That particular model never got any updates at all, so it was stuck on the release Android 4.3.
While it's old, it actually fulfills all his needs very well and he has little need for a newer phone... but he can't get new apps, because all of them require at least Android 5.0 (the regular Note 3 / Note 3 Neo Single Sim got the update to 5, but not the dual SIM). And the applications that are already on the device (Google stuff) actually no longer get updates, and occasionally pop up "This version is too old! You have to update!" warnings that he can do nothing about.
I would be concerned that chaining to an expired root is going to cause some certificate verifiers to flag the chain as invalid. Similar to when the Sectigo/AddTrust root expired recently (though perhaps that was an issue because the cross-signed certificate expired).
If you support older systems, you really need to try to separate out clients to different hostnames (and sometimes different IPs, if they don't have good SNI support), so that you can provide each segment with the certificate chain it needs. Trust root issues are a lot harder than the SHA-1 deprecation, because at least there was a fairly well used TLS option exchanged for key support (it doesn't exactly mean support for keys in certificates, but it worked pretty well). There is a TLS option to provide trusted CA information, but it has zero mainstream use (and it doesn't seem terribly usable anyway; sending all your trusted root CAs is too much data, a digest/id would be much more reasonable).
Thankfully, browsers usually have forgiving verifiers, but if you support applications, libraries for certificate verification are all over the place; and if you support 3rd party clients, good luck. X.509 is too big a spec to want to bundle with an app, but you can bundle in trusted roots if you're targeting a sane platform.
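The segmentation idea above reduces to picking a chain per SNI hostname. A sketch; the hostnames and file names are made up, and in a real server you'd wire the selection into the TLS stack (e.g. Python's ssl.SSLContext.sni_callback):

```python
# Map SNI hostnames to the chain each client segment needs. Legacy clients
# get the chain built on the older cross-signed root; modern ones get the
# short chain. All names here are illustrative, not real deployment values.
CHAINS = {
    "legacy.example.com": "fullchain-cross-signed.pem",  # for old trust stores
    "www.example.com": "fullchain-modern.pem",
}

def chain_for(servername):
    """Pick the certificate chain file for a given SNI hostname."""
    return CHAINS.get(servername, "fullchain-modern.pem")

# A real server would wire this into the handshake, roughly:
#   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
#   ctx.sni_callback = lambda sock, name, _: select_context(chain_for(name))
# where select_context swaps in an SSLContext loaded with that chain file.

print(chain_for("legacy.example.com"))   # fullchain-cross-signed.pem
print(chain_for("unknown.example.com"))  # falls back to the modern chain
```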
The way I see it, there are three major consumer operating systems; four if you think iOS and OSX should be considered separately. Out of those three/four, only one tries to keep devices working and up to date as long as possible...
iOS/OSX allows central updating, but drops phones older than 5 years, and desktop/laptops from 5-7 years depending on the product series; users cannot maintain the OS themselves and are forced to abandon the device.
Android allows users to update the OS themselves against the wishes of the phone's OEM, but Google cannot force OEMs to continue supporting a phone within a reasonable timeframe (i.e. no phone younger than 5 years should be dropped, yet this is an unfortunately frequent occurrence with Android-hostile OEMs).
Microsoft tries to keep Windows running on PCs an absurdly long time, but users choose to fight Microsoft on this even when it's in their best interest (e.g. people still willingly run Windows 7, even though Windows 10 performs better, with fewer crashes and fewer security bugs).
Alas, the dream of a Windows Phone is dead. Other than that, it's most certainly an odd phrase in that article given what iOS does, since the user can still choose to go the third-party Android ROM route if their OEM has truly abandoned them.
Could they do it without alienating their hardware partners? No.
Does Android require the Google Play store or any other Google services or non-AOSP code? No. Many Android devices ship entirely without Google stuff, including no Google Play store.
Could they do it even if they didn't care about alienating their hardware partners? Yes, if they also intend on being fined into the ground by the US and EU.
Their hardware partners, the OEMs, are what make Android suck for Android users. It doesn't have to suck: OnePlus doesn't, and Google's own Pixel series isn't bad either. Avoid Samsung like the plague; LG and HTC also have questionable practices w.r.t. ROM sanity and long-term support.
Google's gotten away with extremely anticompetitive requirements so far. I can't imagine it would be worse for them to mandate 5 years of software support, something that's very positive for consumers.
KaiOS and Tizen also have userbases wide enough to be considered major consumer OSes (even if primarily targeting markets such as India).
The Switch OS could probably also be considered major at this point with the number of devices out in the wild. Same goes for LG's webOS. VxWorks is also fairly big, although at least Canon changed over to something custom for their cameras.
And that's just the consumer-facing OSes. There are a lot of surprising ones, like Apple's use of NetBSD for their AirPort devices.
OS diversity is high outside common desktops, and desktops are a vanishingly small portion of consumer computers.
They can be updated by the creator; the updates don't have to be run, but the device can announce that there is an update. With Android, last I heard, you had to wait for your manufacturer to create the update, then for your telecoms provider to create an update, and only then would you see there is an update.
I don't think it's an issue on iOS; Apple is able and willing to update old devices if need be. My 5S still receives the odd security update on iOS 12 from time to time (basically when something really big comes up).
If it would make sense, I'm pretty sure Apple could update the first iPhone still. And that would cover every iPhone 1 in existence. Android with its fragmentation makes that pretty much impossible.
I have an iOS 9 device (a 2011 iPad) at home which in theory has updates pending, but it's not able to install them. I guess I could try a factory reset, I will think about it in 2024. :)
"The new cross-sign will expire in early 2024, and hopefully versions of Android from 2016 and earlier will be dead by then."
Hard disagree. In this day and age, a device or OS that is merely 8 years old should be able to function! Is the issue limited to Let's Encrypt? If so, its usage should be discouraged.
It's the interaction of certificate expiration with no OS updates.
The lack of OS updates happens to correlate with hardware age for Android phones.
That's also the reasonable place to apply pressure, to the people marketing devices with minimal future support, which means they quickly become insecure.
Just FYI, the issue is with certificates in general, nothing to do with LE specifically.
Dumbed down, the problem is that someone the old device has never heard of is not considered trustworthy.
Of course, an OS should still work after 8 years. But the problem is that the OS has been abandoned (by the device manufacturer and possibly the community), so it is falling apart. Anything that accesses the internet needs regular maintenance to avoid becoming a security hazard.
I think it's a bit LE-specific here, though: according to the article, real trust anchors do not expire on Android, so this only happens because LE is relatively new and was not a bundled CA root at the time, and had to be cross-signed by a CA that was bundled (and it is this intermediate signature that expires, if I understand correctly).
> In this day and age, a device or OS that is merely 8 years old should be able to function!
There's no need to worry about this. If the scheme in the OP works, then the same trick can be used indefinitely until such time as the Android versions in question become completely marginal, be it 4, 40, or 400 years from now.
Encountered a similar issue with an app of mine that implemented certificate pinning. There is still a large iOS 9 user base using a version of the app that contains a soon-expiring list of pinned certificates. Unfortunately we will likely have to drop support for those users entirely.
Seems like there would be other serious problems with running an OS that hasn't been updated in four+ years. I'm not going to bother with a survey, but if memory serves, there have been at least a half dozen serious exploits revealed among different SSL libraries, bluetooth stacks, and WiFi.
Maybe letting the certificates expire would have actually helped to secure the IoT.
>Maybe letting the certificates expire would have actually helped to secure the IoT.
You would also have hundreds of millions of people who can't connect to most websites anymore:
>Let's Encrypt says it was added to Android's CA store in version 7.1.1 (released December 2016) and, according to Google's official stats, 33.8 percent of active Android users are on a version older than that.
Not everyone can afford to buy a new phone every 4-5 years. You would alienate a lot of people from smartphones, from websites that use Let's Encrypt, or maybe even websites that use HTTPS.
But if your device is just some thermostat behind a NAT firewall talking to one server, what’s the risk if it isn’t opening up ports or accepting unsolicited connections?
Let's be real here -- is this a problem with the device, or with site operators?
All the major players that promote TLSv1.2 -- Mozilla, Microsoft, Google -- serve their websites over TLSv1.0 just fine. Because they understand that they have to serve old devices to sell products and make money even from such outdated clients, which are still plentiful by the absolute numbers when it's billions of users that we're talking about.
In fact, most people aren't even aware of this, but www.google.com itself still works without HTTPS at all. This can easily be verified with curl. Looks like www.bing.com is the same in this respect, because they've got to make their money from every user as well.
But what doesn't work? Wikipedia. Ironically, it's Wikipedia that intentionally engages in planned device obsolescence. Does read-only access to an encyclopedia editable by anonymous users even require TLS at all? Really?! And why is no one talking about this?
The worst part is that the tech community at large is doing absolutely nothing about this injustice -- about planned device obsolescence by the likes of Wikipedia. Most of these obsoleted TLSv1.0 devices have very fast CPUs and gigabytes of RAM, and are still perfectly capable of browsing most of the internet, yet non-technical people are genuinely unaware of these politics w.r.t. TLSv1.2. There's not a single advocacy group that I'm aware of that advocates for the rights of people who cannot use TLSv1.2. The Mozilla SSL Configuration Generator intentionally gives misleading TLS advice that Mozilla itself doesn't adhere to.
It’s really a shame. I had no reason to buy a new phone other than Google ditching support for the Pixel 2, a high-priced flagship, after only 3 years. Combine this with how much faster Android phones depreciate compared to iPhones (who wants a device you can’t get security updates for?) and it would have been cheaper to buy an iPhone. Needless to say, I didn’t buy another Android. I don’t think the update situation will improve further, to be honest; even Google doesn’t care enough to lead by example.
>Today, your example eight-years-obsolete install base of Android starts with version 4.2, which occupies 0.8 percent of the market.
Instead of hoping that the 0.8% will shrink over the next 4 years, Let's Encrypt should understand that the 0.8% are the sane, reasonable people who realize that their Android devices still work fine for their intended purpose and do not have to be mindlessly upgraded because of mass-media pressure.
We should work instead to expand that 0.8% to avoid ripping out the heart of the planet for pointless capitalism.
This is one of the reasons why I objected to Let's Encrypt in the first place. Anyone who takes any measure to make older devices obsolete when they are still perfectly capable should be ashamed!
We are in the middle of a mass extinction; we have destroyed the oceans with plastic and runoff; we have created an environmental catastrophe. Mindless media-driven consumerism must stop.
The issue here is not due to Let's Encrypt, it's due to these devices, like so many others, not being designed to be maintained for the long term.
Solving this problem for 1/3 of Android users is going to help reduce the problem of electronic waste; even when those devices eventually wear out, Let's Encrypt will not be the reason why.
> The issue here is not due to Let's Encrypt, it's due to these devices, like so many others, not being designed to be maintained for the long term.
How so? Do www.google.com or www.bing.com not work on those devices?
Why are you putting the blame solely on the devices, completely ignoring the fact that Let's Encrypt is an advocacy group whose sole intention is to make non-SSL-based HTTP obsolete? Keep in mind that HTTP has no reason to ever stop working on such old devices!
No one's asking for indefinite support of old software. What we're dealing with here is intentional device obsolescence, where the SSL protocol -- entirely optional if all you care about is reading text and looking at pictures of cats from random anonymous sources -- is intentionally used in a way that is not backwards compatible, preemptively removing support for earlier devices.
Completely agree on the hardware part but software should really receive security updates, especially when you're dealing with other people's data as is the case on a phone. Blame the phone vendors that don't have an adequate support plan for their devices. Or the carriers that sell unsustainable phones with their plans.
But Let's Encrypt is really not the one to blame here. They aren't responsible for outdated phones, they aren't responsible for expiring roots and they actually found a solution for these phones (as described in the article).
They're not perfectly capable; they are security-hole-ridden messes that haven't received a software update of any kind in years. That's not the fault of Let's Encrypt. Shooting the messenger is generally not a winning strategy.
You're really going to scrap your car from 2015 because the off-brand Android doo-hicky they stuck in it hasn't been updated?
Even if you're technically inclined, it's not like installing some community-provided ROM image on your slightly out-of-date flagship phone. The device likely has some proprietary aspects that would render it useless even if you tried.
The part didn't break. Someone who thinks they know better decided to make it not work, and complicated some normal person's life with some security theater.
If a manufacturer intentionally made a product they sold you stop working, much like how Tesla capriciously disables fast charging, it would be a violation of the Magnuson-Moss Warranty Act.
Exactly! It's so upsetting that the narrative has been so distorted that most folks don't even understand that it's not the manufacturer that intentionally makes the device unusable, but the website operators, and those who support and misleadingly advise such operators. The biggest problem is that folks like Mozilla tell everyone to disable TLSv1.0 while themselves still supporting it, which is about the biggest case of hypocrisy there could be.
Are you aware of any groups working against such planned device obsolescence? My latest gripe on this matter is Wikipedia -- it's beyond absurd that anonymous users can make changes to the contents of pretty much any of its millions of pages, yet getting said pages over pristine networks is conditional on TLSv1.2 support, locking older devices out of even read-only access to Wikipedia for absolutely no good reason.
Crypto keys and algorithms need to be updated or they stop working. Web browsers are full of exploitable bugs that get discovered over time. These are both provable statements. So any manufacturer that locks those in time is practicing planned obsolescence.
But I don't need crypto to read text and look at pictures posted by anonymous users, so the crypto point is moot.
You're also missing the context. Why does it matter if someone's browser has exploitable bugs if the only sites they visit are unlikely to use such exploits?
So, such planned device obsolescence is conditional on two explicit actions on the site owner's side: (1) prohibiting HTTP traffic, and (2) prohibiting TLSv1.0 traffic. It's 100% the site owner's actions that cause the device to become obsolete and make their own site inaccessible; the manufacturer has zero control over the actions of individual site owners. On the other end of the spectrum, Google, Microsoft, Bing, Amazon, and plenty of other businesses don't go out of their way to disable either of those things, so their sites still work -- including over HTTP in the case of the search engines. That is definitive proof that it's the fault of those specific sites (like Wikipedia) that take explicit actions to make older devices obsolete.
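For what it's worth, the server-side choice being argued about here is a one-line configuration decision. As a sketch, using Python's ssl module purely for illustration, this is what "not taking action (2)" looks like (whether the underlying OpenSSL build still permits an actual TLS 1.0 handshake is an assumption; many modern builds restrict it at the security-level or compile-time stage):

```python
import ssl

# Illustrative server-side context that leaves TLS 1.0 enabled for legacy
# clients while still offering TLS 1.2/1.3 to modern ones. Note: this only
# sets the advertised floor/ceiling; the OpenSSL build underneath may still
# refuse a real TLS 1.0 handshake (assumption to verify per platform).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1      # legacy floor
ctx.maximum_version = ssl.TLSVersion.TLSv1_3    # modern ceiling
```

A site that flips `minimum_version` to `TLSv1_2` (or serves no HTTP listener at all) is making exactly the two explicit choices described above.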
P.S. Incidentally, this also proves the point about capitalism. As a result of misleading propaganda campaigns promoting HTTPS everywhere, most smaller sites inadvertently act as precursors of planned device obsolescence. Meanwhile, the big players that need to make the last cent from every person in the world -- regardless of how old their device is, or what actions their provider takes against encrypted traffic -- are fully capable of getting exceptions to PCI compliance or whatnot and continuing to serve their sites over TLSv1.0 as well as plain old HTTP.
Even if all you want is plain HTTP, when HTTPS breaks it's mostly the fault of the device, and the product is still experiencing planned obsolescence caused by the manufacturer. And it would be stupid of a site to allow old versions of TLS, since that compromises the people depending on HTTPS; if there's going to be an insecure access method it should be HTTP.
And don't be so dismissive about privacy. Crypto isn't just for banking.
Also it's not really a capitalism thing, capitalism is too busy trying to sell you an update every 2 years to care about the difference between 8 years and forever.
> Even if all you want is plain HTTP, when HTTPS breaks it's mostly the fault of the device, and the product is still experiencing planned obsolescence caused by the manufacturer.
"Mostly"?! That's quite a stretch! You're attributing direct and explicit actions taken by a specific subset of site operators as caused by the device manufacturer, which it is clearly not!
> And it would be stupid of a site to allow old versions of TLS, since that compromises the people depending on HTTPS; if there's going to be an insecure access method it should be HTTP.
This argument doesn't stand: if you're running the latest user-agent software in December 2020, access to pre-TLSv1.2 sites is likely already disabled (or at least it was supposed to have been disabled earlier in 2020 -- did they back out of their own plan all over again?). So how would a site allowing older versions of TLS enable the compromise you describe? It's simply not possible, because the user agent won't allow it!
To the contrary, if thousands of sites that don't actually need crypto hadn't been mistakenly pushed onto crypto over the past few years, we could have disabled pre-TLSv1.2 in newer browsers at a much faster rate, while still leaving TLSv1.0 support at the server level for older clients that don't have the newer crypto.
So, ironically, the HTTPS lobby actually shot themselves in the foot by making everyone adopt TLS without any actual need.
> Crypto isn't just for banking.
Yes, sadly, crypto works great for planned device obsolescence, too!
> Also it's not really a capitalism thing, capitalism is too busy trying to sell you an update every 2 years to care about the difference between 8 years and forever.
The evidence appears to show otherwise. Capitalism -- Google, Bing, Amazon -- doesn't care if anyone still uses TLSv1.0; they'll still serve everyone to make a sale. Ironically, it's the non-profit "socialists" -- Wikipedia, Mozilla, EFF -- who (inadvertently?) promote planned device obsolescence by intentionally deprecating all backwards compatibility on the internet.
> "Mostly"?! That's quite a stretch! You're attributing direct and explicit actions taken by a specific subset of site operators as caused by the device manufacturer, which it is clearly not!
The sites disabled those methods because they were no longer secure.
We know that TLS implementations lose security over time.
Anyone locking in a specific implementation and specific certs knows it will stop being fully secure after a while, even in a world where sites try their absolute hardest to be compatible.
So yes, I mostly blame the manufacturer. Sites could allow older ciphers, but having non-broken HTTPS requires the manufacturer to update things.
> if you're running the latest User-Agent software in December 2020, access to pre-TLSv1.2 sites is likely already disabled
It's not the worst plan in the world to wait for clients to forcibly disable old ciphers, but it means that even if all your site's visitors support a new version, they won't all be reliably using it.
Maybe now that browsers can enforce things better, and downgrade attack detection is better, it's safe enough to reenable older ciphers on some servers. But there were good reasons to disable them.
> actual need
All sites should have crypto. No sites "actually need" it if you're willing to work around it hard enough, but all sites should have it.
> The evidence appears to show otherwise. Capitalism -- Google, Bing, Amazon -- doesn't care if anyone still uses TLSv1.0; they'll still serve everyone to make a sale. Ironically, it's the non-profit "socialists" -- Wikipedia, Mozilla, EFF -- who (inadvertently?) promote planned device obsolescence by intentionally deprecating all backwards compatibility on the internet.
Oh, I thought you were saying capitalism causes obsolescence. But now I'm confused. When you said "this also proves the point about capitalism", what was "the point" being proven?
When you start with the premise that all sites MUST have HTTPS and MUST NOT support TLSv1.0, your arguments are simply unsound: the premise is incorrect, so the conclusions can't follow from it.
My point about capitalism is exactly that: capitalism -- Google, Bing, even Amazon (i.e., the companies that make the most money from the web) -- shows that HTTPS is entirely optional (Google Search and Bing both work over HTTP just fine), and that a server offering TLSv1.0 is just as secure as the TLSv1.2-only servers.
I can still use any device from the last 20+ years to access both Google Search and Bing. If you intentionally stop your blog from working on such older devices, shifting the blame to the device manufacturer is simply ludicrous! All my sites are HTTP-only, so anyone anywhere can access them, from any device, over any connection (some satellite WiFi links only allow HTTP traffic for free -- I win again), and with any browser. They are not in any way "insecure", either, whatever the newer browsers might tell you. I can reach as wide a variety of visitors as Google and Bing simply by not listening to what Mozilla, the EFF, and Google itself tell me about how to run my website.
According to capitalism, it's okay for banks to lose your money, and it's your problem when your identity gets stolen -- go spend a dozen hours getting things fixed. They won't use secure password practices on their sites, and they'll use fake 2-factor, because those incidents don't cost them enough to want to prevent them.
So when capitalism says a type of security isn't necessary, well, other than a nihilist "nothing is necessary" attitude, I don't believe them. And it doesn't prove that what a company does is "just as secure" as best practices.
> When you start with the premise that all sites MUST have HTTPS and MUST NOT support TLSv1.0, your arguments are simply unsound: the premise is incorrect
Whew, good thing I wasn't doing that.
> If you intentionally disable your blog from working on such older devices, shifting the blame to device manufacturer is simply ludicrous!
Let me try to be clear again, since you definitely misread me.
Disabling HTTP is on the site owner.
HTTPS breaking is the manufacturer's fault. The site can influence how it breaks, but no matter what, a very old implementation will be broken. At a certain point you can't even get certificates any more, because all the roots have expired.
> All my sites are HTTP-only
So you don't want your users to even be able to opt in to privacy or protection from hostile networks?
> Instead of hoping that the 0.8% will shrink over the next 4 years, Let's Encrypt should understand that the 0.8% are the sane, reasonable people who realize that their Android devices still work fine for their intended purpose and do not have to be mindlessly upgraded because of mass-media pressure.
It may also be just people who don't want to throw their "smart" fridge out after only 4 years...IoT obsolescence will make this worse in the coming years.
The main problem with old devices is the battery. I don't like changing my phone too often, so I use one until the battery is only good for a 10-minute call or a day without calls, or until one of my kids drops the phone and it breaks.
So I change it probably every three years, and sometimes my wife uses the new phone and I use her old one. (She uses the phone more than I do.)
One possibility is for manufacturers to fit bigger batteries, but that makes the phone more expensive and heavier, and it doesn't prevent accidents with the kids.
To a large extent this is mitigated by having removable batteries. From what I've heard, the Nexus 10 is perhaps a notable exception, where the available aftermarket batteries aren't very good; mine needs a new battery, but I don't really use it enough to justify one, plus replacing it is a somewhat more involved process than just popping it out.
I'm still using a Note 3 from 2013; it's on its third battery (though the second probably could have gone another year or more). I also recently upgraded it to Android 10 (LineageOS); the last officially supported Android version was 5. It's a small pain to back up and restore (a simple OS upgrade that preserves apps and data would be nicer), but it's a viable option for many old devices. That's enough choice to decide between limping along with outdated software (many random app crashes went away after upgrading), upgrading the OS yourself, or buying a new device.
Yeah, batteries can go for a surprisingly long time. Still, I'll never buy a phone without the ability to replace one. Apart from the benefit of replacing old batteries that only hold a fraction of their original charge, swappable batteries mean I can take a spare with me if I need to, and I can increase the size. I've got a 7500 mAh chonker in my Note 3, slightly over 2x the stock battery, and it lasts many hours even watching video.
Not sure why the author writes in such negative language about the fact that Android cannot be remotely updated. To me that sounds like an agenda to encourage the use of less privacy-conscious operating systems. If the OS can be remotely updated, nothing stops a bad actor from pushing a keylogger to a particular phone to bypass any end-to-end messenger the target is using, and so on. Remote update is a great option only if it is initiated from a trusted source.
Let's assume that you are correct. I am now holding a perfectly fine Samsung Note 3, purchased new in 2013, which has never had a broken screen. To which trusted source can I initiate a remote update?
LineageOS allows users to stay on a reasonably secure up-to-date android version. Unfortunately, the initial install process is not user friendly enough for the average person who owns one of these devices. That's the barrier to entry. But once installed, this wouldn't be a problem.
Honestly, Google should require as part of the Play Store certification that vendors ship their goddamn drivers in a quality that is acceptable to upstream Linux, or at least get them to staging quality.
The situation exists entirely because Google created it, they designed a HAL (hardware abstraction layer) with the intention of letting device manufacturers design devices while skirting GPLv2 requirements.
It's infuriating that device manufacturers refuse not only to provide a viable update scheme for their devices, but that they lock out any chance for a FOSS solution to the problem either.